References
- Akhtar N, Mian A, Kardan N, et al., Advances in adversarial attacks and defenses in computer vision: A survey, IEEE Access, 9, 2021, 155161-155196.
- Bai Y, Zeng Y, Jiang Y, et al., Improving adversarial robustness via channel-wise activation suppressing, arXiv preprint arXiv:2103.08307, 2021.
- Byun J, Cho S, Kwon M J, et al., Improving the transferability of targeted adversarial examples through object-based diverse input, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 15244-15253.
- Carlini N, Wagner D., Towards evaluating the robustness of neural networks, 2017 IEEE Symposium on Security and Privacy (SP), 2017, 39-57.
- Chen Z, Li B, Xu J, et al., Towards practical certifiable patch defense with vision transformer, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 15148-15158.
- Dai T, Feng Y, Wu D, et al., DIPDefend: Deep image prior driven defense against adversarial examples, Proceedings of the 28th ACM International Conference on Multimedia, 2020, 1404-1412.
- Das N, Shanbhogue M, Chen S T, et al., Keeping the bad guys out: Protecting and vaccinating deep learning with JPEG compression, arXiv preprint arXiv:1705.02900, 2017.
- Deng Z, Yang X, Xu S, et al., LiBRe: A practical Bayesian approach to adversarial detection, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, 972-982.
- Dong J, Moosavi-Dezfooli S M, Lai J, et al., The enemy of my enemy is my friend: Exploring inverse adversaries for improving adversarial training, arXiv preprint arXiv:2211.00525, 2022.
- Goodfellow I J, Shlens J, Szegedy C., Explaining and harnessing adversarial examples, arXiv preprint arXiv:1412.6572, 2014.
- He K, Zhang X, Ren S, et al., Deep residual learning for image recognition, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, 770-778.
- He K, Zhang X, Ren S, et al., Identity mappings in deep residual networks, European Conference on Computer Vision, 2016, 630-645.
- Hu S, Liu X, Zhang Y, et al., Protecting facial privacy: generating adversarial identity masks via style-robust makeup transfer, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 15014-15023.
- Hu Z, Huang S, Zhu X, et al., Adversarial texture for fooling person detectors in the physical world, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 13307-13316.
- Kurakin A, Goodfellow I, Bengio S., Adversarial machine learning at scale, arXiv preprint arXiv:1611.01236, 2016.
- Li T, Wu Y, Chen S, et al., Subspace adversarial training, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 13409-13418.
- Madry A, Makelov A, Schmidt L, et al., Towards deep learning models resistant to adversarial attacks, arXiv preprint arXiv:1706.06083, 2017.
- Moosavi-Dezfooli S M, Fawzi A, Frossard P., DeepFool: A simple and accurate method to fool deep neural networks, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, 2574-2582.
- Papernot N, McDaniel P, Wu X, et al., Distillation as a defense to adversarial perturbations against deep neural networks, 2016 IEEE Symposium on Security and Privacy (SP), 2016, 582-597.
- Qin Y, Frosst N, Sabour S, et al., Detecting and diagnosing adversarial images with class-conditional capsule reconstructions, arXiv preprint arXiv:1907.02957, 2019.
- Russakovsky O, Deng J, Su H, et al., ImageNet large scale visual recognition challenge, International Journal of Computer Vision, 115, 3, 2015, 211-252.
- Sato T, Shen J, Wang N, et al., Dirty road can attack: Security of deep learning based automated lane centering under physical-world attack, 30th USENIX Security Symposium (USENIX Security 21), 2021, 3309-3326.
- Suryanto N, Kim Y, Kang H, et al., DTA: Physical camouflage attacks using differentiable transformation network, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 15305-15314.
- Szegedy C, Vanhoucke V, Ioffe S, et al., Rethinking the inception architecture for computer vision, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, 2818-2826.
- Szegedy C, Zaremba W, Sutskever I, et al., Intriguing properties of neural networks, arXiv preprint arXiv:1312.6199, 2013.
- Tramèr F, Kurakin A, Papernot N, et al., Ensemble adversarial training: Attacks and defenses, arXiv preprint arXiv:1705.07204, 2017.
- Wang J, Liu A, Yin Z, et al., Dual attention suppression attack: Generate adversarial camouflage in physical world, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, 8565-8574.
- Xie C, Wu Y, van der Maaten L, et al., Feature denoising for improving adversarial robustness, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, 501-509.
- Yan H, Zhang J, Niu G, et al., CIFS: Improving adversarial robustness of CNNs via channel-wise importance-based feature selection, International Conference on Machine Learning, PMLR, 2021, 11693-11703.
- Yuan J, He Z., Ensemble generative cleaning with feedback loops for defending adversarial attacks, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, 581-590.
- Zhang T, Zhu Z., Interpreting adversarially trained convolutional neural networks, International Conference on Machine Learning, PMLR, 2019, 7502-7511.
- Zhong Y, Liu X, Zhai D, et al., Shadows can be dangerous: Stealthy and effective physical-world adversarial attack by natural phenomenon, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 15345-15354.