Andriushchenko, M., Croce, F., Flammarion, N. and Hein, M. (2020). Square attack: A query-efficient black-box adversarial attack via random search, European Conference on Computer Vision, Glasgow, UK, pp. 484–501.
Athalye, A., Carlini, N. and Wagner, D. (2018). Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples, International Conference on Machine Learning, Stockholm, Sweden, pp. 274–283.
Badjie, B., Cecílio, J. and Casimiro, A. (2023). Denoising autoencoder-based defensive distillation as an adversarial robustness algorithm, CoRR: abs/2303.15901.
Bertolace, A., Gatsis, K. and Margellos, K. (2024). Robust optimization for adversarial learning with finite sample complexity guarantees, CoRR: abs/2403.15207.
Croce, F. and Hein, M. (2020a). Minimally distorted adversarial examples with a fast adaptive boundary attack, International Conference on Machine Learning, pp. 2196–2205, (virtual event).
Croce, F. and Hein, M. (2020b). Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks, International Conference on Machine Learning, pp. 2206–2216, (virtual event).
Ding, X., Zhang, X., Zhou, Y., Han, J., Ding, G. and Sun, J. (2022). Scaling up your kernels to 31x31: Revisiting large kernel design in CNNs, CoRR: abs/2203.06717.
Fawzi, A., Moosavi-Dezfooli, S., Frossard, P. and Soatto, S. (2018). Empirical study of the topology and geometry of deep networks, 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, pp. 3762–3770.
Ge, Z., Wang, X., Liu, H., Shang, F. and Liu, Y. (2023). Boosting adversarial transferability by achieving flat local maxima, Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, USA.
Goodfellow, I., Shlens, J. and Szegedy, C. (2015). Explaining and harnessing adversarial examples, International Conference on Learning Representations, San Diego, USA.
Guo, S., Li, X., Zhu, P. and Mu, Z. (2023). Ads-detector: An attention-based dual stream adversarial example detection method, Knowledge-Based Systems 265: 110388.
Huang, B., Chen, M., Wang, Y., Lu, J., Cheng, M. and Wang, W. (2023). Boosting accuracy and robustness of student models via adaptive adversarial distillation, IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Vancouver, Canada, pp. 24668–24677.
Jea, K.C. and Young, D.M. (1980). Generalized conjugate gradient acceleration of non-symmetrizable iterative methods, Linear Algebra and Its Applications 34: 159–194.
Jetley, S., Lord, N. and Torr, P. (2018). With friends like these, who needs adversaries?, 2018 Conference on Neural Information Processing Systems, Montréal, Canada, pp. 10772–10782.
Jin, G., Yi, X., Huang, W., Schewe, S. and Huang, X. (2022). Enhancing adversarial training with second-order statistics of weights, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, USA, pp. 15273–15283.
Jin, G., Yi, X., Wu, D., Mu, R. and Huang, X. (2023). Randomized adversarial training via Taylor expansion, IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Vancouver, Canada, pp. 16447–16457.
Li, L. and Spratling, M.W. (2023). Understanding and combating robust overfitting via input loss landscape analysis and regularization, Pattern Recognition 136: 109229.
Li, T., Wu, Y., Chen, S., Fang, K. and Huang, X. (2022). Subspace adversarial training, IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, USA, pp. 13399–13408.
Lu, Y., Ren, H., Chai, W., Velipasalar, S. and Li, Y. (2024). Time-aware and task-transferable adversarial attack for perception of autonomous vehicles, Pattern Recognition Letters 178: 145–152.
Lyu, C., Huang, K. and Liang, H. (2015). A unified gradient regularization family for adversarial examples, IEEE International Conference on Data Mining, Atlantic City, USA, pp. 301–309.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D. and Vladu, A. (2017). Towards deep learning models resistant to adversarial attacks, arXiv: 1706.06083.
Moosavi-Dezfooli, S.-M., Fawzi, A., Uesato, J. and Frossard, P. (2019). Robustness via curvature regularization, and vice versa, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, USA, pp. 9070–9078.
Pozdnyakov, V., Kovalenko, A., Makarov, I., Drobyshevskiy, M. and Lukyanov, K. (2024). Adversarial attacks and defenses in automated control systems: A comprehensive benchmark, arXiv: 2403.13502.
Ros, A.S. and Doshi-Velez, F. (2018). Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients, AAAI Conference on Artificial Intelligence, New Orleans, USA, pp. 1660–1669.
Shimonishi, H., Maki, I., Murase, T. and Murata, M. (2002). Dynamic fair bandwidth allocation for DiffServ classes, IEEE International Conference on Communications, ICC 2002, New York, USA, pp. 2348–2352.
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I. and Fergus, R. (2013). Intriguing properties of neural networks, arXiv: 1312.6199.
Tejankar, A., Sanjabi, M., Wang, Q., Wang, S., Firooz, H., Pirsiavash, H. and Tan, L. (2023). Defending against patch-based backdoor attacks on self-supervised learning, IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Vancouver, Canada, pp. 12239–12249.
Wang, H. and Wang, Y. (2022). Self-ensemble adversarial training for improved robustness, 10th International Conference on Learning Representations, ICLR 2022, (virtual event).
Wu, T., Luo, T. and Wunsch II, D.C. (2024). LRS: Enhancing adversarial transferability through Lipschitz regularized surrogate, Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, Canada, pp. 6135–6143.
Yang, X., Liu, C., Xu, L., Wang, Y., Dong, Y., Chen, N., Su, H. and Zhu, J. (2023). Towards effective adversarial textured 3D meshes on physical face recognition, IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Vancouver, Canada, pp. 4119–4128.
Yin, Z., Liu, M., Li, X., Yang, H., Xiao, L. and Zuo, W. (2023). MetaF2N: Blind image super-resolution by learning efficient model adaptation from faces, IEEE/CVF International Conference on Computer Vision, ICCV 2023, Paris, France, pp. 12987–12998.
Zhang, H., Yu, Y., Jiao, J., Xing, E., Ghaoui, L.E. and Jordan, M. (2019). Theoretically principled trade-off between robustness and accuracy, Proceedings of the 36th International Conference on Machine Learning, Long Beach, USA, pp. 7472–7482.
Zhang, J., Qian, W., Nie, R., Cao, J. and Xu, D. (2023). Generate adversarial examples by adaptive moment iterative fast gradient sign method, Applied Intelligence 53(1): 1101–1114.
Zhang, X. (2016). Empirical risk minimization, in C. Sammut and G.I. Webb (Eds), Encyclopedia of Machine Learning and Data Mining, Springer, Berlin/Heidelberg, pp. 392–393.
Zhao, K., Chen, X., Huang, W., Ding, L., Kong, X. and Zhang, F. (2024). Ensemble adversarial defense via integration of multiple dispersed low curvature models, CoRR: abs/2403.16405.