A Brief Comparison Between White Box, Targeted Adversarial Attacks in Deep Neural Networks
DOI: https://doi.org/10.51408/1963-0091

Keywords: Adversarial Attacks, Robustness, Machine Learning, Deep Learning

Abstract
Today, neural networks are used in many domains, most of which require reliable and correct output. Adversarial attacks therefore make deep neural networks less trustworthy for deployment in safety-critical areas, and it is important to study potential attack methods in order to develop more robust networks. In this paper, we review four white-box, targeted adversarial attacks and compare them in terms of misclassification rate, targeted misclassification rate, attack duration, and imperceptibility. Our goal is to identify the attack(s) that are efficient, generate adversarial samples with small perturbations, and remain imperceptible to the human eye.
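To make the setting concrete, a minimal sketch of one white-box, targeted gradient attack (a one-step targeted variant in the spirit of FGSM) is shown below. This is an illustrative toy on a linear softmax classifier, not the paper's evaluated implementation; the model, weights, and `eps` budget here are all assumptions made for the example.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the logit vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def targeted_fgsm_step(x, W, b, target, eps):
    """One targeted FGSM-style step on a linear softmax classifier.

    White-box: uses the exact gradient of the cross-entropy loss for
    the *target* class with respect to the input, and moves x DOWN
    that gradient (toward the target) within an L-infinity budget eps.
    """
    p = softmax(W @ x + b)
    onehot = np.eye(W.shape[0])[target]
    # For softmax + cross-entropy, d(loss)/d(logits) = p - onehot,
    # so d(loss)/dx = W^T (p - onehot).
    grad = W.T @ (p - onehot)
    return x - eps * np.sign(grad)

# Toy example: random 3-class linear model and a random input.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))
b = rng.normal(size=3)
x = rng.normal(size=5)
target = 2
x_adv = targeted_fgsm_step(x, W, b, target, eps=0.01)
```

The sign of the gradient (rather than the raw gradient) makes the perturbation saturate the L-infinity budget in every coordinate, which is what ties the attack's imperceptibility directly to the single parameter `eps`.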
Copyright (c) 2022 Grigor V. Bezirganyan and Henrik T. Sergoyan
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.