N. Akhtar and A. Mian, Threat of adversarial attacks on deep learning in computer vision: A survey, IEEE Access, vol.6, pp.14410-14430, 2018.

N. Carlini and D. Wagner, Towards evaluating the robustness of neural networks, 2017 IEEE Symposium on Security and Privacy (SP), pp.39-57, 2017.

A. Fawzi, S.-M. Moosavi-Dezfooli, and P. Frossard, Robustness of classifiers: from adversarial to random noise, Advances in Neural Information Processing Systems, pp.1632-1640, 2016.

I. J. Goodfellow, J. Shlens, and C. Szegedy, Explaining and harnessing adversarial examples, 2014.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, ImageNet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems, pp.1097-1105, 2012.

A. Kurakin, I. Goodfellow, and S. Bengio, Adversarial machine learning at scale, 2016.

A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, Towards deep learning models resistant to adversarial attacks, 2017.

S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, DeepFool: a simple and accurate method to fool deep neural networks, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp.2574-2582, 2016.

N. Papernot and P. McDaniel, On the effectiveness of defensive distillation, 2016.

N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami, Distillation as a defense to adversarial perturbations against deep neural networks, 2016 IEEE Symposium on Security and Privacy (SP), pp.582-597, 2016.

G. Pereyra, G. Tucker, J. Chorowski, L. Kaiser, and G. Hinton, Regularizing neural networks by penalizing confident output distributions, 2017.

A. Shafahi, A. Ghiasi, F. Huang, and T. Goldstein, Label smoothing and logit squeezing: A replacement for adversarial training, 2018.

C. Sitawarin, A. N. Bhagoji, A. Mosenia, M. Chiang, and P. Mittal, DARTS: Deceiving autonomous cars with toxic signs, 2018.

C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, Rethinking the inception architecture for computer vision, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp.2818-2826, 2016.

C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, Intriguing properties of neural networks, 2013.

T. Tanay and L. Griffin, A boundary tilting perspective on the phenomenon of adversarial examples, 2016.

F. Tramèr, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel, The space of transferable adversarial examples, 2017.

D. Tsipras, S. Santurkar, L. Engstrom, A. Turner, and A. Madry, Robustness may be at odds with accuracy, 2018.

D. Warde-Farley, Adversarial perturbations of deep neural networks, 2016.

J. Zhang and X. Jiang, Adversarial examples: Opportunities and challenges, 2018.

Q. Zheng, M. Yang, J. Yang, Q. Zhang, and X. Zhang, Improvement of generalization ability of deep CNN via implicit regularization in two-stage training process, IEEE Access, vol.6, pp.15844-15869, 2018.