Threat of adversarial attacks on deep learning in computer vision: A survey, IEEE Access, vol.6, pp.14410-14430, 2018.
Towards evaluating the robustness of neural networks, 2017 IEEE Symposium on Security and Privacy (SP), pp.39-57, 2017.
Robustness of classifiers: from adversarial to random noise, Advances in Neural Information Processing Systems, pp.1632-1640, 2016.
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples, 2014.
Imagenet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems, pp.1097-1105, 2012.
Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale, 2016.
Towards deep learning models resistant to adversarial attacks, 2017.
Deepfool: a simple and accurate method to fool deep neural networks, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.2574-2582, 2016.
On the effectiveness of defensive distillation, 2016.
Distillation as a defense to adversarial perturbations against deep neural networks, 2016 IEEE Symposium on Security and Privacy (SP), pp.582-597, 2016.
Regularizing neural networks by penalizing confident output distributions, 2017.
Label smoothing and logit squeezing: A replacement for adversarial training, 2018.
Darts: Deceiving autonomous cars with toxic signs, 2018.
Rethinking the inception architecture for computer vision, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp.2818-2826, 2016.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks, 2013.
A boundary tilting perspective on the phenomenon of adversarial examples, 2016.
Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. The space of transferable adversarial examples, 2017.
Robustness may be at odds with accuracy, 2018.
Adversarial perturbations of deep neural networks, 2016.
Adversarial examples: Opportunities and challenges, 2018.
Improvement of generalization ability of deep CNN via implicit regularization in two-stage training process, IEEE Access, vol.6, pp.15844-15869, 2018.