
Adversarial Robustness via Label-Smoothing

Abstract: We study Label-Smoothing as a means of improving the adversarial robustness of supervised deep-learning models. After establishing a thorough and unified framework, we propose several variations on this general method: adversarial, Boltzmann, and second-best Label-Smoothing, and we explain how further variants can be constructed. On various datasets (MNIST, CIFAR10, SVHN) and models (linear models, MLPs, LeNet, ResNet), we show that Label-Smoothing generally improves adversarial robustness against a variety of attacks (FGSM, BIM, DeepFool, Carlini-Wagner) by better accounting for the dataset geometry. The proposed Label-Smoothing methods have two main advantages: they can be implemented as a modified cross-entropy loss, so they require no changes to the network architecture and no additional training time, and they improve both standard and adversarial accuracy.
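The abstract notes that Label-Smoothing can be implemented as a modified cross-entropy loss. As a minimal sketch (not the paper's exact formulation), standard uniform Label-Smoothing replaces the one-hot target with a mixture of the one-hot vector and the uniform distribution; the adversarial, Boltzmann, and second-best variants proposed in the paper differ in how the smoothing mass is distributed over the non-true classes. The function name and NumPy formulation below are illustrative assumptions:

```python
import numpy as np

def smoothed_cross_entropy(logits, labels, alpha=0.1):
    """Cross-entropy against uniformly label-smoothed targets.

    Illustrative sketch of uniform Label-Smoothing: each one-hot target
    is replaced by (1 - alpha) * one_hot + alpha * uniform. The paper's
    variants change how the alpha mass is spread over non-true classes.
    """
    n, k = logits.shape
    # Numerically stable log-softmax
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Smoothed targets: alpha/k on every class, plus (1 - alpha) on the true class
    targets = np.full((n, k), alpha / k)
    targets[np.arange(n), labels] += 1.0 - alpha
    # Mean cross-entropy over the batch
    return -(targets * log_probs).sum(axis=1).mean()
```

With `alpha = 0` this reduces to the ordinary cross-entropy loss, which is why the method drops into existing training pipelines without architectural changes.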
Document type: Preprints, Working Papers

Cited literature: [21 references]
Contributor: Morgane Goibert
Submitted on: Monday, January 13, 2020 - 9:40:47 PM
Last modification on: Friday, July 31, 2020 - 10:44:11 AM
Document(s) archived on: Tuesday, April 14, 2020 - 7:29:54 PM


Files produced by the author(s)


  • HAL Id: hal-02437752, version 1


Morgane Goibert, Elvis Dohmatob. Adversarial Robustness via Label-Smoothing. 2020. ⟨hal-02437752⟩


