2018 TowardsDeepLearningModelsResist
- (Madry et al., 2018) ⇒ Aleksander Mądry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. (2018). “Towards Deep Learning Models Resistant to Adversarial Attacks.” In: Proceedings of the 6th International Conference on Learning Representations (ICLR-2018).
Subject Headings: Adversarial Attack.
Notes
Cited By
2018
- https://openreview.net/forum?id=rJzIBfZAb
- QUOTE: This paper presents new results on adversarial training, using the framework of robust optimization. Its minimax nature allows for principled methods of both training and attacking neural networks.
The reviewers were generally positive about its contributions, despite some concerns about 'overclaiming'. The AC recommends acceptance, and encourages the authors to also relate this work with the concurrent ICLR submission (/forum?id=Hk6kPgZA-) which addresses the problem using a similar approach.
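The "minimax nature" the meta-review refers to is the paper's saddle-point objective, which can be written in standard notation (a paraphrase, not a quote from the paper) as:

\[
\min_{\theta}\; \rho(\theta), \qquad \rho(\theta) = \mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\, \max_{\delta \in \mathcal{S}} \; L(\theta,\, x + \delta,\, y) \,\Big]
\]

where \(\mathcal{S}\) is the set of allowed perturbations (e.g., an \(\ell_\infty\)-ball of radius \(\epsilon\) around the input \(x\)). The inner maximization corresponds to attacking the network, while the outer minimization corresponds to training it to be robust.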
Quotes
Abstract
Recent work has demonstrated that deep neural networks are vulnerable to adversarial examples --- inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models. Code and pre-trained models are available at this https URL and this https URL.
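As an illustrative sketch of the approach described in the abstract (not the authors' released code), the inner maximization can be approximated with projected gradient descent (PGD) and the outer minimization with ordinary gradient training. The PyTorch code below assumes image inputs in [0, 1]; `eps`, `alpha`, and `steps` are placeholder hyperparameters, and batch-norm/dropout mode handling is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """l_inf PGD: random start in the eps-ball around x, then iterated
    signed-gradient ascent on the loss with projection back onto the ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend the loss, then project onto the eps-ball and valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.3):
    """One outer-minimization step: train on PGD adversarial examples."""
    x_adv = pgd_attack(model, x, y, eps=eps)   # inner maximization
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)    # outer minimization
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training exclusively on these PGD examples is what yields the "first-order adversary" robustness guarantee the abstract mentions: the defense is calibrated against the strongest attack that uses only gradient information.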
References
| | Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
|---|---|---|---|---|---|---|---|---|---|---|
| 2018 TowardsDeepLearningModelsResist | Aleksander Mądry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu | | 2018 | Towards Deep Learning Models Resistant to Adversarial Attacks | | Proceedings of the 6th International Conference on Learning Representations (ICLR-2018) | https://openreview.net/forum?id=rJzIBfZAb | | | 2018 |