Adversarial Learning Algorithm
An Adversarial Learning Algorithm is a machine learning algorithm that is designed to remain effective against an adversarial opponent.
- Context:
- It can (typically) handle Adversarial Examples, inputs that an attacker has intentionally designed to cause the model to make a mistake (see the sketch after this list).
- See: ANTIDOTE Algorithm, Evasion Attack, Poisoning Attack, GAN Training Algorithm.
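The sketch below (illustrative only, not drawn from the cited works) shows the kind of evasion attack such an algorithm must withstand: a fast-gradient-sign-style perturbation that flips the decision of a hand-set logistic-regression classifier. The weights, bias, input, and budget epsilon are all assumed values.

```python
import numpy as np

# Illustrative evasion attack on a hand-set logistic-regression classifier.
# All weights, the input, and the budget epsilon are assumed values.

w = np.array([2.0, -1.0, 0.5])   # hypothetical trained weights
b = -0.2                         # hypothetical bias

def predict_proba(x):
    """P(y=1 | x) under the linear model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.8, 0.1, 0.3])    # a clean input, classified as positive
y = 1                            # its true label

# Gradient of the logistic loss with respect to the input: (p - y) * w.
grad_x = (predict_proba(x) - y) * w

# Fast-gradient-sign step: move each feature in the direction that raises the loss.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print("clean score:", predict_proba(x))            # ~0.81 -> classified positive
print("adversarial score:", predict_proba(x_adv))  # ~0.43 -> classified negative
```

An adversarial learning algorithm that is robust in this sense would keep the perturbed input on the same side of the decision threshold as the clean one.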
References
2017
- (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Adversarial_machine_learning Retrieved:2017-12-9.
- Adversarial machine learning is a research field that lies at the intersection of machine learning and computer security. It aims to enable the safe adoption of machine learning techniques in adversarial settings like spam filtering, malware detection and biometric recognition.
The problem arises from the fact that machine learning techniques were originally designed for stationary environments in which the training and test data are assumed to be generated from the same (although possibly unknown) distribution. In the presence of intelligent and adaptive adversaries, however, this working hypothesis is likely to be violated to at least some degree (depending on the adversary). In fact, a malicious adversary can carefully manipulate the input data exploiting specific vulnerabilities of learning algorithms to compromise the whole system security.
Examples include: attacks in spam filtering, where spam messages are obfuscated through misspelling of bad words or insertion of good words;[1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] attacks in computer security, e.g., to obfuscate malware code within network packets [13] or mislead signature detection;[14] attacks in biometric recognition, where fake biometric traits may be exploited to impersonate a legitimate user (biometric spoofing) [15] or to compromise users’ template galleries that are adaptively updated over time.[16] [17]
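As a concrete illustration of the "good word" spam attack mentioned above, the following sketch (with made-up word probabilities, not taken from the cited papers) shows how padding a message with benign words flips the decision of a simple Naive Bayes spam filter.

```python
import numpy as np

# Illustrative "good word" evasion attack on a hand-set Naive Bayes spam filter.
# The word likelihoods and priors below are made-up values.

p_spam = {"viagra": 0.9, "winner": 0.8, "meeting": 0.1, "schedule": 0.1}    # P(word | spam)
p_ham  = {"viagra": 0.01, "winner": 0.05, "meeting": 0.6, "schedule": 0.5}  # P(word | ham)
prior_spam, prior_ham = 0.5, 0.5

def spam_log_odds(words):
    """log P(spam | words) - log P(ham | words) under Naive Bayes."""
    score = np.log(prior_spam) - np.log(prior_ham)
    for word in words:
        score += np.log(p_spam[word]) - np.log(p_ham[word])
    return score

original = ["viagra", "winner"]
print(spam_log_odds(original))                  # ~ +7.3 -> labelled spam

# The attacker pads the message with benign "good words" to tip the score.
evasive = original + ["meeting", "schedule"] * 3
print(spam_log_odds(evasive))                   # ~ -2.9 -> labelled ham
```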
2017
- https://blog.openai.com/adversarial-example-research/
- QUOTE: Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they’re like optical illusions for machines. In this post we’ll show how adversarial examples work across different mediums, and will discuss why securing systems against them can be difficult.
2016
- https://mascherari.press/introduction-to-adversarial-machine-learning/
- QUOTE: Adversarial machine learning is a research field that lies at the intersection of machine learning and computer security. All machine learning algorithms and methods are vulnerable to many kinds of threat models. … At the highest level, attacks on machine learning systems can be classified into one of two types: Evasion attacks and Poisoning attacks.
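To make the two attack types concrete, the following sketch (illustrative data, not from the cited post) contrasts them on a nearest-centroid classifier: the evasion attack perturbs a test input at prediction time, while the poisoning attack injects mislabelled training points.

```python
import numpy as np

# Illustrative contrast of evasion vs. poisoning on a nearest-centroid classifier.
# All data points below are made-up values.

def fit_centroids(X, y):
    """Per-class mean vectors of the training data."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def classify(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Clean training set: class 0 clusters near -1, class 1 near +1.
X = np.array([[-1.2], [-0.8], [-1.0], [0.8], [1.0], [1.2]])
y = np.array([0, 0, 0, 1, 1, 1])

clean_model = fit_centroids(X, y)
test_point = np.array([0.6])
print("clean model:", classify(clean_model, test_point))        # -> 1

# Evasion attack: perturb the *test* input so it crosses the boundary.
evasive_point = test_point - 0.7                                 # -> [-0.1]
print("evasion:", classify(clean_model, evasive_point))          # -> 0

# Poisoning attack: inject mislabelled *training* points to shift a centroid.
X_poisoned = np.vstack([X, [[3.0]], [[3.5]]])
y_poisoned = np.append(y, [0, 0])                                # wrong labels
poisoned_model = fit_centroids(X_poisoned, y_poisoned)
print("poisoned model:", classify(poisoned_model, test_point))   # -> 0
```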
2011
- (Huang et al., 2011) ⇒ Ling Huang, Anthony D. Joseph, Blaine Nelson, Benjamin I.P. Rubinstein, and J. D. Tygar. (2011). “Adversarial Machine Learning.” In: Proceedings of the 4th ACM workshop on Security and artificial intelligence. ISBN:978-1-4503-1003-1 doi:10.1145/2046684.2046692
- QUOTE: In this paper (expanded from an invited talk at AISEC 2010), we discuss an emerging field of study: adversarial machine learning --- the study of effective machine learning techniques against an adversarial opponent. In this paper, we: give a taxonomy for classifying attacks against online machine learning algorithms; discuss application-specific factors that limit an adversary's capabilities; introduce two models for modeling an adversary's capabilities; explore the limits of an adversary's knowledge about the algorithm, feature space, training, and input data; explore vulnerabilities in machine learning algorithms; discuss countermeasures against attacks; introduce the evasion challenge; and discuss privacy-preserving learning techniques.
2010
- (Laskov & Lippmann, 2010) ⇒ Pavel Laskov, and Richard Lippmann. (2010). “Machine Learning in Adversarial Environments.” In: Machine Learning, 81(2).
2005
- (Lowd & Meek, 2005) ⇒ Daniel Lowd, and Christopher Meek. (2005). “Adversarial Learning.” In: Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining (KDD-2005).
- ↑ N. Dalvi, P. Domingos, Mausam, S. Sanghai, and D. Verma. “Adversarial classification”. In Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 99–108, Seattle, 2004.
- ↑ D. Lowd and C. Meek. “Adversarial learning”. In Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 641–647, Chicago, IL, 2005. ACM Press.
- ↑ B. Biggio, I. Corona, G. Fumera, G. Giacinto, and F. Roli. “Bagging classifiers for fighting poisoning attacks in adversarial classification tasks”. In C. Sansone, J. Kittler, and F. Roli, editors, 10th International Workshop on Multiple Classifier Systems (MCS), volume 6713 of Lecture Notes in Computer Science, pages 350–359. Springer-Verlag, 2011.
- ↑ B. Biggio, G. Fumera, and F. Roli. “Adversarial pattern classification using multiple classifiers and randomisation”. In 12th Joint IAPR International Workshop on Structural and Syntactic Pattern Recognition (SSPR 2008), volume 5342 of Lecture Notes in Computer Science, pages 500–509, Orlando, Florida, USA, 2008. Springer-Verlag.
- ↑ B. Biggio, G. Fumera, and F. Roli. “Multiple classifier systems for robust classifier design in adversarial environments”. International Journal of Machine Learning and Cybernetics, 1(1):27–41, 2010.
- ↑ M. Bruckner, C. Kanzow, and T. Scheffer. “Static prediction games for adversarial learning problems”. J. Mach. Learn. Res., 13:2617–2654, 2012.
- ↑ M. Bruckner and T. Scheffer. “Nash equilibria of static prediction games”. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 171–179. 2009.
- ↑ M. Bruckner and T. Scheffer. “Stackelberg games for adversarial prediction problems". In: Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’11, pages 547–555, New York, NY, USA, 2011. ACM.
- ↑ A. Globerson and S. T. Roweis. “Nightmare at test time: robust learning by feature deletion”. In W. W. Cohen and A. Moore, editors, Proceedings of the 23rd International Conference on Machine Learning, volume 148, pages 353–360. ACM, 2006.
- ↑ A. Kolcz and C. H. Teo. “Feature weighting for improved classifier robustness”. In Sixth Conference on Email and Anti-Spam (CEAS), Mountain View, CA, USA, 2009.
- ↑ B. Nelson, M. Barreno, F. J. Chi, A. D. Joseph, B. I. P. Rubinstein, U. Saini, C. Sutton, J. D. Tygar, and K. Xia. “Exploiting machine learning to subvert your spam filter”. In LEET’08: Proceedings of the 1st Usenix Workshop on Large-Scale Exploits and Emergent Threats, pages 1–9, Berkeley, CA, USA, 2008. USENIX Association.
- ↑ G. L. Wittel and S. F. Wu. “On attacking statistical spam filters”. In First Conference on Email and Anti-Spam (CEAS), Microsoft Research Silicon Valley, Mountain View, California, 2004.
- ↑ P. Fogla, M. Sharif, R. Perdisci, O. Kolesnikov, and W. Lee. “Polymorphic blending attacks”. In USENIX-SS’06: Proceedings of the 15th USENIX Security Symposium, CA, USA, 2006. USENIX Association.
- ↑ J. Newsome, B. Karp, and D. Song. “Paragraph: Thwarting signature learning by training maliciously”. In Recent Advances in Intrusion Detection, LNCS, pages 81–105. Springer, 2006.
- ↑ R. N. Rodrigues, L. L. Ling, and V. Govindaraju. “Robustness of multimodal biometric fusion methods against spoof attacks". J. Vis. Lang. Comput., 20(3):169–179, 2009.
- ↑ B. Biggio, L. Didaci, G. Fumera, and F. Roli. “Poisoning attacks to compromise face templates.” In 6th IAPR Int’l Conf. on Biometrics (ICB 2013), pages 1–7, Madrid, Spain, 2013.
- ↑ M. Torkamani and D. Lowd. “Convex Adversarial Collective Classification”. In: Proceedings of the 30th International Conference on Machine Learning (pp. 642-650), Atlanta, GA, 2013.