Adversarial Machine Learning
In this I2S seminar series lecture, Ambra Demontis will talk about how to attack machine learning systems as well as how to defend against those attacks.
With the increasing availability of data and computing power, data-driven AI and machine-learning algorithms have achieved unprecedented success in many different applications. Recent deep-learning algorithms used for perception tasks such as image recognition have even surpassed human performance on some specific datasets, such as the famous ImageNet. Despite their accuracy, skilled attackers can easily mislead these algorithms.
In this talk, Ambra will focus on two of the best-known attacks that can be perpetrated against machine learning systems: evasion and poisoning. Evasion attacks allow the attacker to have a specific sample misclassified by modifying that sample: for example, the attacker alters a malicious program so that a machine learning-based antivirus misclassifies it as legitimate. Poisoning attacks, instead, allow the attacker to have one or more samples misclassified without modifying those samples, by tampering with the data the system is trained on. Ambra will start the talk by briefly introducing these attacks and explaining how they can be performed when the attacker has full knowledge of the system he or she wants to attack (the target).
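To make the evasion setting concrete, the sketch below (not taken from the talk; the model, data, and parameters are all illustrative) perturbs one sample of a toy linear classifier along the sign of the model's weights, the same idea behind gradient-based evasion attacks such as FGSM.

```python
import numpy as np

# Minimal sketch of an evasion attack on a toy logistic-regression model.
# Everything here is illustrative; it is not the method presented in the talk.

rng = np.random.default_rng(0)

# Toy binary dataset: two Gaussian blobs (class 0 and class 1).
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.hstack([np.zeros(100), np.ones(100)])

# Train a simple logistic-regression model with gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))           # predicted probabilities
    grad_w, grad_b = X.T @ (p - y) / len(y), np.mean(p - y)
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

# Evasion: perturb one "malicious" sample (class 1) so its class-1 score
# drops, moving it against the gradient of the decision function.
x = X[-1].copy()
eps = 0.5                                        # perturbation budget
x_adv = x - eps * np.sign(w)                     # step that lowers the score

score = lambda v: 1 / (1 + np.exp(-(v @ w + b)))
print(f"original score: {score(x):.2f}, adversarial score: {score(x_adv):.2f}")
```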
Often, in practice, attackers do not have full knowledge of their target system: cybersecurity companies, for example, usually avoid disclosing details about their antivirus products. Interestingly, attackers can often craft effective attacks even without such knowledge. Ambra will explain how, and she will discuss some related findings.
She will then discuss some challenges and open problems regarding these attacks and the defenses against them.
Finally, Ambra will present SecML, a library developed by Pluribus One and the PRALab group that allows one to quickly evaluate the security of a machine learning system against the aforementioned attacks.
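As a rough illustration of the kind of analysis such a library automates, the plain-NumPy sketch below (it does not use SecML's actual API) computes a simple security evaluation curve: the accuracy of a fixed linear classifier as the attacker's L-infinity perturbation budget grows.

```python
import numpy as np

# Illustration only: a hand-rolled "security evaluation curve" for a toy
# linear model, i.e. accuracy as a function of the evasion budget eps.
# This is NOT SecML's API; it only shows what such an evaluation measures.

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.hstack([np.zeros(200), np.ones(200)])

# A fixed linear decision function f(x) = w.x + b (assumed already trained).
w, b = np.array([1.0, 1.0]), 0.0
predict = lambda X: (X @ w + b > 0).astype(float)

for eps in [0.0, 0.25, 0.5, 1.0, 2.0]:
    # Each sample is pushed toward the wrong class within an L-inf budget eps.
    direction = np.where(y == 1, 1.0, -1.0)[:, None]
    X_adv = X - eps * np.sign(w) * direction
    acc = np.mean(predict(X_adv) == y)
    print(f"eps={eps:<5} accuracy under attack: {acc:.2f}")
```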
During this talk, Ambra will consider different application examples, including object recognition in images and cybersecurity-related tasks such as malware detection.
Place and time: Redaksjonsrom (2nd floor), Media City Bergen, 14:15 to 15:45, Monday 18 November 2019
About the speaker: Ambra Demontis is a Postdoc at the University of Cagliari, Italy, and is currently a member of the PRALab group. She received her M.Sc. degree (Hons.) in Computer Science and her Ph.D. degree in Electronic Engineering and Computer Science from the University of Cagliari, Italy, in 2014 and 2018, respectively. Her research interests include secure machine learning, kernel methods, biometrics, and computer security. She has provided significant contributions to the design of secure machine learning systems in the presence of intelligent attackers. Notably, her studies have highlighted several relevant trade-offs between system security and complexity.