
KNN-guided Adversarial Attacks

Massoli F.V.; Falchi F.; Amato G.
2020

Abstract

In the last decade, we have witnessed a renaissance of Deep Learning models. Nowadays, they are widely used in industrial as well as scientific fields, and, notably, these models have reached super-human performance on specific tasks such as image classification. Unfortunately, despite their great success, it has been shown that they are vulnerable to adversarial attacks: images to which a specific amount of noise, imperceptible to human eyes, has been added in order to lead the model to a wrong decision. Typically, these malicious images are forged with a misclassification goal. However, when considering the task of Face Recognition (FR), this principle might not be enough to fool the system. Indeed, in the context of FR, deep models are generally used merely as feature extractors, while the final recognition task is accomplished, for example, by similarity measurements. Thus, crafting adversarial examples to fool the classifier might not be sufficient to fool the overall FR pipeline. Starting from this observation, we proposed to use a k-Nearest Neighbour algorithm as guidance to craft adversarial attacks against an FR system. In our study, we showed how this kind of attack can be more threatening for an FR system than misclassification-based ones, considering both the targeted and untargeted attack strategies.
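As a rough, non-authoritative illustration of the idea summarised in the abstract, the PGD-style sketch below perturbs a face image so that a kNN search over a gallery of face embeddings is misled: away from the true identity (untargeted) or toward a chosen identity (targeted). All names and parameters (extractor, gallery_embeddings, gallery_labels, eps, alpha, steps, k) are illustrative assumptions, and the L-infinity budget and L2 embedding distance are choices made for this example, not necessarily those used in the paper.

```python
# Hypothetical sketch of a kNN-guided attack on a face-recognition feature
# extractor; it approximates the idea with a PGD-style loop and is not the
# authors' actual procedure.
import torch

def knn_guided_attack(extractor, image, gallery_embeddings, gallery_labels,
                      true_label, target_label=None,
                      eps=8 / 255, alpha=2 / 255, steps=40, k=5):
    """Perturb `image` (C x H x W, values in [0, 1]) so that a kNN search
    over `gallery_embeddings` no longer returns `true_label` (untargeted)
    or returns `target_label` (targeted). Distances are L2 in embedding space."""
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        emb = extractor(x_adv.unsqueeze(0)).squeeze(0)            # query embedding
        dists = torch.cdist(emb.unsqueeze(0), gallery_embeddings).squeeze(0)

        if target_label is not None:
            # Targeted: pull the embedding toward the k closest gallery
            # vectors of the target identity.
            target_d = dists[gallery_labels == target_label]
            loss = target_d.topk(k, largest=False).values.mean()
        else:
            # Untargeted: push the embedding away from the k closest
            # gallery vectors of the true identity.
            true_d = dists[gallery_labels == true_label]
            loss = -true_d.topk(k, largest=False).values.mean()

        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                   # descend on the loss
            x_adv = image + (x_adv - image).clamp(-eps, eps)      # L_inf budget
            x_adv = x_adv.clamp(0, 1)                             # valid pixel range
        x_adv = x_adv.detach()
    return x_adv
```

Under these assumptions, an untargeted attack succeeds as soon as the majority of the k nearest gallery vectors no longer carry the true label, whereas the misclassification-based attacks discussed in the abstract only target the classifier head.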
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
k-nearest neighbour
adversarial machine learning
deep learning
adversarial examples
machine learning
Files in this record:
prod_445014-doc_160027.pdf (Open access, Published version (PDF), Adobe PDF, 1.13 MB)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/423473