Exploiting CNN layer activations to improve adversarial image classification
Carrara F; Falchi F; Amato G
2019
Abstract
Neural networks are now used in many sectors of daily life thanks to the efficient solutions they provide for diverse tasks. However, delegating decisions to artificial intelligence on behalf of humans inevitably exposes these tools to fraudulent attacks. In fact, adversarial examples, intentionally crafted to fool a neural network, can dangerously induce a misclassification while appearing innocuous to a human observer. On this basis, this paper focuses on the problem of image classification and proposes an analysis to better understand what happens inside a convolutional neural network (CNN) when it evaluates an adversarial example. In particular, the activations of the internal network layers are analyzed and exploited to design possible countermeasures that reduce CNN vulnerability. Experimental results confirm that layer activations can be used to detect adversarial inputs.
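As a minimal sketch of the kind of approach the abstract describes, the snippet below extracts internal layer activations from a CNN and uses them as features for a simple adversarial-input detector. The model choice (torchvision ResNet-18), the hooked layer names, and the logistic-regression detector are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch (not the paper's code): collect internal CNN layer
# activations with PyTorch forward hooks and feed them to a simple detector.
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

model = models.resnet18(weights=None)  # any pretrained classifier would do
model.eval()

# Activations captured by forward hooks, keyed by layer name.
activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Global-average-pool each feature map to a fixed-size vector per image.
        activations[name] = output.detach().mean(dim=(2, 3))
    return hook

hooked_layers = ["layer2", "layer3", "layer4"]  # illustrative choice of layers
for name in hooked_layers:
    getattr(model, name).register_forward_hook(make_hook(name))

def layer_features(images):
    """Run a batch through the CNN and return concatenated layer activations."""
    activations.clear()
    with torch.no_grad():
        model(images)
    return torch.cat([activations[n] for n in hooked_layers], dim=1)

# Toy usage: fit a detector on activations of clean vs. adversarial batches
# (random tensors stand in for real images and real adversarial examples).
clean = torch.randn(8, 3, 224, 224)
adversarial = torch.randn(8, 3, 224, 224)

X = torch.cat([layer_features(clean), layer_features(adversarial)]).numpy()
y = [0] * 8 + [1] * 8  # 0 = clean, 1 = adversarial

detector = LogisticRegression(max_iter=1000).fit(X, y)
print(detector.predict(X[:2]))
```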


