Sparse Optimization in Adversarial Support Vector Machine (SVM)

Annabella Astorino
2021

Abstract

Supervised classification models, such as SVMs, aim to predict the class membership of incoming samples. Malicious inputs are designed to deceive a vulnerable classifier into making a wrong prediction. We focus our analysis on finding the smallest perturbations of samples that cause the classification process to fail. The novelty of our approach lies in the use of the zero pseudo-norm, which amounts to minimizing the number of attributes to be modified. This leads to an optimization problem whose objective function is a Difference of Convex functions (DC). We present the results of some preliminary experiments.
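As a rough illustrative sketch only, and not the formulation actually adopted in the paper, a sparsest adversarial perturbation \delta of a sample x \in \mathbb{R}^n against a trained linear SVM with weight vector w and bias b could be posed as

    \min_{\delta \in \mathbb{R}^n} \; \|\delta\|_0 \quad \text{s.t.} \quad y \left( w^\top (x + \delta) + b \right) \le 0,

where \|\delta\|_0 counts the nonzero components of \delta (i.e., the number of attributes modified) and y \in \{-1, +1\} is the true label of x, so the constraint forces the perturbed sample to be misclassified. Since \|\cdot\|_0 is discontinuous and nonconvex, one standard route to a DC model (again, not necessarily the paper's own construction) replaces it with the surrogate \|\delta\|_1 - \|\delta\|_{[k]}, where \|\delta\|_{[k]} is the sum of the k largest absolute components of \delta; this quantity is zero exactly when \delta has at most k nonzeros, and both terms are convex, yielding an objective that is a difference of convex functions, consistent with the DC structure mentioned in the abstract.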
Istituto di Calcolo e Reti ad Alte Prestazioni - ICAR
Keywords: Sparse Optimization; SVM; Adversarial machine learning


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/446536