Sparse Optimization in Adversarial Support Vector Machine (SVM)
Annabella Astorino
2021
Abstract
Supervised classification models, such as SVM, aim to predict the class membership of incoming samples. Malicious inputs are designed to deceive a vulnerable classifier into making a wrong prediction. We focus our analysis on the search for the smallest perturbations of the samples that cause the classification process to fail. The novelty of our approach lies in the use of the zero-pseudo-norm, which amounts to minimizing the number of attributes to be modified. We arrive at an optimization problem whose objective function is a Difference of Convex (DC) functions. We present the results of some preliminary experiments.
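
To make the setting concrete, the sketch below shows one standard way a zero-pseudo-norm attack on a linear classifier can lead to a DC objective; it is an illustration under our own assumptions, not necessarily the exact formulation of the paper. The cardinality of the perturbation is replaced by the difference between the l1 norm and the sum of the k largest absolute components, both of which are convex, while the misclassification requirement is handled by a convex hinge-type penalty.

```latex
% Sketch (assumed, not the paper's exact model): attack on a linear classifier
% f(x) = w^\top x + b for a sample (x, y), y \in \{-1, +1\}.
% Exact zero-pseudo-norm problem: fewest modified attributes that flip the label.
\begin{align}
  \min_{\delta}\ \|\delta\|_0
  \quad \text{s.t.} \quad y\,\bigl(w^\top (x + \delta) + b\bigr) \le -\varepsilon .
\end{align}
% A standard DC surrogate: \|\delta\|_1 - \|\delta\|_{[k]} (l1 norm minus the sum
% of the k largest absolute components) vanishes exactly when \delta has at most
% k nonzero entries. Penalizing the constraint gives a Difference of Convex
% functions, since both \|\delta\|_1 + \lambda\max\{\cdot\} and \|\delta\|_{[k]}
% are convex:
\begin{align}
  \min_{\delta}\ \|\delta\|_1 - \|\delta\|_{[k]}
  + \lambda \max\Bigl\{0,\ y\,\bigl(w^\top (x + \delta) + b\bigr) + \varepsilon\Bigr\}.
\end{align}
```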
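The toy script below is likewise only an illustration of the attack model described in the abstract, not the DC algorithm of the paper: it trains a linear SVM with scikit-learn and then tries to flip the prediction of a sample by greedily modifying as few attributes as possible, each by a bounded amount. The function `greedy_l0_attack` and the parameter `step_cap` are hypothetical names introduced for this sketch.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Train a linear SVM on synthetic data (illustrative setup only).
X, y = make_classification(n_samples=200, n_features=20, random_state=0)
clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
w, b = clf.coef_.ravel(), clf.intercept_[0]

def greedy_l0_attack(x, label, step_cap=1.0):
    """Greedy zero-pseudo-norm-style heuristic on a linear SVM.

    Modify as few attributes as possible, each by at most `step_cap`,
    until the sign of the decision function w.x + b flips. With a small
    `step_cap` the attack may fail even after touching every attribute.
    """
    x_adv = x.copy()
    sign = 1.0 if label == 1 else -1.0           # margin sign of the true class
    order = np.argsort(-np.abs(w))               # attributes by decreasing |w_j|
    changed = []
    for j in order:
        margin = sign * (w @ x_adv + b)
        if margin <= 0:                          # prediction already flipped
            break
        if w[j] == 0.0:
            continue
        # Step needed to cross the decision boundary, capped per attribute.
        step = min((margin + 1e-3) / abs(w[j]), step_cap)
        x_adv[j] -= np.sign(sign * w[j]) * step
        changed.append(j)
    return x_adv, changed

x0, y0 = X[0], y[0]
x_adv, changed = greedy_l0_attack(x0, y0)
print("original prediction:   ", clf.predict([x0])[0])
print("adversarial prediction:", clf.predict([x_adv])[0])
print("attributes modified:   ", len(changed))
```

Ordering the attributes by weight magnitude is a simple heuristic for keeping the number of modified attributes small; the DC formulation in the paper addresses the same sparsity goal through the optimization model itself.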


