Emerging Challenges and Perspectives in Deep Learning Model Security: A Brief Survey

Luca Caviglione; Carmela Comito; Massimo Guarascio; Giuseppe Manco
2023

Abstract

The widespread adoption of Artificial Intelligence and Machine Learning tools opens the door to security issues that can arise when the underlying ML models are integrated into advanced services. Such models, in fact, can be compromised at both the learning and the deployment stage. In this work, we provide an overview of some serious security risks and concerns that can affect these models. Our focus is on the research challenges and defense opportunities of the underlying ML framework when it is deployed in specific contexts that can compromise its effectiveness. Specifically, the survey provides an overview of the following emerging topics: Model Watermarking, Information Hiding issues and defense opportunities, Adversarial Learning and model robustness, and Fairness-aware models.
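Of the topics listed above, Adversarial Learning is the easiest to make concrete. As a purely illustrative aside (the sketch below is not taken from the survey), the Fast Gradient Sign Method (FGSM) of Goodfellow et al. perturbs an input in the direction of the loss gradient so that a classifier's prediction flips while the change stays visually small. The PyTorch sketch assumes a differentiable classifier `model`, an input batch `x` with values in [0, 1], ground-truth labels `y`, and a perturbation budget `epsilon`; all of these names are hypothetical stand-ins.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Minimal FGSM sketch: shift x by epsilon in the sign of the
        # input gradient of the loss. `model` and `x` are assumptions,
        # not objects defined in the surveyed paper.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)      # loss w.r.t. true labels
        loss.backward()                              # gradient w.r.t. the input
        x_adv = x_adv + epsilon * x_adv.grad.sign()  # step that increases the loss
        return x_adv.clamp(0.0, 1.0).detach()        # keep values in a valid range

A robustness-oriented defense of the kind the survey discusses would then, for example, retrain the model on such perturbed inputs (adversarial training).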
Istituto di Calcolo e Reti ad Alte Prestazioni - ICAR
Istituto di Matematica Applicata e Tecnologie Informatiche - IMATI
Keywords

Neural Network fingerprinting
Neural Network watermarking
Data poisoning
Adversarial examples
Fairness
Information hiding
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/431479