
Trading-off Privacy, Utility, and Explainability in Deep Learning-based Image Data Analysis

Paolo Mori; Andrea Saracino
2024

Abstract

This paper proposes a novel approach for multi-party collaborative data analysis problems in which analysis accuracy is required alongside both privacy of the shared data and explainability of the results. The proposed approach aims at trading off data privacy, decision explainability, and data utility by analytically relating these three measures, evaluating how they impact one another, and proposing a methodology to find the best possible trade-off among them. In particular, given a set of requirements from the participants in a collaborative analysis problem, we propose a method to properly tune the parameters of the privacy-preserving mechanisms and explainability techniques adopted by all participants, obtaining the best trade-off. The paper focuses on deep learning-based image data analysis problems, though the approach can be generalized to other data types. The $(\epsilon , \delta )$-Differential Privacy and autoencoder-based privacy-preserving techniques are adopted to preserve data privacy, while the SmoothGrad mechanism is used to provide decision explainability. The proposed methodology has been validated through experiments on three multi-class deep learning classifiers and three well-known image datasets: MNIST, FER, and CIFAR-10.
Istituto di informatica e telematica - IIT
Artificial intelligence
Data analysis
Data privacy
Explainable AI
Loss measurement
Privacy
Privacy-preserving data analysis
Trustworthy AI
Files in this record:

File: Trading-off_Privacy_Utility_and_Explainability_in_Deep_Learning-based_Image_Data_Analysis.pdf
Type: Publisher's version (PDF)
License: NOT PUBLIC - Private/restricted access
Size: 6.62 MB
Format: Adobe PDF
Availability: Not available (copy on request)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/519217
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: not available