Towards Node Liability in Federated Learning: Computational Cost and Network Overhead

Francesco Malandrino; Carla Fabiana Chiasserini
2021

Abstract

Many machine learning (ML) techniques suffer from the drawback that their output (e.g., a classification decision) is not clearly and intuitively connected to their input (e.g., an image). To cope with this issue, several explainable ML techniques have been proposed to, e.g., identify which pixels of an input image had the strongest influence on its classification. However, in distributed scenarios, it is often more important to connect decisions with the information used for the model training and the nodes supplying such information. To this end, in this paper we focus on federated learning and present a new methodology, named node liability in federated learning (NL-FL), which makes it possible to identify the source of the training information that contributed most to a given decision. After discussing NL-FL's cost in terms of extra computation, storage, and network latency, we demonstrate its usefulness in an edge-based scenario. We find that NL-FL is able to swiftly identify misbehaving nodes and to exclude them from the training process, thereby improving learning accuracy.
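
The abstract above describes NL-FL only at a high level. As a purely illustrative intuition for what tracing a decision back to the node that supplied the most influential training data could look like, the following minimal Python sketch combines plain federated averaging with a naive leave-one-node-out attribution rule; the toy model, the function names, and the attribution rule are assumptions made here for illustration and do not reproduce the NL-FL methodology of the paper.

# Illustrative sketch only: toy federated averaging plus a naive
# leave-one-node-out attribution step. NOT the authors' NL-FL algorithm;
# all names and the attribution rule are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One node's local training: logistic-regression gradient steps."""
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def fed_avg(nodes, w, rounds=20):
    """Plain FedAvg over the given nodes, starting from weights w."""
    for _ in range(rounds):
        w = np.mean([local_update(w, X, y) for X, y in nodes], axis=0)
    return w

# Synthetic data: three honest nodes and one misbehaving node with flipped labels.
def make_node(n=200, flip=False):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    return X, (1 - y) if flip else y

nodes = [make_node(), make_node(), make_node(), make_node(flip=True)]
w0 = np.zeros(2)
w_full = fed_avg(nodes, w0)

# Attribution for one decision: retrain with each node left out and see
# whose removal changes the model's score on the query point the most.
x_query = np.array([1.0, 1.0])
score = lambda w: 1.0 / (1.0 + np.exp(-x_query @ w))
influence = [abs(score(w_full) - score(fed_avg(nodes[:i] + nodes[i + 1:], w0)))
             for i in range(len(nodes))]
culprit = int(np.argmax(influence))
print("node influences on this decision:", np.round(influence, 3))
print("most influential node:", culprit)

# Exclude the flagged node and retrain, mirroring (in spirit only) the
# exclusion step mentioned in the abstract.
w_clean = fed_avg(nodes[:culprit] + nodes[culprit + 1:], w0)
print("score before/after exclusion:", round(score(w_full), 3), round(score(w_clean), 3))

The leave-one-node-out rule used here is just the simplest conceivable attribution heuristic; the computation, storage, and network-latency costs analyzed in the paper are not modeled in this sketch.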
Istituto di Elettronica e di Ingegneria dell'Informazione e delle Telecomunicazioni - IEIIT
Keywords: federated learning
Files in this record:

File: commag_fl_accountable_final_v2+copy copy.pdf (open access)
Description: Towards Node Liability in Federated Learning: Computational Cost and Network Overhead
Type: Post-print document
License: Other type of license
Size: 567.06 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/396997
Citations
  • PMC: N/A
  • Scopus: 7
  • Web of Science (ISI): 3