NEvoFed: A Decentralized Approach to Federated NeuroEvolution of Heterogeneous Neural Networks

De Falco I.; Scafuri U.
2024

Abstract

In the past few years, Federated Learning (FL) has emerged as an effective approach for training neural networks (NNs) over a computing network while preserving data privacy. Most of the existing FL approaches require the user to define a priori the same structure for all the NNs running on the clients, along with an explicit aggregation procedure. This can be a limiting factor in cases where pre-defining such algorithmic details is difficult. To overcome these issues, we propose a novel approach to FL, which leverages Neuroevolution running on the clients. This implies that the NN structures may be different across clients, hence providing better adaptation to the local data. Furthermore, in our approach, the aggregation is implicitly accomplished on the client side by exploiting the information about the models used on the other clients, thus allowing the emergence of optimal NN architectures without needing an explicit aggregation. We test our approach on three datasets, showing that very compact NNs can be obtained without significant drops in performance compared to canonical FL. Moreover, we show that such compact structures allow for a step towards explainability, which is highly desirable in domains such as digital health, from which the tested datasets come.
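The abstract describes clients that evolve their own (possibly different) NN structures locally, with aggregation happening implicitly by letting each client exploit information about the models used on the other clients. The following is a minimal, purely illustrative sketch of that idea; all names, the toy fitness function, and the simulated per-client data biases are assumptions for illustration, not the paper's actual algorithm.

```python
import random

random.seed(0)

# Illustrative sketch (names and details are hypothetical, not from the paper):
# each client evolves its own NN architecture (here, a list of hidden-layer
# sizes) on local data, and periodically seeds its population with architecture
# descriptors received from peers instead of averaging weights as in canonical FL.

def local_fitness(arch, client_bias):
    # Stand-in for accuracy on a client's private data: rewards compact
    # architectures whose total size matches the (simulated) local complexity.
    return -abs(sum(arch) - client_bias) - 0.01 * len(arch)

def mutate(arch):
    arch = list(arch)
    i = random.randrange(len(arch))
    arch[i] = max(1, arch[i] + random.choice([-2, -1, 1, 2]))
    return arch

def evolve_client(pop, bias, shared, generations=20):
    for _ in range(generations):
        # Implicit "aggregation": offspring are drawn both from the local
        # population and from peers' best architectures.
        candidates = pop + [mutate(a) for a in pop] + [mutate(a) for a in shared]
        candidates.sort(key=lambda a: local_fitness(a, bias), reverse=True)
        pop = candidates[:len(pop)]
    return pop

# Three simulated clients, each with its own small random population.
clients = {c: [[random.randint(1, 16)] for _ in range(4)] for c in range(3)}
biases = {0: 6, 1: 10, 2: 14}  # simulated differences in local data

for _ in range(5):
    # Each round, clients exchange only their best architecture descriptors.
    shared = [clients[c][0] for c in clients]
    for c in clients:
        clients[c] = evolve_client(clients[c], biases[c], shared)

best = {c: clients[c][0] for c in clients}
print(best)
```

Note how the heterogeneous architectures emerge per client without any server-side aggregation step: only compact model descriptors circulate, which is consistent with the privacy and compactness goals stated in the abstract.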
Istituto di Calcolo e Reti ad Alte Prestazioni - ICAR
Keywords: federated learning; neuroevolution; supervised learning
File in this product:
NEvoFed-A-Decentralized-Approach-to-Federated-NeuroEvolution-of-Heterogeneous-Neural-NetworksGECCO-2024--Proceedings-of-the-2024-Genetic-and-Evolutionary-Computation-Conference.pdf (Editorial Version, Adobe PDF, 247.3 kB; non-public, restricted access for authorized users only)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/515388
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: not available