
Communication-efficient Distributed Learning in V2X Networks: Parameter Selection and Quantization

Savazzi, Stefano;
2022

Abstract

In recent years, automotive systems have been integrating Federated Learning (FL) tools to provide enhanced driving functionalities, exploiting sensor data at connected vehicles to cooperatively learn assistance information for safety and maneuvering systems. Conventional FL policies require a central coordinator, namely a Parameter Server (PS), to orchestrate the learning process, which limits the scalability and robustness of the training platform. Consensus-driven FL methods, on the other hand, enable fully decentralized learning implementations where vehicles mutually share the Machine Learning (ML) model parameters, possibly via Vehicle-to-Everything (V2X) networking, at the expense of larger communication resource consumption than vanilla FL approaches. This paper proposes a communication-efficient consensus-driven FL design tailored to the training of Deep Neural Networks (DNNs) in vehicular networks. The vehicles taking part in the FL process independently select a predetermined percentage of model parameters to be quantized and exchanged in each training round. The proposed technique is validated on a cooperative sensing use case where vehicles rely on LiDAR point clouds to detect possible road objects/users in their surroundings via a DNN. The validation considers latency, accuracy, and communication-efficiency trade-offs. Experimental results highlight the impact of parameter selection and quantization on the communication overhead in varying settings.
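As a rough illustration of the mechanism summarized above (not the authors' exact algorithm), the Python sketch below shows one decentralized training round: each vehicle selects a fraction of its model parameters, uniformly quantizes them for transmission, and then moves its local model toward the dequantized parameters received from V2X neighbors via a consensus step. The magnitude-based selection rule, the quantizer details, the mixing factor epsilon, and all function names are assumptions made for the example.

import numpy as np

def select_and_quantize(params, fraction=0.1, num_bits=8):
    # Keep only the largest-magnitude fraction of parameters (assumed
    # selection rule) and uniformly quantize them for transmission.
    flat = params.ravel()
    k = max(1, int(fraction * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the k largest entries
    values = flat[idx]
    lo, hi = values.min(), values.max()
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((values - lo) / scale).astype(np.uint8 if num_bits <= 8 else np.uint16)
    return idx, q, lo, scale

def dequantize(idx, q, lo, scale, size):
    # Rebuild a sparse parameter vector from a received payload.
    flat = np.zeros(size)
    flat[idx] = lo + q.astype(np.float64) * scale
    return flat

def consensus_update(local, received, epsilon=0.5):
    # Consensus step: nudge the local model toward the average of the
    # (sparse, dequantized) models received from neighboring vehicles.
    if not received:
        return local
    neighbor_avg = np.mean(received, axis=0)
    mask = neighbor_avg != 0            # only the entries neighbors actually shared
    updated = local.copy()
    updated[mask] += epsilon * (neighbor_avg[mask] - local[mask])
    return updated

In this sketch, each round a vehicle would call select_and_quantize on its own model, broadcast the (indices, quantized values, offset, scale) payload over V2X, dequantize what it receives, and apply consensus_update before the next local training step.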
Istituto di Elettronica e di Ingegneria dell'Informazione e delle Telecomunicazioni - IEIIT
ISBN: 9781665435406
Keywords: Artificial Intelligence; Connected automated driving; Distributed processing; Federated Learning; V2X


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/461030
Citations: Scopus 2