Parallelism in GNN: Possibilities and Limits of Current Approaches

Luisa Carracciuolo; Diego Romano
2025

Abstract

Graph Neural Networks (GNNs) have emerged as powerful tools for learning on graph-structured data, demonstrating state-of-the-art performance in various applications such as social network analysis, biological network modeling, and recommendation systems. However, the computational complexity of GNNs poses significant challenges for scalability, particularly with large-scale graphs. Parallelism in GNNs addresses this issue by distributing computation across multiple processors, using techniques such as data parallelism and model parallelism. Data parallelism involves partitioning the graph data across different processors, while model parallelism splits the neural network's layers or operations. These parallelization strategies, along with optimizations such as asynchronous updates and efficient communication protocols, enable GNNs to handle larger graphs and improve training efficiency. This work examines the key computational kernels and identifies those where parallelism could significantly enhance the scalability and performance of GNNs, highlighting the algebraic aspects of each one. This is a first step toward a better comparison of recent advances and their implications for future research.
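
As a concrete illustration of the kind of algebraic kernel the abstract refers to (a sketch under stated assumptions, not taken from the paper itself, whose full text is access-restricted): a single GCN-style propagation step reduces to a sparse-dense matrix product (SpMM) followed by a dense GEMM, and a simple data-parallel scheme can partition the rows of the adjacency matrix (i.e. the graph's nodes) across workers. Function names, the choice of a plain ReLU, and the omission of degree normalization and inter-worker communication are illustrative assumptions.

    # Minimal sketch (assumptions noted above): GCN-like kernel H' = relu(A_hat @ H @ W)
    # and a naive row-partitioned "data-parallel" version of the same computation.
    import numpy as np
    import scipy.sparse as sp

    def gcn_layer(a_hat, h, w):
        """One propagation step: SpMM (A_hat @ H), then dense GEMM (.. @ W), then ReLU."""
        return np.maximum(a_hat @ h @ w, 0.0)

    def row_partitioned_layer(a_hat, h, w, num_workers=4):
        """Data-parallel sketch: each worker owns a contiguous block of node rows.

        In a real distributed setting, the rows of H held by other workers would be
        fetched via communication (a halo exchange); here they are locally available.
        """
        n = a_hat.shape[0]
        bounds = np.linspace(0, n, num_workers + 1, dtype=int)
        outputs = []
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            local_rows = a_hat[lo:hi]              # this worker's partition of the graph
            outputs.append(np.maximum(local_rows @ h @ w, 0.0))
        return np.vstack(outputs)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n, f_in, f_out = 100, 16, 8
        a = sp.random(n, n, density=0.05, random_state=0, format="csr")
        a_hat = a + sp.eye(n, format="csr")        # add self-loops (no normalization here)
        h = rng.standard_normal((n, f_in))
        w = rng.standard_normal((f_in, f_out))
        print(np.allclose(gcn_layer(a_hat, h, w),
                          row_partitioned_layer(a_hat, h, w)))  # True: partitions agree

Because each worker only needs its own rows of A_hat plus the feature rows of its neighbors, the SpMM structure is what makes graph (data) partitioning attractive; model parallelism would instead split the columns of W or assign whole layers to different devices.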
Istituto per i Polimeri, Compositi e Biomateriali - IPCB
ISBN: 9783031856990, 9783031857003
Keywords: parallel computing, deep learning, Graph Neural Networks, performance, neural networks

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/541561