Large language models in the software supply chain: challenges and opportunities

Giacomo Benedetti; Luca Caviglione; Carmela Comito; Daniela Gallo; Alberto Falcone; Massimo Guarascio; Angelica Liguori; Giuseppe Manco; Francesco Sergio Pisani; Ettore Ritacco; Antonino Rullo
2025

Abstract

Large Language Models (LLMs) are increasingly integrated into the software development lifecycle, for instance, to generate code and documentation or to support debugging. As a result, LLMs are expected to become a relevant tool for improving the security posture of modern software supply chains, especially for detecting weaknesses or vulnerabilities, patching code automatically, and explaining runtime behaviors. While LLMs offer substantial productivity benefits, their widespread adoption introduces new security risks: adversaries can exploit LLM-generated code to introduce or propagate vulnerabilities, potentially compromising the integrity of critical systems. In this paper, we explore how software security can be enhanced by leveraging LLMs. Specifically, we outline key research directions focusing on source code attribution, malware variant generation for robust detection, and the use of LLMs to support secure coding practices. To showcase part of our ongoing research, we briefly assess the explainability capabilities of LLMs and their potential to bridge the gap between code and comprehension in modern development workflows.
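The abstract's closing point, using LLM explainability to bridge code and comprehension, lends itself to a concrete illustration. The following Python snippet is a minimal, purely hypothetical sketch of that workflow and is not code from the paper: it asks a chat-style LLM to explain a deliberately vulnerable function and flag its weaknesses. The OpenAI client usage is real, but the model name, prompt wording, and sample snippet are illustrative assumptions.

# Minimal sketch (an assumption, not the authors' method): ask an LLM to
# explain a code snippet and flag weaknesses, as the abstract describes.
# Requires the `openai` package and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Deliberately vulnerable sample: user input concatenated into a SQL query.
snippet = """
import sqlite3

def find_user(db, name):
    cur = db.cursor()
    cur.execute("SELECT * FROM users WHERE name = '" + name + "'")
    return cur.fetchall()
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a secure-code reviewer. Explain what the code "
                    "does, then list any weaknesses with their CWE IDs."},
        {"role": "user", "content": snippet},
    ],
)

print(response.choices[0].message.content)
# A capable model should describe the query logic and flag the string
# concatenation as SQL injection (CWE-89), recommending parameterized queries.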
Istituto di Calcolo e Reti ad Alte Prestazioni - ICAR
Keywords: Generative AI; Software Security; Large Language Models; Code analysis
Files in this record:
Ital-IA_2025_paper_87.pdf (authorized users only)
Type: Editorial Version (PDF)
License: NOT PUBLIC - Private/restricted access
Size: 1.08 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/559892