Software system testing assisted by large language models: an exploratory study
Bertolino A.;
2025
Abstract
Large language models (LLMs) based on the transformer architecture have revolutionized natural language processing (NLP), demonstrating remarkable capabilities in understanding and generating human-like text. In Software Engineering, LLMs have been applied to code generation, documentation, and report writing tasks to support developers and reduce manual work. In Software Testing, one of the cornerstones of Software Engineering, LLMs have been explored for generating test code and test inputs, automating the oracle process, and generating test scenarios. However, their application to high-level testing stages such as system testing, which requires deep knowledge of both the business domain and the technological stack, remains largely unexplored. This paper presents an exploratory study of how LLMs can support system test development. Given that LLM performance depends on the quality of the input data, the study focuses on how to query general-purpose LLMs to first obtain test scenarios and then derive test cases from them. The study evaluates two popular LLMs (GPT-4o and GPT-4o-mini), using a European project demonstrator as a benchmark. It compares two different prompt strategies and employs well-established prompt patterns, showing promising results as well as room for improvement in the application of LLMs to support system testing.
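As a rough illustration of the two-step querying approach the abstract describes (the prompts, model choice, and helper function below are hypothetical, not the paper's actual prompt strategies or patterns), a minimal sketch using the OpenAI Python client might look like:

```python
# Illustrative sketch only: first query a general-purpose LLM for test
# scenarios, then derive test cases from those scenarios. The prompt
# wording here is an assumption, not taken from the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send a single-turn prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Step 1: obtain high-level system test scenarios from a system description.
system_description = "..."  # business/requirements description of the system under test
scenarios = ask(
    "You are an experienced software tester. Given the following system "
    f"description, list the main system test scenarios:\n\n{system_description}"
)

# Step 2: derive concrete test cases from the scenarios obtained above.
test_cases = ask(
    "For each of the following test scenarios, derive concrete test cases "
    f"with preconditions, steps, and expected results:\n\n{scenarios}"
)
print(test_cases)
```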
| File | Description | Type | License | Size | Format |
|---|---|---|---|---|---|
| Bertolino_et al_IICTSS 2024.pdf | Software System Testing Assisted by Large Language Models: An Exploratory Study | Published version (PDF) | NOT PUBLIC - Private/restricted access (authorized users only) | 932.09 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


