
Automatic prompt engineering: the case of requirements classification

Zadenoori M. A.; Ferrari A.
2025

Abstract

Context and motivation: Large language models (LLMs) are increasingly used to address requirements engineering (RE) tasks, including trace-link recovery, legal compliance, model generation, and others. Question/problem: Most existing studies rely on static, non-adaptive prompting strategies that do not fully harness the models’ capabilities. Specifically, these studies overlook the potential of automatic prompt engineering (APE), a technique that lets LLMs self-generate and refine prompts to improve task performance. Principal ideas/results: This research preview aims to study the effectiveness of APE techniques in LLM-powered RE tasks. As a preliminary step, we perform a benchmarking study in which we compare APE techniques with more traditional prompting solutions for the task of requirements classification. Our results show that, on average and with some exceptions, APE outperforms the baselines. We outline research avenues, including the evaluation and tailoring of APE for other RE tasks and the consideration of human-in-the-loop approaches. Contribution: To the best of our knowledge, this is the first study to introduce APE in RE, paving the way for a deeper exploration of LLMs’ potential in this field.
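The abstract describes APE as letting the model generate and refine its own prompts, with candidates compared against a baseline on a classification task. A minimal sketch of such a search loop for requirements classification, assuming a hypothetical `llm` callable (text in, text out) and a small labelled development set; all function names, prompt wording, and labels below are illustrative, not the paper's implementation:

```python
def classify(llm, prompt, requirement):
    """Label a single requirement (e.g. F = functional, NF = non-functional)."""
    return llm(f"{prompt}\n\nRequirement: {requirement}\nLabel:").strip()

def ape_select_prompt(llm, seed_prompt, dev_set, n_candidates=4):
    """Minimal APE loop: ask the LLM to rewrite a seed instruction into
    candidate prompts, score each candidate on a labelled dev set, and
    return the best-scoring prompt (ties keep the earliest candidate)."""
    candidates = [seed_prompt] + [
        llm(f"Rewrite this instruction to improve classification accuracy:\n{seed_prompt}")
        for _ in range(n_candidates)
    ]

    def accuracy(prompt):
        hits = sum(classify(llm, prompt, req) == label for req, label in dev_set)
        return hits / len(dev_set)

    # max() keeps the first candidate among equal scores, so the seed
    # prompt survives unless a rewrite strictly improves dev accuracy.
    return max(candidates, key=accuracy)
```

In a fuller APE setup this loop would typically be iterated, feeding the best prompt back in as the new seed; the single-pass version above only shows the generate-score-select step.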
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
ISBN: 9783031885303; 9783031885310
Keywords: Large language models; LLM; Natural language processing; NLP; Prompt engineering; Requirements engineering
Files in this product:
Ferrari_AutomaticPromptEngineering_2025.pdf (authorized users only)
Description: Automatic Prompt Engineering: The Case of Requirements Classification
Type: Publisher's version (PDF)
License: NOT PUBLIC - Private/restricted access
Size: 290.76 kB (Adobe PDF)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/558083
Citations
  • PMC: not available
  • Scopus: 2
  • Web of Science: 0