CoSRec: a joint conversational search and recommendation dataset
Alessio M.; Muntean Cristina Ioana; Nardini F. M.; Perego R.
2025
Abstract
Conversational Information Access systems have experienced widespread diffusion thanks to the natural and effortless interactions they enable with the user. In particular, they represent an effective interaction interface for conversational search (CS) and conversational recommendation (CR) scenarios. Despite their commonalities, CR and CS systems are often devised, developed, and evaluated as isolated components. Integrating these two elements would allow for handling complex information access scenarios, such as exploring unfamiliar recommended product aspects, enabling richer dialogues, and improving user satisfaction. As of today, the scarce availability of integrated datasets (existing resources focus exclusively on either of the two tasks) limits the possibilities for evaluating by-design integrated CS and CR systems. To address this gap, we propose CoSRec, the first dataset for joint Conversational Search and Recommendation (CSR) evaluation. The CoSRec test set includes 20 high-quality conversations, with human-made annotations of conversation quality and manually crafted relevance judgments for products and documents. Additionally, we provide supplementary training data comprising partially annotated dialogues and raw conversations to support diverse learning paradigms. CoSRec is the first resource to model CR and CS tasks in a unified framework, enabling the training and evaluation of systems that must shift dynamically between answering queries and making suggestions.
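To make the kind of resource described above concrete, the sketch below shows one possible way to represent a joint CSR conversation record, with turns labeled by search or recommendation intent and per-conversation relevance judgments for products and documents. It is a minimal illustration only: all class and field names are hypothetical and do not reflect the actual CoSRec schema.

```python
# Purely illustrative sketch of a joint CSR record (hypothetical field names,
# not the actual CoSRec data format).
from dataclasses import dataclass, field
from typing import Literal, Optional

@dataclass
class Turn:
    speaker: Literal["user", "system"]
    text: str
    # Whether a system turn answers a query (search) or makes a suggestion (recommendation).
    intent: Literal["search", "recommendation", "none"] = "none"

@dataclass
class Conversation:
    conversation_id: str
    turns: list[Turn] = field(default_factory=list)
    # Manually crafted relevance judgments: item/document identifier -> graded relevance.
    product_qrels: dict[str, int] = field(default_factory=dict)
    document_qrels: dict[str, int] = field(default_factory=dict)
    # Human-made annotation of overall conversation quality (available for the test set).
    quality_label: Optional[int] = None
```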
| File | Size | Format | |
|---|---|---|---|
| CoSRec_Muntean et al_2025.pdf (open access). Description: CoSRec: a joint conversational search and recommendation dataset. Type: Publisher's version (PDF). License: Creative Commons. | 1.22 MB | Adobe PDF | View/Open |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


