Towards an infrastructure for responsible research assessment data management

Mannocci A.; Candela L.; Manghi P.
2025

Abstract

Research evaluation is undergoing a profound transformation, and it is now widely recognised that the true value of a researcher's contribution extends far beyond the sheer volume of papers published in scientific outlets. Yet, despite the growing adoption of revised CV templates and assessment frameworks across many organisations in the research ecosystem, a critical gap remains: the lack of structured, interoperable metadata representing the full spectrum of scholarly contributions. Essential contributions, such as organising conferences, mentoring, teaching, serving on scientific boards, or engaging in collaborative projects, are often undocumented or scattered across ephemeral sources, e.g. emails, web pages, or printouts. Without a robust system for capturing and preserving this information, much of the valuable scholarly record risks being lost as digital content is deleted, websites are updated or decommissioned, or institutional memory fades. To address this challenge, we propose piloting a suite of tools and services that harness Scientific Knowledge Graphs (SKGs), Semantic Web technologies, and Artificial Intelligence. These tools will empower researchers undergoing evaluation to capture, persist, and reference their diverse contributions in a CV-ready, machine-readable, and compelling format, on demand and with minimal friction. AI can complement this picture by assisting evaluands in generating narrative sections and impact stories, drafting text, and retrieving supporting evidence online. Moreover, by aligning with SKG interoperability standards, this approach will enable the cross-institutional and transnational exchange of evaluation data, paving the way for a more streamlined, verifiable, and up-to-date research assessment process that reduces reliance on manual data entry, enhances transparency, and supports the principles of Open Science and responsible research evaluation. This research endeavour poses challenges that include dynamic data collection and collation, data provenance and quality, data certification and reliability, and the use of generative AI. It is not just a technical development; rather, it lays the foundations for a more inclusive, accurate, and future-proof evaluation ecosystem.
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
Research Evaluation, Research Assessment, Scientometrics, Open Science
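
To make the abstract's notion of "structured, interoperable metadata" more concrete, the following sketch (not part of the original abstract, and not the authors' data model) assumes that a single non-publication contribution, here a hypothetical conference-organisation role, is captured as a JSON-LD record using schema.org terms; all names, dates, identifiers, and URLs are placeholders.

import json

# Illustrative sketch only: the abstract does not prescribe a concrete schema.
# We assume schema.org vocabulary expressed as JSON-LD, so the record stays
# machine-readable and can be linked into a Scientific Knowledge Graph.
contribution = {
    "@context": "https://schema.org",
    "@type": "Event",
    "name": "Example Workshop on Responsible Assessment 2025",  # hypothetical event
    "startDate": "2025-06-12",
    "organizer": {
        "@type": "Person",
        "name": "Jane Doe",                                      # hypothetical evaluand
        "identifier": "https://orcid.org/0000-0000-0000-0000"    # placeholder ORCID iD
    },
    "subjectOf": {
        "@type": "CreativeWork",
        "name": "Archived call for papers",                      # evidence of the role
        "url": "https://example.org/cfp-2025"                    # placeholder evidence URL
    }
}

# Serialising to JSON-LD keeps the contribution CV-ready for humans and
# interoperable for Semantic Web tooling.
print(json.dumps(contribution, indent=2))

Under these assumptions, a CV-generation service could aggregate such records per ORCID iD and render them into narrative or tabular CV sections on demand.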
Files in this record:
ISTI_day_2025_Poster Mannocci et al.pdf (Abstract and Poster; open access; Creative Commons licence; 2.04 MB; Adobe PDF)
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/559248