
A case study of the FRIA project: supporting human evaluation of an AI-based hiring system

Trasarti R.; Savella R.; Pratesi F.
2025

Abstract

The paper presents a case study of the FRIA project, which aims at researching and specifying a methodology to assess the impact of Artificial Intelligence (AI) systems on fundamental rights. We describe a case study of an AI-based hiring system used to test the methodology and to define the interactions with the final users. The research output is a prototype tool to support and automate the fundamental rights impact assessment of high-risk AI systems, which aims to comply with the requirements of the European Artificial Intelligence Act. The research methodology is interdisciplinary and based on a collaboration between legal professionals and computer scientists in the framework of the SoBigData Research Infrastructure (www.sobigdata.eu). It starts from the study of the existing legal and ethical frameworks concerning AI and human rights at the international and European levels, and from the translation of the identified rules and principles into a set of parameters that measure AI risk and provide a synthetic set of requirements for a semi-automated risk assessment model.
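The abstract describes translating legal rules and principles into parameters that feed a semi-automated risk assessment model. As a purely illustrative sketch of what such a parameterisation could look like, the snippet below aggregates weighted severity-times-likelihood scores per fundamental right; the parameter names, weights, and scoring rule are assumptions made for this example and do not reflect the FRIA tool's actual design.

```python
from dataclasses import dataclass

# Hypothetical example only: not the FRIA methodology, just one possible
# way to encode rights-related parameters for a semi-automated assessment.

@dataclass
class RiskParameter:
    name: str          # fundamental right or principle the parameter tracks
    weight: float      # relative importance assigned during the legal analysis
    severity: int      # assessed severity of a potential impact (0-4)
    likelihood: int    # assessed likelihood of that impact (0-4)

    def score(self) -> float:
        """Weighted severity x likelihood, normalised to [0, 1]."""
        return self.weight * (self.severity * self.likelihood) / 16.0


def overall_risk(parameters: list[RiskParameter]) -> float:
    """Aggregate parameter scores into a single indicative risk value."""
    total_weight = sum(p.weight for p in parameters) or 1.0
    return sum(p.score() for p in parameters) / total_weight


if __name__ == "__main__":
    # Invented parameters for an AI-based hiring scenario.
    hiring_system = [
        RiskParameter("non-discrimination", weight=0.4, severity=3, likelihood=3),
        RiskParameter("data protection", weight=0.35, severity=2, likelihood=2),
        RiskParameter("access to remedy", weight=0.25, severity=2, likelihood=1),
    ]
    print(f"Indicative risk score: {overall_risk(hiring_system):.2f}")
```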
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
ISBN: 978-3-031-97780-0, 978-3-031-97781-7
Keywords: AI Act, High-risk AI systems, Impact assessment, FRIA, Metrics methodology, ELS, Risk management, Law and Ethics
Files in this product:

Gatt, Caggiano, Gaeta, Troisi, Lo Conte, Trasarti, Savella, Di Cristo, Pratesi_A case study of the FRIA project - CAMERA READY.pdf
Description: A Case Study of the FRIA Project: Supporting Human Evaluation of an AI-Based Hiring System
Type: Post-print
License: NOT PUBLIC - Private/restricted access
Embargo: until 01/10/2026
Size: 771.48 kB
Format: Adobe PDF

Trasarti-Savella-Pratesi_LNCS 2025.pdf
Description: A Case Study of the FRIA Project: Supporting Human Evaluation of an AI-Based Hiring System
Type: Publisher's version (PDF)
License: NOT PUBLIC - Private/restricted access (authorized users only)
Size: 2.81 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/554941