
Introduction to special issue on trustworthy artificial intelligence (Part II)

Giannotti F.; Pratesi F.
2025

Abstract

Trustworthy Artificial Intelligence (TAI) systems have become a priority for the European Union and are of increasing importance worldwide. The European Commission convened a High-Level Expert Group, which delivered the Ethics Guidelines for Trustworthy AI to promote Trustworthy AI principles. TAI has three overarching components, which should be met throughout the system’s entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, ensuring adherence to ethical principles and values; and (3) it should be robust, from both a technical and a social perspective, since, even with good intentions, AI systems can cause unintentional harm. Each component in itself is necessary but not sufficient for the achievement of TAI. Ideally, all three elements work in harmony and overlap in their operation. If, in practice, tensions arise between these components, society should endeavor to align them. From a practical standpoint, these foundational principles translate into various TAI dimensions, encompassing robustness, reproducibility, safety, transparency, explainability, diversity, non-discrimination, fairness, auditing, independent oversight, privacy, data governance, sustainability, and accountability. This special issue was conceived to solicit surveys addressing at least one dimension of TAI, providing a comprehensive and reasoned overview of the current state of the art. Emphasis was placed on reviewing and comparing methodologies addressing specific trustworthiness dimensions or exploring the intricate interplay and tensions between different dimensions.
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
Trustworthy Artificial Intelligence; TAI; Foundational principles
Files in this record:
File: Giannotti_Introduction_Part2_2025.pdf
Description: Introduction to Special Issue on Trustworthy Artificial Intelligence (Part II)
Type: Publisher's version (PDF)
License: Other license type
Format: Adobe PDF
Size: 98.63 kB
Access: open access

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/560005
Citations
  • PMC: not available
  • Scopus: 2
  • Web of Science: 1