
A Validation Methodology for XAI Decision Support Systems Against Relational Domain Properties

De Angelis, Emanuele; De Angelis, Guglielmo; Mongelli, Maurizio; Proietti, Maurizio
2025

Abstract

The global adoption of artificial intelligence (AI) has increased dramatically in recent years, becoming commonplace in many fields. This pervasiveness has changed how AI is perceived and has intensified the debate on its societal consequences. As a result, a new class of requirements for AI-based solutions has emerged. Broadly speaking, requirements on “explainability” aim to provide a transparent representation of the (often opaque) reasoning that an AI-based solution applies when prompted. This work presents a methodology for validating a class of explainable AI (XAI) models, called deterministic rule-based models, which are used to express an explainable approximation of machine-learning classifiers. The validation methodology combines logical deduction with constraint-based reasoning in numerical domains, and it either succeeds or returns quantitative estimates of the invalid deviations found. This information allows us to assess the correctness of an XAI model or, in the case of deviations, to evaluate whether it can still be deemed acceptable. The validation methodology has been applied to a simulation-based study where the decision-making process copes with the spread of SARS-CoV-2 inside a railway station. The considered case study is a controlled but nontrivial example that shows the overall applicability of the methodology.
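The abstract's core idea can be pictured as follows: encode the deterministic rule-based model and a relational domain property as constraints over numerical variables, then search for input assignments that violate the property. The snippet below is a minimal, hypothetical sketch of that idea in Python using the Z3 SMT solver rather than the constraint logic programming machinery referenced by the keywords; the feature names (density, exposure), the thresholds, the rules, and the safety property are all invented for illustration and do not come from the paper.

```python
# Hypothetical sketch: checking a toy deterministic rule-based classifier
# against a relational domain property with an SMT solver (z3-solver).
# Rules, thresholds, and property are invented for illustration; the paper
# itself relies on constraint logic programming, not SMT.
from z3 import Real, Solver, And, Or, Not, Implies, sat

# Numerical input features of the (toy) XAI model.
density = Real('density')    # passengers per square metre
exposure = Real('exposure')  # minutes spent inside the station

# Deterministic rule-based model: the situation is flagged "unsafe"
# when any rule fires (a disjunction of conjunctions of linear constraints).
unsafe = Or(
    And(density >= 4, exposure >= 10),
    density >= 6,
)

# Bounds of the numerical domain and the relational property to validate:
# "whenever density >= 3.5 and exposure >= 20, the model must answer unsafe".
domain = And(density >= 0, density <= 10, exposure >= 0, exposure <= 120)
prop = Implies(And(density >= 3.5, exposure >= 20), unsafe)

# Validation = search for a counterexample to the property inside the domain.
s = Solver()
s.add(domain, Not(prop))
if s.check() == sat:
    m = s.model()
    # A deviation was found: report the witness so its extent can be judged.
    print("deviation found: density =", m[density], "exposure =", m[exposure])
else:
    print("property holds over the whole numerical domain")
```

On this toy instance the property is violated for densities in [3.5, 4) with exposure of at least 20 minutes, so the solver prints a witness from that band; in the paper's setting such deviations are quantified so that one can judge whether the rule-based approximation is still acceptable.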
Istituto di Analisi dei Sistemi ed Informatica ''Antonio Ruberti'' - IASI
Istituto di Elettronica e di Ingegneria dell'Informazione e delle Telecomunicazioni - IEIIT
constraint logic programming
decision support systems
explainable artificial intelligence
rule-based classifier
validation


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/557841
