Theoretical Foundations for Governing AI-Based Learning Outcome Assessment in High-Risk Educational Contexts

Manganello F.; Boccuzzi G.
2025

Abstract

The governance of artificial intelligence (AI) in education requires theoretical grounding that extends beyond system compliance toward outcome-focused accountability. The EU AI Act classifies AI-based learning outcome assessment (AIB-LOA) as a high-risk application (Annex III, point 3b), underscoring the stakes of algorithmic decision-making in student evaluation. Current regulatory frameworks such as the GDPR and ALTAI rely primarily on ex-ante, system-focused approaches. ALTAI applications in education concentrate on compliance and vulnerability analysis, often failing to integrate governance principles with established educational evaluation practices. While explainable AI research demonstrates methodological sophistication (e.g., LIME, SHAP), it often fails to deliver pedagogically meaningful transparency. This study develops the XAI-ED Consequential Assessment Framework (XAI-ED CAF) as a sector-specific, outcome-focused governance model for AIB-LOA. The framework reinterprets ALTAI’s seven requirements (human agency, robustness, privacy, transparency, fairness, societal well-being, and accountability) through three evaluation theories: Messick’s consequential validity, Kirkpatrick’s four-level model, and Stufflebeam’s CIPP framework. Through this theoretical integration, the study identifies indicators and potential evidence types for institutional self-assessment. The analysis indicates that trustworthy AI in education extends beyond technical transparency or legal compliance: governance must address student autonomy, pedagogical validity, interpretability, fairness, institutional culture, and accountability. The XAI-ED CAF reconfigures ALTAI as a pedagogically grounded accountability model, establishing structured evaluative criteria that align with both regulatory and educational standards. The framework contributes to AI governance in education by connecting regulatory obligations with pedagogical evaluation theory, supporting policymakers, institutions, and researchers in developing outcome-focused self-assessment practices. Future research should test and refine the framework through Delphi studies and institutional applications across various contexts.
Istituto per le Tecnologie Didattiche - ITD - Sede Genova
Keywords

accountability; AI governance; AI-based learning outcome assessment; ALTAI; artificial intelligence in education; educational evaluation; explainable AI (XAI); pedagogical validity; transparency
File available with this record:

information-16-00814.pdf
Type: Publisher's version (PDF)
Licence: Creative Commons
Size: 240.98 kB
Format: Adobe PDF
Access: open access
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/555975
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: 0