Theoretical Foundations for Governing AI-Based Learning Outcome Assessment in High-Risk Educational Contexts
Manganello F.; Boccuzzi G.
2025
Abstract
The governance of artificial intelligence (AI) in education requires theoretical grounding that extends beyond system compliance toward outcome-focused accountability. The EU AI Act classifies AI-based learning outcome assessment (AIB-LOA) as a high-risk application (Annex III, point 3b), underscoring the importance of algorithmic decision-making in student evaluation. Current regulatory frameworks such as GDPR and ALTAI focus primarily on ex-ante and system-focused approaches. ALTAI applications in education concentrate on compliance and vulnerability analysis while often failing to integrate governance principles with established educational evaluation practices. While explainable AI research demonstrates methodological sophistication (e.g., LIME, SHAP), it often fails to deliver pedagogically meaningful transparency. This study develops the XAI-ED Consequential Assessment Framework (XAI-ED CAF) as a sector-specific, outcome-focused governance model for AIB-LOA. The framework reinterprets ALTAI’s seven requirements (human agency, robustness, privacy, transparency, fairness, societal well-being, and accountability) through three evaluation theories: Messick’s consequential validity, Kirkpatrick’s four-level model, and Stufflebeam’s CIPP framework. Through this theoretical integration, the study identifies indicators and potential evidence types for institutional self-assessment. The analysis indicates that trustworthy AI in education extends beyond technical transparency or legal compliance. Governance must address student autonomy, pedagogical validity, interpretability, fairness, institutional culture, and accountability. The XAI-ED CAF reconfigures ALTAI as a pedagogically grounded accountability model, establishing structured evaluative criteria that align with both regulatory and educational standards. The framework contributes to AI governance in education by connecting regulatory obligations with pedagogical evaluation theory. 
It supports policymakers, institutions, and researchers in developing outcome-focused self-assessment practices. Future research should test and refine the framework through Delphi studies and institutional applications across various contexts.
| File | Size | Format | License |
|---|---|---|---|
| information-16-00814.pdf (published version, open access) | 240.98 kB | Adobe PDF | Creative Commons |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


