
HARMONIZING HUMAN AND ALGORITHMIC ASSESSMENT: LEGAL REFLECTIONS ON THE RIGHT TO EXPLAINABILITY IN EDUCATION

Giannangelo Boccuzzi; Flavio Manganello
2025

Abstract

The increasing use of artificial intelligence in educational contexts has introduced new legal and ethical challenges related to the transparency of automated assessments. Central to this discussion is the concept of "explainability," which has emerged as the right to understand the logical processes underpinning algorithmic decisions that directly impact students. This right shares significant analogies with the established legal principle of justifying educational assessments, itself part of the broader requirement for motivation in administrative decisions. Both rights inherently demand transparency, accountability, and clarity in the decision-making process, enabling students to comprehend and contest decisions affecting them. This paper examines the legal connection between the traditional right of students to receive explanations for their grades—as an expression of administrative transparency—and the emerging right to AI explainability in automated decision-making scenarios. It identifies points of convergence, such as the safeguarding of transparency, accountability, and due process. However, it also highlights notable divergences, primarily linked to the differing nature of the decision-maker: human educators, who exercise discretionary judgment informed by pedagogical experience, versus AI-based systems, which rely upon intricate, often opaque algorithmic logic and probabilistic methodologies inherently resistant to straightforward interpretability. Further, the paper critically explores the legal consequences of recognizing either substantial equivalence or fundamental difference between human and algorithmic evaluation processes. If substantial equivalence is acknowledged, existing legal guarantees—such as obligations of motivation, transparency, and accountability—can readily extend to AI-based decision-making without extensive normative reforms, reinforcing student protection and facilitating legal recourse.
Under this scenario, judicial review could effectively restore balance in cases of unjustified or unfair assessments. Conversely, recognizing a fundamental difference would necessitate tailored legislative interventions, distinct transparency standards specific to algorithmic decision-making, and a reallocation of legal responsibility towards algorithm developers and deploying institutions rather than individual educators. In this scenario, judicial oversight mechanisms would require innovative redesign, giving rise to novel forms of judicial review equipped to evaluate algorithmic complexity and adjudicate disputes involving AI-based reasoning. The paper highlights the critical need for legal scholarship to address these emerging complexities, maintaining coherence within educational and administrative law in response to evolving AI applications.
Istituto per le Tecnologie Didattiche - ITD - Sede Genova
ISBN: 978-84-09-74218-9
Keywords: AI, Education, Law, Explainability, Rights, Assessment
Files in this product:
EDULEARN25_2446.docx — Editorial Version (PDF); License: NOT PUBLIC - private/restricted access (authorized users only); Size: 33.66 kB; Format: Microsoft Word XML

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/558546