AI-Based Intelligent System for Personalized Examination Scheduling

Muddasar Naeem; Matteo Ciaschi; Antonio Coronato
2025

Abstract

Artificial Intelligence (AI) has revolutionized many areas, including the education sector. It has the potential to improve learning practices, innovate teaching, and accelerate the path towards personalized learning. This work introduces Reinforcement Learning (RL) methods to develop a personalized examination scheduling system at the university level. We apply two widely established RL algorithms, Q-Learning and Proximal Policy Optimization (PPO), to the task of personalized exam scheduling, and evaluate them on several key criteria: learning efficiency, the quality of the personalized educational path, adaptability to changes in student performance, scalability with increasing numbers of students and courses, and implementation complexity. Experimental results, based on a dataset of 391 students and 5700 exam records from a single degree program, demonstrate that, while Q-Learning offers simplicity and greater interpretability, PPO is superior at handling the complex and stochastic nature of students' learning trajectories: PPO achieved a 42.0% success rate in improving student scheduling compared to Q-Learning's 26.3%, with particularly strong performance on problematic students (41.3% vs. 18.0% improvement rate). The average delay reduction was 5.5 months per student with PPO versus 3.0 months with Q-Learning, highlighting the critical role of algorithmic design in shaping educational outcomes. This work contributes to the growing field of AI-based instructional support systems and offers practical guidance for the implementation of intelligent tutoring systems.
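As background for the tabular method compared in the abstract, the sketch below illustrates a standard one-step Q-Learning update in a toy exam-scheduling setting. Everything here (the exam list, the state encoding, the reward shape, and the hyperparameters ALPHA, GAMMA, EPSILON) is an illustrative assumption, not the paper's actual MDP formulation, which this record does not describe.

```python
import random
from collections import defaultdict

# Minimal sketch, assuming a toy MDP: a state summarizes a student's progress
# (e.g., exams passed and months elapsed) and an action is the next exam to
# attempt. All names and values below are hypothetical.
EXAMS = ("analysis_1", "programming", "physics", "databases")
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.2  # assumed learning rate, discount, exploration

Q = defaultdict(float)  # Q[(state, exam)] -> estimated return

def choose_exam(state, remaining):
    """Epsilon-greedy choice among exams the student has not yet passed."""
    if random.random() < EPSILON:
        return random.choice(remaining)
    return max(remaining, key=lambda exam: Q[(state, exam)])

def q_update(state, exam, reward, next_state, next_remaining):
    """One-step Q-Learning update: Q <- Q + alpha * (TD error)."""
    best_next = max((Q[(next_state, e)] for e in next_remaining), default=0.0)
    Q[(state, exam)] += ALPHA * (reward + GAMMA * best_next - Q[(state, exam)])
```

A plausible reward in this setting would penalize scheduling delay (e.g., a negative reward proportional to months behind plan), which would align the learned policy with the delay-reduction metric reported in the abstract; PPO would replace the table Q with a learned policy network trained on the same interactions.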
Area di Ricerca Bologna
Keywords: Artificial Intelligence; Reinforcement Learning; Q-Learning; Proximal Policy Optimization (PPO); Personalized Education; Examination Schedule
Files in this record:

File: technologies-13-00518.pdf (not available)
Type: Publisher's Version (PDF)
License: NOT PUBLIC - Private/restricted access
Size: 353.81 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/560161
Citations
  • PMC: ND
  • Scopus: ND
  • Web of Science: ND