Interactive Robotic-Assisted Cognitive Training for Run-Time Personalization: a Preliminary Study
Riccardo De Benedictis, Claudia Di Napoli, Gabriella Cortellessa, Francesca Fracasso
2026
Abstract
Effective deployment of social assistive robots relies on their ability to deliver personalized support tailored to individual users' needs and expectations. Personalization depends on multiple factors, including the nature of the required assistance, the user's interaction preferences, and their cognitive, emotional, and mental states during engagement. Cognitive architectures offer a means to adapt interactions in real time based on user states and environmental conditions. Verbal interaction, however, is a crucial factor in human-robot interaction. When this multi-modal interaction with a robot is included, relying solely on manually defined inference rules to determine appropriate robot behaviors for every possible scenario and user profile is impractical, particularly when the assistance involves generating context-specific verbal suggestions. At the same time, relying only on machine-learning-based conversational systems introduces indeterminacy that may prove risky in assistive robotics applications. In this work, we introduce a framework that combines reactive reasoning with a conversational agent to enable personalized robotic support during cognitive training tasks. We demonstrate how this hybrid approach facilitates the dynamic adaptation of both dialogue and training content in response to observed user behaviour.
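The hybrid idea described in the abstract can be illustrated with a minimal sketch. All names below (UserState, select_action, phrase_action, the thresholds) are hypothetical illustrations, not the authors' implementation: a deterministic, auditable rule layer decides when and what to adapt, while the conversational layer is only asked how to phrase the rule-approved action. Here canned templates stand in for the conversational agent.

```python
from dataclasses import dataclass

@dataclass
class UserState:
    """Hypothetical summary of observed user behaviour during training."""
    errors: int           # mistakes in the current exercise
    response_time: float  # seconds per answer
    frustration: float    # 0..1 estimate from multimodal cues

def select_action(state: UserState) -> str:
    """Reactive layer: hand-written rules pick the robot's next action,
    keeping the safety-critical decision deterministic and inspectable."""
    if state.frustration > 0.7:
        return "encourage_and_pause"
    if state.errors >= 3:
        return "lower_difficulty"
    if state.errors == 0 and state.response_time < 2.0:
        return "raise_difficulty"
    return "continue"

def phrase_action(action: str) -> str:
    """Conversational layer: verbalizes only the action already approved
    by the rules; a template table stands in for a learned generator."""
    templates = {
        "encourage_and_pause": "You're doing great! Let's take a short break.",
        "lower_difficulty": "Let's try a slightly easier exercise.",
        "raise_difficulty": "Nice! Ready for something more challenging?",
        "continue": "Keep going, you're on track.",
    }
    return templates[action]

state = UserState(errors=3, response_time=5.0, frustration=0.4)
print(phrase_action(select_action(state)))  # -> "Let's try a slightly easier exercise."
```

Because the conversational component can only choose wording for an action the rule layer has already sanctioned, its indeterminacy cannot change which adaptation occurs, which is the risk-containment property the abstract motivates.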
File: 978-981-95-2382-5_34.pdf (ICSR26 paper), Editorial Version, Adobe PDF, 1.69 MB. License: non-public, private/restricted access (authorized users only; a copy may be requested).
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


