
REAL-X—Robot Open-Ended Autonomous Learning Architecture: Building Truly End-to-End Sensorimotor Autonomous Learning Systems

Emilio Cartoni; Davide Montella; Gianluca Baldassarre
2023

Abstract

Open-ended learning is a core research field of developmental robotics and AI that aims to build learning machines and robots that can autonomously acquire knowledge and skills incrementally, as infants do. The first contribution of this work is to highlight the challenges posed by the previously proposed benchmark 'REAL competition', which fosters the development of truly open-ended learning robots. The benchmark involves a simulated camera-arm robot that: 1) in a first 'intrinsic phase' acquires sensorimotor competence by autonomously interacting with objects and 2) in a second 'extrinsic phase' is tested with tasks, unknown in the intrinsic phase, to measure the quality of the previously acquired knowledge. The benchmark requires the solution of multiple challenges usually tackled in isolation, in particular exploration, sparse rewards, object learning, generalization, task/goal self-generation, and autonomous skill learning. As a second contribution, the work presents the 'REAL-X' architecture. Different systems implementing the architecture can solve different versions of the benchmark by progressively relaxing its initial simplifications. The REAL-X systems are based on a planning approach that dynamically increases abstraction, and on intrinsic motivations to foster exploration. Some systems achieve a good performance level in very demanding conditions. Overall, the REAL benchmark is shown to represent a valuable tool for studying open-ended learning in its hardest form.
Istituto di Scienze e Tecnologie della Cognizione - ISTC

Keywords: autonomous robot; benchmark; competition; intrinsic motivation; open-ended learning; planning; simulation
Files in this item:

File: CartoniTrieschBaldassarre2023REALXRobotOpenEndedAutonomousLearningArchitecturesAchievingTrulyEndtoEndSensorimotorAutonomousLearningSystems.pdf

Open access

Description: E. Cartoni, D. Montella, J. Triesch and G. Baldassarre, "REAL-X—Robot Open-Ended Autonomous Learning Architecture: Building Truly End-to-End Sensorimotor Autonomous Learning Systems," in IEEE Transactions on Cognitive and Developmental Systems, vol. 15, no. 4, pp. 2014-2030, Dec. 2023, doi: 10.1109/TCDS.2023.3270081
Type: Publisher's version (PDF)
License: Creative Commons
Size: 4.82 MB
Format: Adobe PDF (View/Open)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14243/539587
Citations
  • PMC: n/a
  • Scopus: 1
  • Web of Science: 0