
Robot’s Self-Trust as Precondition for being a Good Collaborator

Cantucci F.; Falcone R.; Castelfranchi C.
2021

Abstract

In human-robot cooperation scenarios, building a robot that can be considered a good collaborator means endowing it with the capability to evaluate not only the physical environment but, above all, the mental states and features of its human interlocutor, so that it can adapt its behavior every time she/he requires the robot's help. The quality of this kind of evaluation rests on the robot's capability to perform a meta-evaluation of its own predictive skills for building a model of the interlocutor and of her/his goals. The robot's capability to trust its own skills in interpreting the interlocutor and the context is a fundamental requirement for producing smart and effective decisions towards humans. In this work we propose a simulated experiment designed to test a cognitive architecture for trustworthy human-robot collaboration. The experiment demonstrates how the robot's capability to learn its own level of self-trust in its predictive abilities, i.e., in perceiving the user and building a model of her/him, allows it to establish a trustworthy collaboration and to maintain a high level of user satisfaction with its performance, even when these abilities progressively degrade.
Istituto di Scienze e Tecnologie della Cognizione - ISTC
Keywords: Human-Robot Interaction, Trust
Files in this record:
File: Robots_self_trust_as_precondition_for_be.pdf
Access: Open access
Type: Publisher's version (PDF)
License: Creative Commons
Size: 271.87 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/522111