Investigating Adjustable Social Autonomy in Human Robot Interaction

Cantucci F.; Falcone R.; Castelfranchi C.
2021

Abstract

Increasingly, Human-Robot Interaction (HRI) applications require the design of robotic systems whose decision process involves the capability to evaluate not only the physical environment, but especially the mental states and features of the human interlocutor, in order to adapt their social autonomy whenever humans require the robot's help. Robots will be truly cooperative and effective only when they can consider not just the goals or interests explicitly stated by humans, but also those that remain undeclared, and can provide help that goes beyond literal task execution. To improve the quality of this kind of smart help, a robot has to perform a meta-evaluation of its own predictive skills in order to build a model of the interlocutor and of her/his goals. The robot's capability to self-trust its skills in interpreting the interlocutor and the context is a fundamental requirement for producing smart and effective decisions towards humans. In this work we propose a simulated experiment designed to test a cognitive architecture for trustworthy human-robot collaboration. The experiment demonstrates how the robot's capability to learn its own level of self-trust in its predictive abilities, while perceiving the user and building a model of her/him, allows it to establish a trustworthy collaboration and to maintain a high level of user satisfaction with the robot's performance, even when these abilities progressively degrade.
Istituto di Scienze e Tecnologie della Cognizione - ISTC
Trustworthy HRI, Robot Autonomy Adaptation, Theory of Mind, Transparency, Cognitive Modelling
Files in this product:

File: Investigating Adjustable Social Autonomy in Human.pdf
Access: open access
Type: Editorial Version (PDF)
License: Creative Commons
Size: 812.36 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/522721
Citations
  • PubMed Central: ND
  • Scopus: 1
  • Web of Science: ND