Validation of Shared Intelligence Approach for Teleoperating Telepresence Robots Through Inaccurate Interfaces
Beraldo G.; Tonin L.; Cesta A.; Menegatti E.; Millán J.d.R.
2023
Abstract
Telepresence robots can enable people with special needs (e.g., people who cannot move) to remotely interact with others and with a distant environment. In this application, users communicate with the robot through alternative channels, such as brain-machine interfaces, which are less accurate than traditional input devices and allow only a limited set of commands to be sent to the robot. To overcome these limitations, shared intelligence approaches have emerged that fuse the user's inputs with intelligence on board the robot, which interprets the user's commands with respect to the environment and gives the robot deliberative ability in choosing the next action to perform. In this paper, we investigate how a shared intelligence system is affected by the kind of inaccurate user input interface. For this purpose, we compare a brain-machine interface with a more reactive keyboard endowed with the same percentage of noise. Overall, the results revealed comparable navigation performance in the two conditions except for accuracy (i.e., the number of target positions reached), indicating that the system provided similar assistance in both cases. However, differences between the two modalities emerge when the performance of the system is correlated with the navigation situation, suggesting that the user's inclination (toward direct control vs. robot autonomy) varies with the interface's responsiveness and the target to reach, and encouraging adaptation of the shared intelligence system to the user's real-time ability and the surrounding environment.
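The comparison described above rests on corrupting the keyboard input with the same error rate as the brain-machine interface. As an illustration only (the command set, noise rate, and uniform-replacement noise model are assumptions for this sketch, not the authors' implementation), injecting a fixed percentage of noise into a stream of discrete commands could look like:

```python
import random

# Hypothetical discrete command set for the telepresence robot.
COMMANDS = ["left", "right", "forward"]

def noisy_command(intended, noise_rate, rng=random):
    """With probability `noise_rate`, replace the intended command with a
    different command chosen uniformly at random; otherwise pass it through.

    This mimics an interface whose accuracy is (1 - noise_rate), so a
    reactive keyboard can be degraded to match a noisier interface.
    """
    if rng.random() < noise_rate:
        return rng.choice([c for c in COMMANDS if c != intended])
    return intended
```

With `noise_rate = 0.0` the keyboard behaves normally; raising it toward the measured error rate of the brain-machine interface makes the two input channels comparable in accuracy while keeping their difference in responsiveness.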
Description: Beraldo, G., Tonin, L., Cesta, A., Menegatti, E., Millán, J.d.R. (2023). Validation of Shared Intelligence Approach for Teleoperating Telepresence Robots Through Inaccurate Interfaces. In: Petrovic, I., Menegatti, E., Marković, I. (eds) Intelligent Autonomous Systems 17. IAS 2022. Lecture Notes in Networks and Systems, vol 577. Springer, Cham. https://doi.org/10.1007/978-3-031-22216-0_6

Type: Post-print document
License: Non-public - private/restricted access

| File | Size | Format |
|---|---|---|
| IAS2022.pdf (authorized users only; request a copy) | 2.97 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.