Context-Aware Semiautonomous Control for Upper-Limb Prostheses
Tamantini C.
2026
Abstract
High rejection rates in upper-limb prosthetics stem from limited usability and excessive cognitive workload (CW), as traditional electromyographic (EMG) control strategies suffer from signal variability and require continuous user effort. Vision-based semiautonomous control strategies (SCS) have been proposed to combine user input with contextual information. However, these approaches often provide limited user involvement in the loop and are not broadly validated in prosthetic applications; their usability and subjective perception remain unexplored. To address these limitations, an innovative SCS is proposed that integrates EMG signals with visual perception to enhance prosthetic performance, usability, and user experience while reducing CW. The proposed SCS allows users to intentionally select a grasp via EMG signals, while computer vision refines wrist orientation and adapts the grasp to the detected object, even in complex scenarios involving multiple objects or conflicting inputs between EMG and visual context. To assess performance, the proposed SCS is compared with the traditional EMG-based control strategy. Ten able-bodied subjects participate in a comparative analysis. Results show that the SCS achieves a 100% success rate (versus 97.69% for the EMG-based strategy) while reducing task completion time by up to 52.66%. Usability increases (a 15-point improvement in the System Usability Scale score), and CW decreases, as evidenced by physiological and subjective measures.

| File | Size | Format | |
|---|---|---|---|
| 2026_AISY.pdf (open access) | 1.66 MB | Adobe PDF | View/Open |

Description: Cirelli Gianmarco, Stefanelli Enrica, Billardello Roberto, Tamantini Christian, Leonelli Silvia, Zollo Loredana, Cordella Luigi Pietro, Cordella Francesca. Adv. Intell. Syst. 2025; 000:e202501043. https://doi.org/10.1002/aisy.202501043
Type: Published version (PDF)
License: Creative Commons
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


