Deep Learning Methods with Iterative-Boosting for performing Human Action Recognition in Manufacturing Scenarios

Laura Romeo; Cosimo Patruno; Grazia Cicirelli; Tiziana D'Orazio
In press

Abstract

The physical and cognitive behavior of humans working in manufacturing environments is an issue of growing importance with the advent of Industry 5.0. In this context, monitoring operators while they perform specific tasks is fundamental, particularly when they work alongside robots. Indeed, safeguarding the well-being of human workers in industrial scenarios can significantly help reduce risky and harmful situations. In this work, the HA4M dataset is used to assess human action recognition in a manufacturing environment where operators perform assembly actions. More specifically, the study focuses on training a deep learning architecture, namely MS-TCN++, on RGB data from a single user, allowing the action recognition model to adapt to the selected subject. The RGB data are processed with the Inflated 3D (I3D) model, and the resulting features are arranged in both matrix-wise and array-wise form. Furthermore, a 10-stage iterative-boosting technique has been developed, in which the model is retrained at each stage with emphasis on previously misclassified samples. The experiments show that the iterative method enables faster and more reliable training of the network, reaching an Accuracy, Precision, Recall, and F-score of 70.39%, 74.24%, 68.70%, and 65.73%, respectively, when training with array-wise features. These results demonstrate the effectiveness of the proposed system and lay the foundation for further studies on detecting operators' actions in the challenging context of Human-Robot Collaboration.
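The core idea described above, a 10-stage loop that retrains the model while emphasizing misclassified samples, can be illustrated with a minimal sketch. This is not the paper's implementation: the small linear classifier stands in for MS-TCN++, the random tensors stand in for the I3D features extracted from HA4M, and the feature dimension (1024), the number of classes (12), the single pass per stage, and the doubling of sample weights for misclassified examples are all assumptions made only for illustration.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

NUM_CLASSES = 12    # assumption: number of assembly action classes
FEAT_DIM = 1024     # assumption: size of one array-wise I3D descriptor
NUM_STAGES = 10     # 10-stage iterative boosting, as in the abstract

# Placeholder classifier standing in for MS-TCN++.
model = nn.Sequential(nn.Linear(FEAT_DIM, 256), nn.ReLU(), nn.Linear(256, NUM_CLASSES))

# Synthetic "array-wise" features: one 1-D descriptor per training sample.
X = torch.randn(500, FEAT_DIM)
y = torch.randint(0, NUM_CLASSES, (500,))

weights = torch.ones(len(y))          # start from uniform sample weights
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for stage in range(NUM_STAGES):
    # Sample the training set according to the current weights, so that
    # previously misclassified examples are seen more often.
    sampler = WeightedRandomSampler(weights, num_samples=len(y), replacement=True)
    loader = DataLoader(TensorDataset(X, y), batch_size=32, sampler=sampler)

    model.train()
    for xb, yb in loader:             # one pass per boosting stage (assumption)
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()

    # Re-weight: misclassified samples count more in the next stage.
    model.eval()
    with torch.no_grad():
        wrong = model(X).argmax(dim=1) != y
    weights = torch.where(wrong, weights * 2.0, weights)
    print(f"stage {stage + 1}: {wrong.float().mean().item():.2%} misclassified")

The emphasis on misclassified samples mirrors classic boosting; in the paper the same principle is applied to the MS-TCN++ network fed with I3D features, and the exact re-weighting rule may differ from the doubling used in this sketch.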
Istituto di Sistemi e Tecnologie Industriali Intelligenti per il Manifatturiero Avanzato - STIIMA (ex ITIA) Sede Secondaria Bari
Keywords: action recognition, RGB, iterative boosting
Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/558803