Co-manipulation of soft-materials estimating deformation from depth images

Nicola, Giorgio; Villagrossi, Enrico; Pedrocchi, Nicola
2024

Abstract

Human-robot manipulation of soft materials, such as fabrics, composites, and sheets of paper or cardboard, is a challenging operation with several relevant industrial applications. Estimating the deformation state of the manipulated material is one of the main challenges. Currently viable methods provide only an indirect measure, obtained by computing the human-robot relative distance. In this paper, we develop a data-driven model that estimates the deformation state of the material from a depth image through a Convolutional Neural Network (CNN). First, we define the deformation state of the material as the relative roto-translation between the current robot pose and a human grasping position. The model estimates the current deformation state through a CNN, specifically a DenseNet-121 pretrained on ImageNet. The delta between the current and the desired deformation state is fed to the robot controller, which outputs twist commands. The paper describes the developed approach to acquire and preprocess the dataset and to train the model. The model is compared with the current state-of-the-art method based on a camera skeletal tracker. Results show that the proposed approach achieves better performance and avoids the drawbacks of a skeletal tracker. The model was also validated on three different materials, showing its generalization ability. Finally, we studied the model performance across different architectures and dataset sizes to minimize the time required for dataset acquisition.
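As a concrete illustration of the pipeline summarized above, the following is a minimal sketch, not the authors' implementation: a DenseNet-121 backbone pretrained on ImageNet regresses the deformation state from a depth image, and a simple proportional law turns the delta between the desired and current state into a twist command. The `DeformationEstimator` class, the 6-DoF parameterization (translation plus axis-angle rotation), the replication of the single depth channel, and the `gain` parameter are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models


class DeformationEstimator(nn.Module):
    """Regresses the deformation state from a single depth image (illustrative sketch)."""

    def __init__(self):
        super().__init__()
        # DenseNet-121 pretrained on ImageNet, as named in the abstract.
        backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
        # Replace the 1000-class ImageNet head with a 6-value regression head:
        # [tx, ty, tz, rx, ry, rz] (translation + axis-angle rotation, assumed parameterization).
        in_features = backbone.classifier.in_features
        backbone.classifier = nn.Linear(in_features, 6)
        self.backbone = backbone

    def forward(self, depth):
        # depth: (B, 1, H, W). The pretrained stem expects 3 channels,
        # so the single depth channel is replicated (an assumption).
        return self.backbone(depth.repeat(1, 3, 1, 1))


def twist_command(current, desired, gain=1.0):
    """Proportional twist from the delta between desired and current deformation state."""
    return gain * (desired - current)


if __name__ == "__main__":
    model = DeformationEstimator().eval()
    depth = torch.rand(1, 1, 224, 224)   # placeholder normalized depth image
    with torch.no_grad():
        current_state = model(depth)
    desired_state = torch.zeros(1, 6)     # e.g. a "no deformation" target
    print(twist_command(current_state, desired_state))
```

In this sketch the twist is simply proportional to the state error; the actual controller described in the paper may apply a different law.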
Istituto di Sistemi e Tecnologie Industriali Intelligenti per il Manifatturiero Avanzato - STIIMA (ex ITIA)
Human-robot collaborative transportation
Soft materials co-manipulation
Vision-based robot manual guidance
Files in this record:

prod_486174-doc_201676.pdf
Description: Co-manipulation of soft-materials estimating deformation from depth images
Type: Published version (PDF)
License: NOT PUBLIC - private/restricted access (authorized users only)
Size: 4.95 MB
Format: Adobe PDF

prod_486174-doc_201682 (1)_compressed.pdf (under embargo until 26/08/2025)
Type: Post-print
License: Creative Commons
Size: 574.52 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/461890
Citations
  • Scopus: 2