
Towards Multi-AUV Collaboration and Coordination: A Gesture-Based Multi-AUV Hierarchical Language and a Language Framework Comparison System

Davide Chiarella
2023

Abstract

The underwater environment is hazardous to humans, yet it is among the richest and least exploited. For these reasons, a robotic companion tasked with supporting and monitoring divers during their activities and operations has been proposed. However, the idea of a platoon of robots at the diver's disposal has never been fully addressed in these proposals, owing to the high cost of implementation and to the usability, weight, and bulk of the robots. Recent advances in swarm robotics, materials engineering, and deep learning, together with the decreasing cost of autonomous underwater vehicles (AUVs), have nevertheless made this concept increasingly viable. The first part of this paper therefore introduces a novel framework that integrates a revised version of Caddian, a gesture-based language for underwater human-robot interaction, updated with insights gained from extensive field trials. The framework's newly introduced objective is to enable one or more human operators to coordinate a team of AUVs, while allowing a human operator to delegate a robot leader that instructs the other robotic team members. The second part of the work evaluates the proposed language on a corpus of fifty million sentences and describes a comparison framework used to assess it against other existing underwater human-robot interaction languages.
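To make the corpus-based evaluation concrete, here is a minimal Python sketch of the underlying idea: a command language defined by production rules can be exhaustively expanded into every valid sentence, and counting or sampling those expansions yields an evaluation corpus. The production rules and gesture token names below are invented for illustration; they are not the published Caddian grammar.

    # Minimal sketch: enumerate every sentence of a small gesture-command
    # grammar. Rules and token names are illustrative assumptions, not the
    # actual Caddian grammar.
    import itertools

    GRAMMAR = {
        "SENTENCE": [["START", "COMMAND", "END"]],
        "COMMAND": [["ACTION"],                        # direct order to the team
                    ["DELEGATE", "AGENT", "ACTION"]],  # appoint a leader robot
        "DELEGATE": [["you_lead"]],
        "AGENT": [["robot_1"], ["robot_2"], ["robot_3"]],
        "ACTION": [["go_up"], ["go_down"], ["follow_me"], ["take_photo"]],
        "START": [["attention"]],
        "END": [["confirm"]],
    }

    def expand(symbol):
        """Yield every terminal string derivable from a grammar symbol."""
        if symbol not in GRAMMAR:  # terminal gesture token
            yield symbol
            return
        for rhs in GRAMMAR[symbol]:
            # Cartesian product of the expansions of each right-hand-side symbol.
            for parts in itertools.product(*(list(expand(s)) for s in rhs)):
                yield " ".join(parts)

    corpus = list(expand("SENTENCE"))
    print(len(corpus), "valid sentences; e.g.", corpus[0])
    # prints: 16 valid sentences; e.g. attention go_up confirm

At fifty million sentences, a real corpus would be generated by streaming or sampling expansions rather than materializing them in a list, but the enumeration principle is the same.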
DC Field    Value    Language
dc.authority.ancejournal JOURNAL OF MARINE SCIENCE AND ENGINEERING en
dc.authority.orgunit Istituto di linguistica computazionale "Antonio Zampolli" - ILC en
dc.authority.people Davide Chiarella en
dc.collection.id.s b3f88f24-048a-4e43-8ab1-6697b90e068e *
dc.collection.name 01.01 Articolo in rivista (journal article) *
dc.contributor.appartenenza Istituto di linguistica computazionale "Antonio Zampolli" - ILC *
dc.contributor.appartenenza.mi 918 *
dc.contributor.area Not assigned *
dc.date.accessioned 2024/02/20 12:35:02 -
dc.date.available 2024/02/20 12:35:02 -
dc.date.firstsubmission 2025/01/20 12:22:39 *
dc.date.issued 2023 -
dc.date.submission 2025/01/20 12:22:39 *
dc.description.affiliations CNR-ILC -
dc.description.allpeople Chiarella, Davide -
dc.description.allpeopleoriginal Davide Chiarella en
dc.description.fulltext open en
dc.description.numberofauthors 1 -
dc.identifier.doi 10.3390/jmse11061208 en
dc.identifier.isi WOS:001015508600001 -
dc.identifier.scopus 2-s2.0-85164156612 en
dc.identifier.uri https://hdl.handle.net/20.500.14243/459849 -
dc.identifier.url https://www.mdpi.com/2077-1312/11/6/1208 en
dc.language.iso eng en
dc.relation.issue 6 en
dc.relation.numberofpages 28 en
dc.relation.volume 11 en
dc.subject.keywordseng gesture-based language -
dc.subject.keywordseng underwater human-robot interaction -
dc.subject.keywordseng multi-AUV collaboration -
dc.subject.keywordseng language corpora and resources -
dc.subject.singlekeyword gesture-based language *
dc.subject.singlekeyword underwater human-robot interaction *
dc.subject.singlekeyword multi-AUV collaboration *
dc.subject.singlekeyword language corpora and resources *
dc.title Towards Multi-AUV Collaboration and Coordination: A Gesture-Based Multi-AUV Hierarchical Language and a Language Framework Comparison System en
dc.type.circulation International en
dc.type.driver info:eu-repo/semantics/article -
dc.type.full 01 Contributo su Rivista::01.01 Articolo in rivista (journal article) it
dc.type.impactfactor yes en
dc.type.miur 262 -
dc.type.referee Anonymous referees en
dc.ugov.descaux1 485365 -
iris.unpaywall.bestoahost publisher *
iris.unpaywall.bestoaversion publishedVersion *
iris.unpaywall.doi 10.3390/jmse11061208 *
iris.unpaywall.hosttype publisher *
iris.unpaywall.isoa true *
iris.unpaywall.journalisindoaj true *
iris.unpaywall.landingpage https://doi.org/10.3390/jmse11061208 *
iris.unpaywall.license cc-by *
iris.unpaywall.oastatus gold *
iris.unpaywall.pdfurl https://www.mdpi.com/2077-1312/11/6/1208/pdf?version=1686548075 *
isi.category IL *
isi.category IO *
isi.category SI *
isi.contributor.affiliation Consiglio Nazionale delle Ricerche (CNR) -
isi.contributor.country Italy -
isi.contributor.name Davide -
isi.contributor.researcherId C-3459-2015 -
isi.contributor.subaffiliation Inst Computat Linguist -
isi.contributor.surname Chiarella -
isi.date.issued 2023 *
isi.description.allpeopleoriginal Chiarella, D; *
isi.document.type Article *
isi.identifier.doi 10.3390/jmse11061208 *
isi.identifier.eissn 2077-1312 *
isi.identifier.isi WOS:001015508600001 *
isi.journal.journaltitle JOURNAL OF MARINE SCIENCE AND ENGINEERING *
isi.journal.journaltitleabbrev J MAR SCI ENG *
isi.language.original English *
isi.publisher.place ST ALBAN-ANLAGE 66, CH-4052 BASEL, SWITZERLAND *
scopus.authority.ancejournal JOURNAL OF MARINE SCIENCE AND ENGINEERING###2077-1312 *
scopus.category 2205 *
scopus.category 2312 *
scopus.category 2212 *
scopus.contributor.affiliation Institute of Computational Linguistics—National Research Council -
scopus.contributor.afid 60021199 -
scopus.contributor.auid 25930765400 -
scopus.contributor.country Italy -
scopus.contributor.dptid -
scopus.contributor.name Davide -
scopus.contributor.subaffiliation -
scopus.contributor.surname Chiarella -
scopus.date.issued 2023 *
scopus.description.allpeopleoriginal Chiarella D. *
scopus.document.type ar *
scopus.document.types ar *
scopus.funding.funders 501100000780 - European Commission; 501100004963 - Seventh Framework Programme; 100011102 - Seventh Framework Programme; *
scopus.funding.ids 611373; *
scopus.identifier.doi 10.3390/jmse11061208 *
scopus.identifier.eissn 2077-1312 *
scopus.identifier.pui 2024173488 *
scopus.identifier.scopus 2-s2.0-85164156612 *
scopus.journal.sourceid 21100830140 *
scopus.language.iso eng *
scopus.publisher.name MDPI *
scopus.relation.article 1208 *
scopus.relation.issue 6 *
scopus.relation.volume 11 *
scopus.subject.keywords gesture-based language; language corpora and resources; multi-AUV collaboration; underwater human–robot interaction; *
Appears in types: 01.01 Articolo in rivista (journal article)
Files in this product:
jmse-11-01208-no-cover.pdf

Open access
Description: Published version
Type: Publisher's version (PDF)
License: Creative Commons
Size: 5.22 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/459849
Citations
  • PMC: not available
  • Scopus: 3
  • Web of Science: 3