
CADDY Underwater Stereo-Vision Dataset for Human-Robot Interaction (HRI) in the Context of Diver Activities

Arturo Gomez Chavez;Andrea Ranieri;Davide Chiarella;Enrica Zereik;Anja Babić;Andreas Birk;
2019

Abstract

In this article, we present a novel underwater dataset collected from several field trials within the EU FP7 project "Cognitive autonomous diving buddy (CADDY)", where an Autonomous Underwater Vehicle (AUV) was used to interact with divers and monitor their activities. To our knowledge, this is one of the first efforts to collect a large public dataset in underwater environments with the purpose of studying and boosting object classification, segmentation and human pose estimation tasks. The first part of the dataset contains stereo camera recordings (≈10 K) of divers performing hand gestures to communicate with an AUV in different environmental conditions. The gestures can be used to test the robustness of visual detection and classification algorithms in underwater conditions, e.g., under color attenuation and light backscatter. The second part includes stereo footage (≈12.7 K) of divers free-swimming in front of the AUV, along with synchronized measurements from Inertial Measurement Units (IMU) located throughout the diver's suit (DiverNet), which serve as ground-truth for human pose and tracking methods. In both cases, these rectified images allow the investigation of 3D representation and reasoning pipelines from low-texture targets commonly present in underwater scenarios. This work describes the recording platform, sensor calibration procedure plus the data format and the software utilities provided to use the dataset.
DC Field Value Language
dc.authority.ancejournal JOURNAL OF MARINE SCIENCE AND ENGINEERING -
dc.authority.orgunit Istituto di linguistica computazionale "Antonio Zampolli" - ILC -
dc.authority.orgunit Istituto di iNgegneria del Mare - INM (ex INSEAN) -
dc.authority.people Arturo Gomez Chavez it
dc.authority.people Andrea Ranieri it
dc.authority.people Davide Chiarella it
dc.authority.people Enrica Zereik it
dc.authority.people Anja Babić it
dc.authority.people Andreas Birk it
dc.authority.project Cognitive autonomous diving buddy -
dc.collection.id.s b3f88f24-048a-4e43-8ab1-6697b90e068e *
dc.collection.name 01.01 Articolo in rivista *
dc.contributor.appartenenza Istituto di Matematica Applicata e Tecnologie Informatiche - IMATI - Sede Secondaria Genova *
dc.contributor.appartenenza Istituto di iNgegneria del Mare - INM (ex INSEAN) - Sede Secondaria Genova *
dc.contributor.appartenenza Istituto di linguistica computazionale "Antonio Zampolli" - ILC *
dc.contributor.appartenenza.mi 918 *
dc.contributor.appartenenza.mi 921 *
dc.contributor.appartenenza.mi 1061 *
dc.date.accessioned 2024/02/21 05:39:20 -
dc.date.available 2024/02/21 05:39:20 -
dc.date.issued 2019 -
dc.description.abstracteng In this article, we present a novel underwater dataset collected from several field trials within the EU FP7 project "Cognitive autonomous diving buddy (CADDY)", where an Autonomous Underwater Vehicle (AUV) was used to interact with divers and monitor their activities. To our knowledge, this is one of the first efforts to collect a large public dataset in underwater environments with the purpose of studying and boosting object classification, segmentation and human pose estimation tasks. The first part of the dataset contains stereo camera recordings (≈10 K) of divers performing hand gestures to communicate with an AUV in different environmental conditions. The gestures can be used to test the robustness of visual detection and classification algorithms in underwater conditions, e.g., under color attenuation and light backscatter. The second part includes stereo footage (≈12.7 K) of divers free-swimming in front of the AUV, along with synchronized measurements from Inertial Measurement Units (IMU) located throughout the diver's suit (DiverNet), which serve as ground-truth for human pose and tracking methods. In both cases, these rectified images allow the investigation of 3D representation and reasoning pipelines from low-texture targets commonly present in underwater scenarios. This work describes the recording platform, sensor calibration procedure plus the data format and the software utilities provided to use the dataset. -
dc.description.affiliations Jacobs University Bremen, CNR-INM, CNR-ILC, CNR-INM, University of Zagreb, Jacobs University Bremen -
dc.description.allpeople Gomez Chavez, Arturo; Ranieri, Andrea; Chiarella, Davide; Zereik, Enrica; Babić, Anja; Birk, Andreas -
dc.description.allpeopleoriginal Arturo Gomez Chavez, Andrea Ranieri, Davide Chiarella, Enrica Zereik, Anja Babić, Andreas Birk -
dc.description.fulltext open en
dc.description.numberofauthors 6 -
dc.identifier.doi 10.3390/jmse7010016 -
dc.identifier.isi WOS:000459717300015 -
dc.identifier.scopus 2-s2.0-85060193254 -
dc.identifier.uri https://hdl.handle.net/20.500.14243/345428 -
dc.identifier.url https://www.mdpi.com/2077-1312/7/1/16 -
dc.language.iso eng -
dc.miur.last.status.update 2025-01-20T21:40:39Z *
dc.relation.firstpage 1 -
dc.relation.issue 1 -
dc.relation.lastpage 14 -
dc.relation.numberofpages 14 -
dc.relation.projectAcronym CADDY -
dc.relation.projectAwardNumber 611373 -
dc.relation.projectAwardTitle Cognitive autonomous diving buddy -
dc.relation.projectFunderName - en
dc.relation.projectFundingStream FP7 -
dc.relation.volume 7 -
dc.subject.keywords dataset -
dc.subject.keywords underwater imaging -
dc.subject.keywords image processing -
dc.subject.keywords marine robotics -
dc.subject.keywords field robotics -
dc.subject.keywords human-robot interaction -
dc.subject.keywords stereo vision -
dc.subject.keywords object classification -
dc.subject.keywords human pose estimation -
dc.subject.singlekeyword dataset *
dc.subject.singlekeyword underwater imaging *
dc.subject.singlekeyword image processing *
dc.subject.singlekeyword marine robotics *
dc.subject.singlekeyword field robotics *
dc.subject.singlekeyword human-robot interaction *
dc.subject.singlekeyword stereo vision *
dc.subject.singlekeyword object classification *
dc.subject.singlekeyword human pose estimation *
dc.title CADDY Underwater Stereo-Vision Dataset for Human-Robot Interaction (HRI) in the Context of Diver Activities en
dc.type.driver info:eu-repo/semantics/article -
dc.type.full 01 Contributo su Rivista::01.01 Articolo in rivista it
dc.type.miur 262 -
dc.type.referee Yes, but type not specified -
dc.ugov.descaux1 398298 -
iris.isi.ideLinkStatusDate 2025/02/13 14:11:19 *
iris.isi.ideLinkStatusMillisecond 1739452279877 *
iris.isi.metadataErrorDescription 0 -
iris.isi.metadataErrorType ERROR_NO_MATCH -
iris.isi.metadataStatus ERROR -
iris.mediafilter.data 2025/04/11 03:16:52 *
iris.orcid.lastModifiedDate 2025/02/13 14:11:19 *
iris.orcid.lastModifiedMillisecond 1739452279846 *
iris.scopus.extIssued 2019 -
iris.scopus.extTitle CADDY underwater Stereo-Vision dataset for human-robot interaction (HRI) in the context of diver activities -
iris.sitodocente.maxattempts 1 -
iris.unpaywall.bestoahost publisher *
iris.unpaywall.bestoaversion publishedVersion *
iris.unpaywall.doi 10.3390/jmse7010016 *
iris.unpaywall.hosttype publisher *
iris.unpaywall.isoa true *
iris.unpaywall.journalisindoaj true *
iris.unpaywall.landingpage https://doi.org/10.3390/jmse7010016 *
iris.unpaywall.license cc-by *
iris.unpaywall.metadataCallLastModified 14/02/2025 19:01:47 -
iris.unpaywall.metadataCallLastModifiedMillisecond 1739556107645 -
iris.unpaywall.oastatus gold *
iris.unpaywall.pdfurl https://www.mdpi.com/2077-1312/7/1/16/pdf?version=1547637387 *
scopus.authority.ancejournal JOURNAL OF MARINE SCIENCE AND ENGINEERING###2077-1312 *
scopus.category 2205 *
scopus.category 2312 *
scopus.category 2212 *
scopus.contributor.affiliation Jacobs University Bremen -
scopus.contributor.affiliation Institute of Marine Engineering-National Research Council -
scopus.contributor.affiliation Institute for Computational Linguistics-National Research Council -
scopus.contributor.affiliation Institute of Marine Engineering-National Research Council -
scopus.contributor.affiliation University of Zagreb -
scopus.contributor.affiliation Jacobs University Bremen -
scopus.contributor.afid 60016458 -
scopus.contributor.afid 60021199 -
scopus.contributor.afid 60021199 -
scopus.contributor.afid 60021199 -
scopus.contributor.afid 60159855 -
scopus.contributor.afid 60016458 -
scopus.contributor.auid 57104723100 -
scopus.contributor.auid 56512556800 -
scopus.contributor.auid 25930765400 -
scopus.contributor.auid 35106497900 -
scopus.contributor.auid 56715404200 -
scopus.contributor.auid 7005194050 -
scopus.contributor.country Germany -
scopus.contributor.country Italy -
scopus.contributor.country Italy -
scopus.contributor.country Italy -
scopus.contributor.country Croatia -
scopus.contributor.country Germany -
scopus.contributor.dptid 109048364 -
scopus.contributor.dptid -
scopus.contributor.dptid -
scopus.contributor.dptid -
scopus.contributor.dptid -
scopus.contributor.dptid 109048364 -
scopus.contributor.name Arturo Gomez -
scopus.contributor.name Andrea -
scopus.contributor.name Davide -
scopus.contributor.name Enrica -
scopus.contributor.name Anja -
scopus.contributor.name Andreas -
scopus.contributor.subaffiliation Robotics Group;Computer Science and Electrical Engineering; -
scopus.contributor.subaffiliation -
scopus.contributor.subaffiliation -
scopus.contributor.subaffiliation -
scopus.contributor.subaffiliation Faculty of Electrical Engineering and Computing; -
scopus.contributor.subaffiliation Robotics Group;Computer Science and Electrical Engineering; -
scopus.contributor.surname Chavez -
scopus.contributor.surname Ranieri -
scopus.contributor.surname Chiarella -
scopus.contributor.surname Zereik -
scopus.contributor.surname Babić -
scopus.contributor.surname Birk -
scopus.date.issued 2019 *
scopus.description.abstracteng In this article, we present a novel underwater dataset collected from several field trials within the EU FP7 project "Cognitive autonomous diving buddy (CADDY)", where an Autonomous Underwater Vehicle (AUV) was used to interact with divers and monitor their activities. To our knowledge, this is one of the first efforts to collect a large public dataset in underwater environments with the purpose of studying and boosting object classification, segmentation and human pose estimation tasks. The first part of the dataset contains stereo camera recordings (≈10 K) of divers performing hand gestures to communicate with an AUV in different environmental conditions. The gestures can be used to test the robustness of visual detection and classification algorithms in underwater conditions, e.g., under color attenuation and light backscatter. The second part includes stereo footage (≈12.7 K) of divers free-swimming in front of the AUV, along with synchronized measurements from Inertial Measurement Units (IMU) located throughout the diver's suit (DiverNet), which serve as ground-truth for human pose and tracking methods. In both cases, these rectified images allow the investigation of 3D representation and reasoning pipelines from low-texture targets commonly present in underwater scenarios. This work describes the recording platform, sensor calibration procedure plus the data format and the software utilities provided to use the dataset. *
scopus.description.allpeopleoriginal Chavez A.G.; Ranieri A.; Chiarella D.; Zereik E.; Babic A.; Birk A. *
scopus.differences scopus.subject.keywords *
scopus.differences scopus.description.allpeopleoriginal *
scopus.differences scopus.description.abstracteng *
scopus.document.type ar *
scopus.document.types ar *
scopus.funding.funders 501100000780 - European Commission; 100011102 - Seventh Framework Programme; 100011102 - Seventh Framework Programme; *
scopus.funding.ids 611373; *
scopus.identifier.doi 10.3390/jmse7010016 *
scopus.identifier.eissn 2077-1312 *
scopus.identifier.pui 625982156 *
scopus.identifier.scopus 2-s2.0-85060193254 *
scopus.journal.sourceid 21100830140 *
scopus.language.iso eng *
scopus.publisher.name MDPI AG *
scopus.publisher.place ;Postfach *
scopus.relation.article 16 *
scopus.relation.issue 1 *
scopus.relation.volume 7 *
scopus.subject.keywords Dataset; Field robotics; Human pose estimation; Human-robot interaction; Image processing; Marine robotics; Object classification; Stereo vision; Underwater imaging; *
scopus.title CADDY underwater Stereo-Vision dataset for human-robot interaction (HRI) in the context of diver activities *
scopus.titleeng CADDY underwater Stereo-Vision dataset for human-robot interaction (HRI) in the context of diver activities *
Appears in collections: 01.01 Articolo in rivista
Files in this item:

File: prod_398298-doc_137915.pdf (open access)
Description: CADDY Underwater Stereo-Vision Dataset for Human-Robot Interaction (HRI) in the Context of Diver Activities
Type: Published version (PDF)
Size: 4.72 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/345428
Citations
  • PMC: not available
  • Scopus: 51
  • Web of Science: 45