
Semantic interpretation and complexity reduction of 3D point clouds of vineyards

Comba Lorenzo; Dabbene Fabrizio

2020

Abstract

In precision agriculture, autonomous ground and aerial vehicles can lead to favourable improvements in field operations, extending crop scouting to large fields and performing field tasks in a timely and effective way. However, automated navigation and operations within complex scenarios require specific and robust path planning and navigation control. Thus, in addition to proper knowledge of their instantaneous position, robotic vehicles and machines require an accurate spatial description of their environment. An innovative modelling framework is presented to semantically interpret 3D point clouds of vineyards and to generate low-complexity 3D mesh models of vine rows. The proposed methodology, based on a combination of convex hull filtration and minimum-area c-gon design, reduces the number of instances required to describe the spatial layout and shape of vine canopies, allowing the amount of data to be reduced without losing relevant crop shape information. The algorithm is not hindered by complex scenarios, such as non-linear vine rows, as it is able to automatically process non-uniform vineyards. Results demonstrated a data reduction of about 98%: from the 500 MB ha⁻¹ required to store the original dataset to 7.6 MB ha⁻¹ for the low-complexity 3D mesh. Reducing the amount of data is crucial to reducing computational times for large original datasets, thus enabling the exploitation of 3D point cloud information in real time during field operations. In scenarios involving cooperating machines and robots, data reduction will also allow rapid communication and data exchange between in-field actors.
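As a rough, illustrative sketch of the hull-based reduction idea described in the abstract (not the authors' implementation, which combines convex hull filtration with minimum-area c-gon design), a 2D cross-section of canopy points can be reduced to the vertices of its convex hull. The synthetic Gaussian point cloud below is an assumption made purely for demonstration:

```python
# Illustrative only: reduce a synthetic 2D canopy cross-section to its
# convex hull vertices. This shows the data-reduction principle, not the
# paper's full pipeline (which also designs minimum-area c-gons).
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
# Synthetic cross-section of a vine canopy: 5000 noisy 2D points.
points = rng.normal(loc=[0.0, 1.5], scale=[0.4, 0.6], size=(5000, 2))

hull = ConvexHull(points)
reduced = points[hull.vertices]  # keep only boundary vertices

ratio = 1 - len(reduced) / len(points)
print(f"{len(points)} points -> {len(reduced)} hull vertices "
      f"({ratio:.1%} reduction)")
```

Even this naive step typically retains only a few dozen boundary points out of thousands, which is the same mechanism that lets the full method shrink per-hectare storage by roughly two orders of magnitude.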
Istituto di Elettronica e di Ingegneria dell'Informazione e delle Telecomunicazioni - IEIIT
3D point cloud segmentation, Big data, Photogrammetry, Precision agriculture, Semantic interpretation, UAV remote sensing
Files in this product:

1-s2.0-S1537511020301264-main.pdf

Open access

Description: Semantic interpretation and complexity reduction of 3D point clouds of vineyards
Type: Published version (PDF)
License: Creative Commons
Size: 4.36 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/404205
Citations
  • PMC: ND
  • Scopus: 19
  • Web of Science: ND