In-field high throughput grapevine phenotyping with a consumer-grade depth camera
Annalisa Milella; Roberto Marani; Antonio Petitti
2019
Abstract
Plant phenotyping, that is, the quantitative assessment of plant traits including growth, morphology, physiology, and yield, is critical for efficient and effective crop management. Currently, plant phenotyping is a labour-intensive and time-consuming process in which human operators take measurements in the field, based on visual estimates or using hand-held devices. In this work, methods for automated grapevine phenotyping are developed, aimed at canopy volume estimation and bunch detection and counting. It is demonstrated that both measurements can be effectively performed in the field using a consumer-grade depth camera mounted on board an agricultural vehicle. First, a dense 3D map of the grapevine row, augmented with its colour appearance, is generated based on infrared stereo reconstruction. Then, different computational geometry methods are applied and evaluated for plant-by-plant volume estimation. The proposed methods are validated through field tests performed in a commercial vineyard in Switzerland. It is shown that different automatic methods lead to different canopy volume estimates, meaning that new standard methods and procedures need to be defined and established. Four deep learning frameworks, namely AlexNet, VGG16, VGG19, and GoogLeNet, are also implemented and compared to segment visual images acquired by the RGB-D sensor into multiple classes and recognize grape bunches. Field tests are presented showing that, despite the poor quality of the input images, the proposed methods are able to correctly detect fruits, with a maximum accuracy of 91.52%, obtained by the VGG19 deep neural network.
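The abstract mentions that several computational geometry methods are compared for plant-by-plant canopy volume estimation. As a minimal sketch of one such estimator, the snippet below computes the volume of the 3D convex hull of a point cloud using SciPy; the synthetic point cloud and box dimensions are illustrative assumptions, not data from the paper.

```python
# Hedged sketch: one possible computational-geometry volume estimator
# (3D convex hull) applied to a synthetic point cloud standing in for a
# segmented vine-canopy point cloud. Point data are assumptions.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
# Synthetic canopy points filling a 1.0 m x 0.4 m x 1.2 m box (metres).
points = rng.uniform(low=[0.0, 0.0, 0.0],
                     high=[1.0, 0.4, 1.2],
                     size=(5000, 3))

# The hull volume approaches the 0.48 m^3 bounding-box volume as the
# point density grows; real canopies are non-convex, which is one reason
# different estimators yield different volumes, as the paper observes.
hull = ConvexHull(points)
volume_m3 = hull.volume
print(f"Estimated canopy volume: {volume_m3:.3f} m^3")
```

Because a convex hull cannot represent concavities, it systematically over-estimates the volume of sparse or irregular canopies; alternatives such as voxel occupancy grids or alpha shapes trade this bias for sensitivity to their resolution parameter.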
File | Access | Type | Size | Format
---|---|---|---|---
prod_397785-doc_164794.pdf | Authorized users only (request a copy) | Publisher's version (PDF) | 4.81 MB | Adobe PDF
prod_397785-doc_186369.pdf | Open access | Publisher's version (PDF) | 3.34 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.