Facial landmark identification and data preparation can significantly improve the extraction of newborns' facial features
Del Corso G.; Germanese D.; Pascali M. A.; Colantonio S.
2024
Abstract
Automatic extraction of facial features can provide valuable information on the health of newborns. However, determining an optimal facial feature extraction strategy, especially for preterm infants, is a challenging task due to significant differences in facial morphology and frequent pose changes. In this work, we collected video data from 10 newborns (8 preterm, 2 at term, ≤ 4 weeks post-term equivalent age), obtaining a novel dataset of over 41,000 labeled frames (Open Mouth, Closed Mouth, Tongue Protrusion). On the collected images, we applied a robust data preparation procedure (including mouth localization, cropping, and reorientation with models trained on adults), an adaptive image normalization strategy, and a suitable data augmentation scheme. We then trained a convolutional classifier with a large number of trainable parameters (~1.2 million), coupled with multiple criteria to avoid overspecialization and the consequent loss of generalization capability. This approach yields highly reliable results (accuracy, precision, and recall over 92% on unseen data) and generalizes well to newborns with significantly different characteristics, even without including time-dependent information in the analysis. Therefore, these results demonstrate that proper data preparation can narrow the gap between the classification of neonatal and adult facial features, allowing methods originally developed for adults to be integrated into the complex setting of preterm infant analysis.
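To give a concrete picture of the data preparation step described in the abstract, the sketch below illustrates mouth localization, reorientation, cropping, and adaptive normalization in Python. It is not the authors' code: dlib's adult-trained 68-point landmark predictor and OpenCV's CLAHE are assumed as stand-ins for the adult-trained localization models and the adaptive normalization strategy, and the function name `prepare_mouth_crop`, the crop size, and the margin are illustrative choices.

```python
# Minimal sketch (assumptions: dlib 68-point adult landmark model, CLAHE for
# adaptive normalization). The paper only states that adult-trained
# localization models and an adaptive normalization strategy were used.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()  # adult-trained face detector
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # adult-trained landmarks

def prepare_mouth_crop(frame_bgr, out_size=64, margin=0.25):
    """Return a reoriented, intensity-normalized mouth crop, or None if no face is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    # Points 48-67 of the 68-point model cover the mouth; 48 and 54 are the corners.
    mouth = np.array([(shape.part(i).x, shape.part(i).y) for i in range(48, 68)], dtype=np.float32)
    left, right = mouth[0], mouth[6]
    # Reorientation: rotate the frame so the mouth corners lie on a horizontal line.
    angle = float(np.degrees(np.arctan2(right[1] - left[1], right[0] - left[0])))
    cx, cy = mouth.mean(axis=0)
    M = cv2.getRotationMatrix2D((float(cx), float(cy)), angle, 1.0)
    rotated = cv2.warpAffine(gray, M, (gray.shape[1], gray.shape[0]))
    mouth_rot = cv2.transform(mouth.reshape(-1, 1, 2), M)[:, 0, :]
    # Crop a square box around the rotated mouth landmarks, with a small margin.
    x0, y0 = mouth_rot.min(axis=0)
    x1, y1 = mouth_rot.max(axis=0)
    side = int(max(x1 - x0, y1 - y0) * (1.0 + 2.0 * margin))
    bx = int((x0 + x1) / 2.0 - side / 2.0)
    by = int((y0 + y1) / 2.0 - side / 2.0)
    crop = rotated[max(by, 0):by + side, max(bx, 0):bx + side]
    if crop.size == 0:
        return None
    crop = cv2.resize(crop, (out_size, out_size))
    # Adaptive, per-image local-contrast normalization (CLAHE stands in here).
    crop = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(crop)
    return crop.astype(np.float32) / 255.0
```

Frames prepared in this way would then be augmented and fed to the convolutional classifier (~1.2 million trainable parameters) described in the abstract, with the regularization criteria applied during training to preserve generalization.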
File | Description | Type | License | Access | Size | Format
---|---|---|---|---|---|---
printed_version_Facial_Landmark_Identification_and_Data_Preparation_Can_Significantly_Improve_the_Extraction_of_Newborns_Facial_Features.pdf | Facial Landmark Identification and Data Preparation Can Significantly Improve the Extraction of Newborns' Facial Features | Editorial version (PDF) | NOT PUBLIC - Private/restricted access | Authorized users only | 5.2 MB | Adobe PDF
FG2024_DelCorso.pdf | Facial Landmark Identification and Data Preparation Can Significantly Improve the Extraction of Newborns' Facial Features | Post-print | Other license type | Open access | 5.15 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.