Towards the development of explainable machine learning models to recognize the faces of autistic children: a brief report

Moroni D.
2025

Abstract

Purpose: Machine learning with image classification has shown promise in supporting the detection of autism in children, but explainable models remain underdeveloped. To address this gap, this study compared two algorithms for developing explainable models, identifying the facial features that deep neural networks used to classify children as autistic or non-autistic.

Design/methodology/approach: Several models were first trained and tested on the Autistic Children Facial Image Data Set, and the one with the highest accuracy was selected. The analyses then compared two explainability methods: Local Interpretable Model-agnostic Explanations (LIME) and Randomized Input Sampling for Explanation of black-box models (RISE).

Findings: The best model, ViT_Huge_14, achieved an accuracy of 92%. LIME produced more explainable models than RISE. Although these results are promising, further studies are needed to examine their generalizability, and ethical issues must be considered before facial image classification can be recommended as a component of a multimethod approach to screening and diagnosis.

Originality/value: To the best of the authors' knowledge, this study is the first to examine the development of explainable models to detect autism using facial features.
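The RISE method compared in the abstract estimates saliency by querying the model on many randomly masked copies of an image and averaging the masks weighted by the model's score. The following is an illustrative reimplementation of that idea in plain NumPy, not the paper's code: the `predict` function, mask count, and grid size are assumptions, and the upsampling here uses nearest-neighbour rather than the shifted bilinear upsampling of the original method.

```python
import numpy as np

def rise_saliency(image, predict, n_masks=500, grid=7, p=0.5, seed=0):
    """Minimal RISE-style saliency sketch: average random binary masks
    weighted by the model's score on each masked image.

    image   -- HxWxC array
    predict -- callable mapping an HxWxC image to a scalar class score
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    saliency = np.zeros((h, w))
    for _ in range(n_masks):
        # Sample a coarse random grid and upsample it to image size
        # (nearest neighbour; the original method uses bilinear
        # upsampling with random shifts for smoother masks).
        coarse = (rng.random((grid, grid)) < p).astype(float)
        mask = np.kron(coarse, np.ones((h // grid + 1, w // grid + 1)))[:h, :w]
        # Score the masked image and accumulate the mask weighted by it.
        score = predict(image * mask[..., None])
        saliency += score * mask
    return saliency / n_masks
```

As a sanity check with a toy model whose score depends only on the top-left pixel, the resulting saliency map is higher at that pixel than at unrelated ones:

```python
img = np.ones((14, 14, 3))
sal = rise_saliency(img, lambda im: float(im[0, 0, 0]))
# sal[0, 0] exceeds sal at pixels the toy model ignores
```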
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
Keywords: Autism; Diagnosis; Image classification; Machine learning; Screening
Files in this record:
File: omrani_et_al_2025_advances_aam.pdf (open access)
Description: Post-peer-review, pre-copyedit version of an article published in Advances in Autism
Type: Post-print
License: Creative Commons
Size: 639.41 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/556326