
Empowering visual navigation: a deep-learning solution for enhanced accessibility and safety among the visually impaired

Leporini B.
2025

Abstract

Individuals with visual impairments face significant challenges navigating environments, especially with tasks such as object identification and traversing unfamiliar spaces. Often, their needs are inadequately addressed, leading to applications that do not meet their specific requirements. Traditional object detection models frequently lack the accuracy, speed, and efficiency this demographic requires. However, recent Internet of Things (IoT) advancements offer promising solutions, providing real-time guidance and alerts about potential hazards through IoT-enabled navigation apps and smart city infrastructure. This paper presents an extension of our MoSIoT framework, incorporating the YOLOv8 convolutional neural network for precise object detection and a specialized decision layer to improve environmental understanding. Additionally, advanced distance measurement techniques are incorporated to provide crucial information on object proximity. Using transfer learning and robust regularization techniques, our model demonstrates increased efficiency and adaptability across diverse environments. Systematic evaluation indicates significant improvements in object detection accuracy, with mean Average Precision at 50% Intersection over Union (mAP50) rising from 0.44411 to 0.51809 and mAP50-95 from 0.24936 to 0.29586, ensuring reliable real-time feedback for the safe navigation of visually impaired individuals. These enhancements strengthen the MoSIoT framework, substantially improving accessibility, safety, independence, and mobility for users with visual impairments.
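The pipeline summarized in the abstract (a pretrained YOLOv8 detector fine-tuned via transfer learning with regularization, plus a per-detection distance estimate feeding a simple decision step) can be sketched in a few lines of Python. The example below is a minimal illustration assuming the ultralytics package; the dataset file, camera focal length, object-height table, image path, and the 3 m alert threshold are hypothetical placeholders, not values reported in the paper.

    # Sketch: fine-tune a pretrained YOLOv8 model (transfer learning) and
    # estimate object distance with a pinhole-camera heuristic.
    # Assumptions (not from the paper): dataset config "vi_navigation.yaml",
    # camera focal length in pixels, typical real-world object heights.

    from ultralytics import YOLO

    FOCAL_LENGTH_PX = 700.0  # assumed camera focal length, in pixels
    TYPICAL_HEIGHT_M = {"person": 1.7, "chair": 0.9, "door": 2.0}  # assumed sizes

    # Transfer learning: start from COCO-pretrained weights and fine-tune on a
    # custom dataset; weight decay provides regularization, freeze keeps early
    # backbone layers fixed.
    model = YOLO("yolov8n.pt")
    model.train(
        data="vi_navigation.yaml",  # hypothetical dataset description file
        epochs=50,
        imgsz=640,
        weight_decay=0.0005,
        freeze=10,                  # freeze the first 10 layers
    )

    def estimate_distance_m(class_name: str, box_height_px: float) -> float | None:
        """Approximate distance via the pinhole model: D = H_real * f / h_pixels."""
        real_height = TYPICAL_HEIGHT_M.get(class_name)
        if real_height is None or box_height_px <= 0:
            return None
        return real_height * FOCAL_LENGTH_PX / box_height_px

    # Inference plus a simple decision step: warn about nearby detections.
    results = model("street_scene.jpg")[0]  # placeholder image path
    for box in results.boxes:
        name = results.names[int(box.cls.item())]
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        dist = estimate_distance_m(name, y2 - y1)
        if dist is not None and dist < 3.0:
            print(f"Warning: {name} about {dist:.1f} m ahead "
                  f"(confidence {box.conf.item():.2f})")

Freezing early backbone layers and applying weight decay are one common way to realize the transfer learning and regularization mentioned in the abstract; the training configuration actually used in the paper may differ.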
Istituto di Scienza e Tecnologie dell'Informazione "Alessandro Faedo" - ISTI
ISBN: 9789819605729
ISBN: 9789819605736
Keywords: Accessibility; Artificial Intelligence; Assistive technology; Internet of Things; Visually Impaired
Files associated with this item:
No files are associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/520184
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science (ISI): n/a