Smart Mobility Assistant for the Visually Impaired: Integrating Machine Learning and Sensor Technology
Abstract
Enabling visually impaired persons to navigate their environments safely and autonomously is an essential goal. This work presents the development of a comprehensive assistive kit that leverages Raspberry Pi technology to empower visually impaired users. The kit integrates ultrasonic sensors for obstacle detection, machine-learning models for object recognition, voice alerts, facial recognition for person identification, and indoor localization using RSSI (Received Signal Strength Indicator) modules. The core functionality of the system lies in real-time environmental monitoring through ultrasonic sensors, which detect obstacles in the user's path. Upon detection, a pre-trained deep learning model identifies the nature of the obstacle, such as a chair, door, or another object, and delivers an auditory alert to the user via voice synthesis. The system also employs facial recognition algorithms to identify familiar individuals, enabling users to confidently recognize and interact with acquaintances.
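As a rough illustration of the detection-and-alert loop described above, the following Python sketch polls an HC-SR04-style ultrasonic sensor through gpiozero on a Raspberry Pi and announces nearby obstacles with the pyttsx3 text-to-speech engine. The GPIO pin numbers, the 1 m alert threshold, and the classify_obstacle placeholder standing in for the pre-trained recognition model are assumptions for illustration only; the article does not specify these details.

```python
import time

from gpiozero import DistanceSensor   # HC-SR04-style ultrasonic ranging
import pyttsx3                        # offline text-to-speech

# Assumed wiring and alert threshold -- not specified in the article.
TRIGGER_PIN = 23
ECHO_PIN = 24
ALERT_DISTANCE_M = 1.0

sensor = DistanceSensor(echo=ECHO_PIN, trigger=TRIGGER_PIN, max_distance=4)
tts = pyttsx3.init()


def classify_obstacle() -> str:
    """Placeholder for the pre-trained deep-learning recognizer,
    which would return labels such as 'chair' or 'door'."""
    return "obstacle"


while True:
    distance_m = sensor.distance  # distance in metres, capped at max_distance
    if distance_m < ALERT_DISTANCE_M:
        label = classify_obstacle()
        tts.say(f"{label} ahead, about {distance_m:.1f} metres")
        tts.runAndWait()
    time.sleep(0.5)  # simple polling interval
```

In the full system, the placeholder classifier would be replaced by the camera-driven deep learning model, and analogous routines would handle facial recognition and RSSI-based indoor localization.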