Dynamic Navigation using AI for Autonomous Vehicle Systems

Chaitali Chandankhede, Chinmay Mhatre, Manas Pal, Meet Dhote, Abhinav Prakash, Urmila Shrawanakar

Abstract

The rapid advancement of autonomous driving technologies offers great potential for improving safety, efficiency, and accessibility in transportation. However, current systems face challenges such as poor real-time adaptability, limited obstacle detection capabilities, and difficulty recognizing lane markings, especially in dynamic environments. Additionally, high hardware costs and the lack of user-friendly interfaces impede the widespread adoption of these technologies, particularly in resource-constrained regions. Addressing these challenges requires solutions that are both robust and cost-effective.


Our research introduces an AI-powered system that addresses these issues by providing a scalable and affordable dynamic navigation solution. The framework integrates state-of-the-art computer vision techniques for real-time object recognition and lane detection, built on the Arduino and Raspberry Pi platforms. YOLO (You Only Look Once) provides fast and accurate object recognition, while the Canny edge-detection algorithm enables precise lane tracking, and together this hybrid model supports effective obstacle avoidance and traffic-signal recognition. The system also supports gesture recognition, allowing natural user interaction without the need for physical interfaces, which improves adaptability and usability across diverse environments and user needs. Extensive testing in both simulated and real-world scenarios demonstrated strong performance in lane tracking, object detection, and obstacle avoidance, and gesture recognition further enhanced usability in challenging situations.
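To make the hybrid perception pipeline concrete, the following is a minimal Python sketch of Canny-based lane detection combined with a YOLO detection pass on each camera frame. The paper does not specify its software stack, so the Ultralytics YOLO package, the weights file, the camera index, the region-of-interest geometry, and all threshold values here are illustrative assumptions rather than the authors' implementation.

# Hedged sketch: Canny lane detection + YOLO object detection per frame.
import cv2
import numpy as np
from ultralytics import YOLO  # assumption: Ultralytics YOLO implementation

model = YOLO("yolov8n.pt")  # hypothetical choice of YOLO variant/weights

def detect_lanes(frame):
    """Return line segments from a Canny edge map restricted to the road region."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)          # edge thresholds are assumptions
    h, w = edges.shape
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (w // 2, h // 2), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)                 # keep only the lower road triangle
    lines = cv2.HoughLinesP(cv2.bitwise_and(edges, mask), 1, np.pi / 180,
                            threshold=50, minLineLength=40, maxLineGap=100)
    return [] if lines is None else lines

def detect_objects(frame):
    """Return (class_id, bounding_box) pairs from a single YOLO inference pass."""
    result = model(frame, verbose=False)[0]
    return [(int(c), box) for c, box in zip(result.boxes.cls.tolist(),
                                            result.boxes.xyxy.tolist())]

cap = cv2.VideoCapture(0)                        # e.g. a Raspberry Pi camera stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    lanes = detect_lanes(frame)
    obstacles = detect_objects(frame)
    # Steering/braking commands would be sent to the Arduino controller here,
    # derived from the lane geometry and detected obstacles.
cap.release()

In this sketch the Raspberry Pi would run the vision loop while the Arduino handles low-level actuation, mirroring the two-platform split described in the abstract.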
