Optimizing Autonomous Drone Navigation via YOLOv5 for Real-Time Obstacle Avoidance
Abstract
In recent years, drone technology has advanced considerably, particularly with regard to safe and autonomous operation, which relies heavily on object detection and avoidance capabilities. Autonomous drones can operate in challenging environments for tasks such as search and rescue and industrial monitoring. This research focuses on enhancing object detection for autonomous drones by using publicly available image datasets rather than custom-collected images. Datasets such as VisDrone, DroneDeploy, and DOTA contain a wide variety of real-world aerial images, making them strong candidates for improving the accuracy and robustness of object detection models. We propose an optimized method for training the YOLOv5 model to enhance object detection. The models are evaluated on the collected dataset using precision, recall, F1-score, and mAP, comparing a baseline CNN against YOLO. The findings show that implementing real-time object detection and avoidance in UAVs with the YOLOv5 deep architecture is more efficient than traditional CNN approaches.
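The abstract names precision, recall, F1-score, and mAP as the evaluation metrics. As a minimal illustrative sketch (not the paper's actual evaluation pipeline), the per-class metrics can be computed from true-positive, false-positive, and false-negative counts at a fixed IoU threshold; the counts below are made up for illustration. mAP additionally averages precision over recall levels and classes.

```python
def detection_metrics(tp: int, fp: int, fn: int):
    """Return (precision, recall, f1) for one class at a fixed IoU threshold.

    tp: detections matched to a ground-truth box
    fp: detections with no matching ground truth
    fn: ground-truth boxes with no matching detection
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1


# Hypothetical counts for a single class, for illustration only:
p, r, f1 = detection_metrics(tp=80, fp=20, fn=10)
print(round(p, 3), round(r, 3), round(f1, 3))  # → 0.8 0.889 0.842
```

In practice, frameworks such as Ultralytics YOLOv5 report these metrics (and mAP@0.5, mAP@0.5:0.95) automatically during validation.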