Hybrid Deep Learning Framework for Efficient Aerial Image Classification Using the AID Dataset


Pradnya P. Parate, Shashank Shriramwar

Abstract

Aerial image classification is a crucial task in remote sensing applications such as land-use analysis, urban planning, and environmental monitoring. Achieving high classification accuracy with efficient computation remains challenging, however, because aerial images exhibit complex variations and high intra-class diversity. This paper proposes a hybrid deep learning framework for efficient aerial image classification on the Aerial Image Dataset (AID). First, a set of preprocessing steps, including image resizing, normalization, and data augmentation, is applied to improve data quality and enhance generalization. Robust feature extraction is then carried out through feature fusion, combining deep features from three convolutional neural networks (ResNet50, VGG16, and InceptionV3) to capture complementary information from aerial images. The fused feature vectors are classified using ensemble learning methods, namely bagging and boosting, to improve classification robustness and reduce overfitting. The proposed framework is evaluated on the AID dataset using accuracy, precision, recall, F1-score, and confusion-matrix analysis. Experimental results show that the hybrid deep learning method significantly outperforms the individual deep models in classification accuracy and robustness across diverse aerial scene categories, validating the effectiveness of the proposed deep feature fusion and ensemble learning approach for scalable and accurate aerial image classification.
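The fusion-then-ensemble pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensions (2048 for ResNet50, 512 for VGG16, 2048 for InceptionV3, matching the global-pooled output sizes of those standard backbones), the number of classes, and the synthetic feature vectors standing in for real CNN activations are all assumptions made for the example. The fusion step is plain concatenation of per-image feature vectors, and scikit-learn's bagging classifier stands in for the ensemble stage.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for deep features extracted by the three backbones.
# In the actual framework these would come from pretrained ResNet50, VGG16,
# and InceptionV3 applied to preprocessed AID images.
n_samples = 200
feats_resnet = rng.normal(size=(n_samples, 2048))   # ResNet50 pooled features
feats_vgg    = rng.normal(size=(n_samples, 512))    # VGG16 pooled features
feats_incep  = rng.normal(size=(n_samples, 2048))   # InceptionV3 pooled features
labels = rng.integers(0, 5, size=n_samples)         # e.g. 5 aerial scene classes

# Feature fusion: concatenate the three deep feature vectors per image.
fused = np.concatenate([feats_resnet, feats_vgg, feats_incep], axis=1)
print(fused.shape)  # (200, 4608)

# Ensemble stage: bagging over the fused representation (default tree learners).
clf = BaggingClassifier(n_estimators=10, random_state=0)
clf.fit(fused, labels)
preds = clf.predict(fused)
print(preds.shape)  # (200,)
```

A boosting ensemble (e.g. scikit-learn's `AdaBoostClassifier` or `GradientBoostingClassifier`) could be substituted for the bagging stage in the same way, since both operate on the fixed-length fused feature vectors.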
