Wavelet-Based Radiographic Feature Fusion with Multi-Head Attention for Early Detection of Neurodegenerative Disorders

Yamjala Arjun Sagar, G. ShankarLingam

Abstract

This study introduces a robust multimodal deep learning framework for the early detection and classification of neurodegenerative diseases, specifically Alzheimer's Disease (AD) and Parkinson's Disease (PD). Working from grayscale brain imaging data, the approach applies the discrete wavelet transform (DWT) to extract handcrafted features that capture spatial-frequency domain characteristics. The proposed architecture is a dual-branch neural network: a convolutional neural network (CNN) branch processes the raw images, while a fully connected branch handles the 2048-dimensional wavelet features, projecting them into a unified 128-dimensional latent space. A multi-head attention mechanism fuses these two modalities, enhancing salient features and suppressing irrelevant ones. The model was trained for 10 epochs using full-batch gradient descent, reaching a peak validation accuracy of 81.21% and a final training accuracy of 76.87%. Class-wise performance metrics showed F1-scores of 0.85 for AD, 0.95 for PD, and 0.89 for healthy controls. An ablation study confirmed the contribution of each component: the full model attained 85% accuracy, outperforming variants lacking either modality or the attention mechanism.
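The fusion step described above can be sketched concretely. The snippet below is a minimal NumPy illustration, not the authors' implementation: it treats the two 128-dimensional branch outputs (CNN embedding and wavelet-feature embedding) as a two-token sequence and fuses them with standard multi-head scaled dot-product attention. The head count, the random placeholder weights, and the mean-pooling of the fused tokens are all assumptions for illustration; in the actual model these projections would be learned.

```python
import numpy as np

def multi_head_attention_fusion(img_feat, wav_feat, num_heads=4, rng=None):
    """Fuse two same-dimensional modality embeddings via multi-head attention.

    Hypothetical sketch: the two embeddings form a 2-token sequence, and
    random projection matrices stand in for learned Q/K/V/output weights.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    d = img_feat.shape[-1]                 # latent dim (128 in the paper)
    assert d % num_heads == 0
    dk = d // num_heads                    # per-head dimension
    X = np.stack([img_feat, wav_feat])     # (2, d) token sequence

    # Placeholder "learned" projections, scaled for stable magnitudes
    Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv

    heads = []
    for h in range(num_heads):
        s = slice(h * dk, (h + 1) * dk)
        scores = Q[:, s] @ K[:, s].T / np.sqrt(dk)        # (2, 2) logits
        attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
        attn /= attn.sum(axis=-1, keepdims=True)          # row-wise softmax
        heads.append(attn @ V[:, s])                      # (2, dk)

    out = np.concatenate(heads, axis=-1) @ Wo             # (2, d)
    return out.mean(axis=0)                               # fused 128-d vector

fused = multi_head_attention_fusion(np.ones(128), np.zeros(128))
```

Each head attends across the two modality tokens, so a salient wavelet feature can up-weight the corresponding image representation (and vice versa) before classification.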
