Advanced Disease Prediction using Multimodal Features and Hybrid Learning Models
Abstract
This paper introduces a hybrid machine learning framework for multimodal disease prediction that combines structured clinical data, medical imaging, and unstructured text. The framework employs a fusion strategy in which a deep learning network processes the imaging data, a gradient boosting machine handles the structured data, and a natural language processing component extracts insights from clinical documentation. Integrating these modalities yields a comprehensive prediction model that achieves high precision on diseases such as Alzheimer's disease, lung cancer, and hypertension, supporting decision-making in clinical settings. The model's ability to incorporate diverse data sources strengthens its reliability and relevance in practical healthcare scenarios.
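The abstract describes a late-fusion pattern: three modality-specific learners whose outputs are combined into one prediction. The sketch below is a minimal illustration of that pattern under stated assumptions, not the authors' implementation: an MLP stands in for the deep image network, scikit-learn's GradientBoostingClassifier handles the structured data, TF-IDF with logistic regression stands in for the clinical-text NLP component, and all data are synthetic placeholders.

```python
# Minimal late-fusion sketch (illustrative assumptions only, not the paper's model).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)                       # binary disease label (synthetic)

X_img = rng.normal(size=(n, 64))                # stand-in for image embeddings
X_tab = rng.normal(size=(n, 10))                # stand-in for structured features
notes = ["patient reports chest pain" if t else "routine follow-up, no complaints"
         for t in y]                            # toy clinical notes

# Modality-specific models: MLP for images, gradient boosting for structured
# data, TF-IDF + logistic regression for text.
img_model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
tab_model = GradientBoostingClassifier(random_state=0)
vec = TfidfVectorizer()
txt_model = LogisticRegression(max_iter=1000)

img_model.fit(X_img, y)
tab_model.fit(X_tab, y)
X_txt = vec.fit_transform(notes)
txt_model.fit(X_txt, y)

# Late fusion: stack each modality's predicted probability and train a
# meta-learner on the stacked scores.
Z = np.column_stack([
    img_model.predict_proba(X_img)[:, 1],
    tab_model.predict_proba(X_tab)[:, 1],
    txt_model.predict_proba(X_txt)[:, 1],
])
fusion = LogisticRegression().fit(Z, y)
print("fused training accuracy:", fusion.score(Z, y))
```

In a real pipeline the meta-learner would be trained on out-of-fold predictions from the base models rather than in-sample scores, to avoid the overfitting that this simplified sketch incurs.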