Optimizing Blood Glucose Prediction Accuracy for Type-1 Diabetes with a Stacked LSTM Universal Model
Abstract
Integrating machine learning into diabetes care has opened the door to more advanced artificial pancreas systems. Multiple machine learning models have been applied to diverse datasets, including both real and in silico data, to minimize prediction errors. The goal is to develop a robust system that predicts glucose levels at a 60-minute prediction horizon (PH), far enough in advance to prevent critical medical emergencies. Extensive research has been carried out to improve the prediction accuracy of individual models; however, a model's effectiveness across patients (its inter-patient generalizability) is critical for clinical acceptance and commercial viability. The proposed approach employs a universal, optimized stacked LSTM model and validates its applicability across people with type-1 diabetes for blood glucose prediction. The model was trained on the Ohio dataset and tested on the Ohio test set and the D1NAMO dataset. On the Ohio dataset, the model achieves an RMSE of 22.24 ± 2.71 mg/dL and an MAE of 16.21 ± 2.29 mg/dL, with an EGA accuracy of 97.48%. On the D1NAMO dataset, it achieves an RMSE of 13.79 ± 4.29 mg/dL and an MAE of 10.02 ± 3.31 mg/dL, with an EGA accuracy of 96.56%, for the 60-minute prediction horizon. The results obtained with the universal model surpass the performance of existing patient-specific models. This demonstrates the capability of machine learning-based predictors to effectively manage blood glucose in individuals with type-1 diabetes.
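Although the abstract does not detail the network configuration, a stacked LSTM glucose predictor of this kind can be illustrated with a minimal sketch. The snippet below assumes a two-layer stacked LSTM over a sliding window of CGM readings, with a 12-step target offset corresponding to a 60-minute horizon at 5-minute sampling; the window length, layer sizes, and training settings are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch of a stacked LSTM glucose predictor (illustrative only).
# Window length, layer sizes, and training settings are assumptions, not
# the configuration reported in the paper.
import numpy as np
from tensorflow.keras import layers, models

HISTORY = 24   # assumed input window: 24 CGM samples (2 h at 5-min sampling)
HORIZON = 12   # assumed target offset: 12 steps ahead = 60-min prediction horizon

def make_windows(cgm, history=HISTORY, horizon=HORIZON):
    """Slice a 1-D CGM series into (input window, 60-min-ahead target) pairs."""
    X, y = [], []
    for i in range(len(cgm) - history - horizon + 1):
        X.append(cgm[i : i + history])
        y.append(cgm[i + history + horizon - 1])
    return np.asarray(X)[..., None], np.asarray(y)

def build_model(history=HISTORY):
    """Two stacked LSTM layers followed by a dense regression head."""
    model = models.Sequential([
        layers.Input(shape=(history, 1)),
        layers.LSTM(64, return_sequences=True),  # first LSTM passes the full sequence onward
        layers.LSTM(32),                          # second LSTM summarizes the sequence
        layers.Dense(1),                          # predicted glucose value (mg/dL)
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

if __name__ == "__main__":
    cgm = 120 + 30 * np.sin(np.linspace(0, 20, 2000))  # synthetic stand-in for CGM data
    X, y = make_windows(cgm)
    model = build_model()
    model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
    pred = model.predict(X[-10:], verbose=0).ravel()
    rmse = float(np.sqrt(np.mean((pred - y[-10:]) ** 2)))  # RMSE, one of the reported metrics
    print(f"RMSE on last 10 windows: {rmse:.2f} mg/dL")
```

Setting return_sequences=True on the first LSTM layer is what enables the stacking: it passes the full hidden-state sequence to the second LSTM rather than only the final time step.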