Estimating Pain Intensity from Facial Expressions
Abstract
Facial expressions are intricate, change over time, and are difficult to decipher, so automated emotion recognition from facial expressions has become a popular research area in computer vision, drawing on image processing and pattern recognition. The research community has access to many facial expression databases, which are crucial tools for evaluating a variety of facial expression detection methods, yet existing models still suffer from over-fitting. In this work, a framework based on a hybrid-optimization-based deep CNN (HOA-based deep CNN) is developed that recognizes pain from facial expressions and estimates its intensity. The proposed model is evaluated on the UNBC-McMaster Shoulder Pain Expression Archive Database (UNBC). From the identified region of interest (ROI), features are extracted using ResNet-101, facial activity descriptors, and hybrid weighted facial activity descriptors. The extracted features are then classified by ensemble deep CNN classifiers. The optimization methods Adam, Stochastic Gradient Descent (SGD), Cat Swarm Optimization, and Grey Wolf Optimization (GWO) are combined to tune the ensemble deep CNN. The hybrid optimization provides effective parameter tuning, which substantially reduces computing time and speeds up convergence. Based on the TP, the HOA-based deep CNN model achieves accuracy, sensitivity, and specificity of 90.06%, 98.38%, and 99.35%, respectively; with K-fold evaluation, it achieves 94.95%, 97.33%, and 99.04%, respectively.
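The following is a minimal sketch, not the authors' implementation, of the feature-extraction step the abstract describes: a pretrained ResNet-101 backbone turns a cropped face ROI into a deep feature vector, which a small classifier head then maps to pain-intensity classes. The ROI size (224x224), the number of intensity levels, and the single linear head are illustrative assumptions; the paper's ensemble deep CNN, facial activity descriptors, and hybrid optimization are not reproduced here.

```python
# Sketch only: ResNet-101 features from a face ROI + a stand-in classifier head.
# NUM_LEVELS and the preprocessing sizes are assumptions, not values from the paper.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

NUM_LEVELS = 4  # assumed binning of pain intensity levels

# ResNet-101 backbone with its classification layer removed -> 2048-d features
backbone = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
backbone.fc = nn.Identity()
backbone.eval()

# Simple head standing in for the ensemble deep CNN classifier in the abstract
head = nn.Sequential(
    nn.Linear(2048, 256),
    nn.ReLU(),
    nn.Linear(256, NUM_LEVELS),
)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict_pain_level(roi_path: str) -> int:
    """Return a predicted pain-intensity class for a cropped face ROI image."""
    roi = preprocess(Image.open(roi_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        features = backbone(roi)   # (1, 2048) deep features
        logits = head(features)    # (1, NUM_LEVELS) class scores
    return int(logits.argmax(dim=1).item())
```

In the paper's pipeline, the ResNet-101 features would be combined with the facial activity descriptors before classification, and the classifier would be tuned with the hybrid Adam/SGD/Cat Swarm/GWO scheme rather than trained with a single optimizer as this sketch implies.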