Sign Language Recognition in Real-Time: A Deep Learning Framework for Human-Computer Interaction

Madhan L, Mythily M, Beaulah David, Rexie J.A.M., S. Deepa Kanmani, G. Naveen Sundar

Abstract

Sign language serves as a vital means of communication for individuals who are deaf, hard of hearing, or unable to speak, enabling them to connect with the world through a structured system of visual and manual gestures. This project addresses the challenge of real-time sign language recognition. A custom dataset was created featuring 11 gestures representing commonly used symbols, captured with a webcam. The system processes these gestures in real time, instantly displaying the corresponding letters on screen. Rooted in human-computer interaction (HCI), the work focuses on enabling seamless communication through real-time hand gesture interpretation. The primary objective of this research is to analyze multiple deep learning models, including the pre-trained SSD MobileNet V2 architecture, Convolutional Neural Networks (CNNs), hybrid models, and YOLO, to determine the most effective model for sign language recognition.
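
To make the pipeline the abstract describes concrete (webcam capture, per-frame classification, on-screen letter display), the Python sketch below shows one minimal way such a loop could look. It is illustrative only, not the authors' implementation: the model file "sign_cnn.h5", the 224x224 input size, and the 11-letter class mapping in LABELS are all assumptions introduced here.

# Minimal sketch of a real-time recognition loop (illustrative, not the
# authors' code). Assumes a trained Keras classifier saved as "sign_cnn.h5"
# (hypothetical path) with a 224x224 RGB input and 11 output classes.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = list("ABCDEFGHIJK")        # hypothetical mapping: 11 gesture classes -> letters
model = load_model("sign_cnn.h5")   # hypothetical trained model file

cap = cv2.VideoCapture(0)           # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess: BGR -> RGB, resize to the model's input size, scale to [0, 1].
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    x = cv2.resize(rgb, (224, 224)).astype(np.float32) / 255.0
    probs = model.predict(np.expand_dims(x, axis=0), verbose=0)[0]
    letter = LABELS[int(np.argmax(probs))]
    # Overlay the predicted letter on the live feed.
    cv2.putText(frame, letter, (10, 40), cv2.FONT_HERSHEY_SIMPLEX,
                1.2, (0, 255, 0), 2)
    cv2.imshow("Sign Language Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

The same loop structure would apply whichever of the compared models is used; only the preprocessing and the prediction call would change (e.g., an SSD MobileNet V2 or YOLO detector would return boxes plus class scores rather than a single class distribution).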
