Identification of Hand Gestures for Sign Language Interpretation Through LSTM and GRU
Author(s)
S.Pooja
Published Date
November 13, 2024
DOI
your-doi-here
Volume / Issue
Vol. 19 / Issue 2
Abstract
People with hearing disabilities have historically been neglected and have lacked access to resources that would let them communicate effectively. Contemporary technology offers a wide range of tools and applications designed to improve the lives of those who are hard of hearing. This work presents a thorough investigation of four machine learning models for identifying hand gestures of the American Sign Language (ASL) alphabet. The main goal of the research is to use modern methods to bridge the communication gap between people with hearing disabilities and those without. The models studied, a two-layer LSTM, a two-layer GRU, a GRU followed by an LSTM, and an LSTM followed by a GRU, were trained and evaluated on a large dataset of more than 87,000 images of hand gestures for the ASL alphabet. Extensive experiments were carried out in which the models' architectural characteristics were varied to obtain the highest possible recognition accuracy. The experimental results show that, among all the models, the combined GRU and LSTM configuration achieved the best accuracy of 99.04%. The two-layer GRU reached an accuracy of 98.56%, the two-layer LSTM produced the lowest accuracy at 98.56%, and the LSTM followed by GRU configuration achieved 98.78%.
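The abstract does not give the exact layer sizes or preprocessing used in the study, so the following is only a minimal sketch of the best-performing configuration (a GRU layer followed by an LSTM layer) in Keras. The input shape, hidden sizes, optimizer, and the assumption of 29 output classes (26 letters plus auxiliary signs, as in common ASL alphabet image datasets) are illustrative assumptions, not the authors' reported architecture.

```python
# Sketch of a GRU-then-LSTM classifier for ASL alphabet images.
# Assumption: each 64x64 grayscale image is read as a sequence of
# 64 rows (time steps) with 64 pixel values (features) per step.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 29   # assumed: 26 letters + space/delete/nothing
TIME_STEPS = 64    # assumed: image rows treated as a sequence
FEATURES = 64      # assumed: pixels per row

model = models.Sequential([
    layers.Input(shape=(TIME_STEPS, FEATURES)),
    layers.GRU(128, return_sequences=True),  # first recurrent layer: GRU
    layers.LSTM(128),                        # second recurrent layer: LSTM
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```

Swapping the order of the two recurrent layers gives the LSTM-then-GRU variant, and replacing both with the same layer type yields the two-layer LSTM or two-layer GRU baselines compared in the study.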