Sign Language Recognition using Deep Learning through LSTM and CNN
DOI: https://doi.org/10.15282/mekatronika.v5i1.9410
Keywords: Sign Language, Deaf, CNN, LSTM
Abstract
This study presents the application of deep learning to detect, recognize, and translate sign language. Understanding sign language is crucial for communication between deaf and mute people and wider society; it allows sign language users to communicate easily with others, bridging the gap between the two parties. The objectives of this work are to extract features from the dataset for the sign language recognition model, to formulate deep learning models, and to evaluate their classification performance on the sign language recognition task. First, we develop a methodology for efficient recognition of sign language. Next, we build systems using three different models, LSTM, CNN, and YOLOv5, and compare their real-time test results to select the model with the highest accuracy. The same dataset was used for all algorithms to ensure a fair comparison. YOLOv5 achieved the highest accuracy at 97%, followed by LSTM at 94% and CNN at 66.67%.
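To illustrate why an LSTM suits this task, the sketch below runs a single LSTM layer forward over a sequence of per-frame feature vectors (e.g., hand-keypoint coordinates) and returns a final hidden state that summarizes the whole gesture. This is a minimal NumPy illustration of the standard LSTM equations, not the paper's actual architecture; the sequence length, feature dimension, and hidden size are assumed values for demonstration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(x_seq, W, U, b, hidden):
    """Run one LSTM layer over a (T, D) frame sequence; return the final hidden state."""
    h = np.zeros(hidden)  # hidden state
    c = np.zeros(hidden)  # cell state
    for x in x_seq:
        z = W @ x + U @ h + b                  # all four gate pre-activations, stacked
        i = sigmoid(z[:hidden])                # input gate
        f = sigmoid(z[hidden:2 * hidden])      # forget gate
        g = np.tanh(z[2 * hidden:3 * hidden])  # candidate cell update
        o = sigmoid(z[3 * hidden:])            # output gate
        c = f * c + i * g                      # blend old memory with new candidate
        h = o * np.tanh(c)                     # expose gated memory as the hidden state
    return h

rng = np.random.default_rng(0)
T, D, H = 30, 42, 8  # assumed: 30 frames, 21 hand landmarks x 2 coords, 8 hidden units
x_seq = rng.normal(size=(T, D))
W = rng.normal(scale=0.1, size=(4 * H, D))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)

h_final = lstm_forward(x_seq, W, U, b, hidden=H)
print(h_final.shape)  # (8,)
```

In a full recognition system, this final hidden state would feed a softmax classifier over the sign vocabulary; frameworks such as Keras wrap these equations in a single `LSTM` layer.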
License
Copyright (c) 2023 University Malaysia Pahang Publishing
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.