Hand Gesture Recognition using Deep Learning
A. Geetha Devi1, M. Aparna2, N. Mounika3, U. Pavan Kalyan4, R. Meghna Nath5

1A. Geetha Devi*, Associate Professor, Department of ECE, PVP Siddhartha Institute of Technology, Kanuru, Vijayawada, India.
2Mekala Aparna, Department of ECE, PVP Siddhartha Institute of Technology, Kanuru, Vijayawada, India.
3Nagothi Mounika, Department of ECE, PVP Siddhartha Institute of Technology, Kanuru, Vijayawada, India.
4Udayagiri Pavan Kalyan, Department of ECE, PVP Siddhartha Institute of Technology, Kanuru, Vijayawada, India.
5Ramapuram Meghna Nath, Department of ECE, PVP Siddhartha Institute of Technology, Kanuru, Vijayawada, India.

Manuscript received on March 05, 2020. | Revised Manuscript received on March 16, 2020. | Manuscript published on April 30, 2020. | PP: 455-468 | Volume-9 Issue-4, April 2020. | Retrieval Number: D6765049420/2020©BEIESP | DOI: 10.35940/ijeat.D6765.049420
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Hearing-impaired individuals use sign language to communicate with others in their community. Because of its widespread use within that community, hard-of-hearing individuals understand it easily, but most hearing people do not know it. In this paper, a hand gesture recognition system is developed to overcome this barrier, so that those who do not know sign language can communicate easily with hard-of-hearing individuals. A computer vision-based system is designed to detect sign language. The datasets used in this paper consist of binary images, which are given to a convolutional neural network (CNN). The model extracts features from the images, classifies them, and recognises the gestures. The gestures used in this paper belong to American Sign Language (ASL). In the real-time system, input images are converted to binary images using the Hue, Saturation, and Value (HSV) colour model. In this model, 87.5% of the data is used for training and 12.5% for testing, and the accuracy obtained with this model is 97%.
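As a rough illustration of the HSV-based binarization step described above, the sketch below converts an RGB image to a binary mask by thresholding in HSV space. The threshold values (`h_max`, `s_min`, `v_min`) are illustrative skin-tone assumptions, not the paper's actual parameters, which the abstract does not specify.

```python
import colorsys

import numpy as np


def to_binary_hsv(rgb_image, h_max=0.14, s_min=0.15, v_min=0.35):
    """Binarize an RGB image by thresholding in HSV space.

    A pixel is set to 1 when its hue is low (reddish, roughly skin-toned)
    and its saturation and value exceed minimum bounds; everything else
    becomes background (0). Thresholds here are illustrative only.
    """
    height, width, _ = rgb_image.shape
    mask = np.zeros((height, width), dtype=np.uint8)
    for i in range(height):
        for j in range(width):
            # Normalize 8-bit RGB to [0, 1] before the HSV conversion.
            r, g, b = rgb_image[i, j] / 255.0
            h, s, v = colorsys.rgb_to_hsv(r, g, b)
            if h <= h_max and s >= s_min and v >= v_min:
                mask[i, j] = 1
    return mask
```

In practice an implementation would vectorize this (e.g. with OpenCV's HSV conversion and range thresholding) rather than loop per pixel; the loop form is kept here only to make the per-pixel rule explicit.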
Keywords: Sign language, HSV colour model, Convolutional Neural Networks.