American Sign Language to Text – Speech using Background Subtraction using Running Averages
Jyoti Tripathi1, Prafull Goel2, Raman Bhadauria3, Nikhil Yadav4, Keshav Gupta5

1Jyoti Tripathi, Department of Computer Science and Engineering G.B. Pant Government Engineering College, New Delhi, India.
2Prafull Goel, Department of Computer Science and Engineering G.B. Pant Government Engineering College, New Delhi, India.
3Raman Bhadauria, Department of Computer Science and Engineering G.B. Pant Government Engineering College, New Delhi, India.
4Nikhil Yadav, Department of Computer Science and Engineering G.B. Pant Government Engineering College, New Delhi, India.
5Keshav Gupta, Department of Computer Science and Engineering G.B. Pant Government Engineering College, New Delhi, India.
Manuscript received on November 22, 2019. | Revised Manuscript received on December 15, 2019. | Manuscript published on December 30, 2019. | PP: 2150-2156 | Volume-9 Issue-2, December, 2019. | Retrieval Number: B3621129219/2019©BEIESP | DOI: 10.35940/ijeat.B3621.129219
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: This paper proposes a system that converts American Sign Language hand gestures into text and speech, helping to bridge the communication gap between deaf-mute people and the rest of society. A system for this purpose generally has four modules: segmentation, feature extraction, classification, and text-to-speech. This paper focuses on an improved method for the segmentation and feature extraction processes to obtain better results, while using standard techniques for the other two modules. The proposed algorithm captures the initial 30 frames of live video from the system's webcam to construct a background model. It then finds the absolute difference between the current frame and the background model in order to obtain the foreground. Various features, such as the contour and the convex hull, are extracted to classify the gestures. The proposed algorithm has been tested under low and normal room-light conditions. Owing to the improved algorithms for the initial two modules, the overall performance of the proposed model is high, producing better results than other standard techniques such as HSV- and YCbCr-based segmentation. The system can be incorporated into web applications, mobile applications, and many other applications that translate gestures in conversations in real time.
Keywords: ASL, Background Subtraction, Running Averages, Segmentation, Feature Extraction, HSV, YCbCr.