Emotion Recognition System for Visually Impaired
E. Kodhai1, A. Pooveswari2, P. Sharmila3, N. Ramiya4

1E. Kodhai*, Associate Professor, Department of Computer Science and Engineering, Sri Manakula Vinayagar Engineering College, Puducherry, India.
2A. Pooveswari, Department of Computer Science and Engineering, Sri Manakula Vinayagar Engineering College, Puducherry, India.
3P. Sharmila, Department of Computer Science and Engineering, Sri Manakula Vinayagar Engineering College, Puducherry, India.
4N. Ramiya, Department of Computer Science and Engineering, Sri Manakula Vinayagar Engineering College, Puducherry, India.
Manuscript received on April 05, 2020. | Revised Manuscript received on April 25, 2020. | Manuscript published on April 30, 2020. | PP: 226-230 | Volume-9 Issue-4, April 2020. | Retrieval Number: D6733049420/2020©BEIESP | DOI: 10.35940/ijeat.D6733.049420
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Machine learning is one of the current technologies that use computers to perform tasks similar to humans. It is adopted in many applications such as face recognition, chatbots, self-driving cars, etc. This work focuses on emotion recognition, which is part of computer vision technology. Emotion recognition is mainly used in cybersecurity, online shopping, police investigations, interview processes, and so on. In this paper, an emotion recognition system is built for visually impaired people. Blind people cannot recognize the facial expressions of the person interacting with them. They can be provided with a device that recognizes the emotions of people through a camera and conveys the kind of emotion via headphones. The system is built on a Raspberry Pi computer, which performs the entire task and is portable for the user. The emotion recognition model is trained using a convolutional neural network (CNN) with the fer2013 dataset, which contains more than 30,000 images. The human face is detected using the OpenCV library, and features such as Histogram of Oriented Gradients (HOG) are also passed with the input images for better accuracy. The recognized emotion is then converted to speech using the Python library pyttsx3, which makes use of the eSpeak engine.
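The abstract mentions passing HOG features alongside the input images for better accuracy. As an illustration only (the paper's actual implementation and parameters are not given here), a minimal NumPy sketch of the HOG idea, assuming a grayscale image array and the standard unsigned-orientation, cell-histogram formulation, might look like:

```python
import numpy as np

def hog_features(image, n_bins=9, cell_size=8):
    """Compute a simplified Histogram of Oriented Gradients (HOG)
    descriptor for a grayscale image given as a 2-D array."""
    # Image gradients via central differences (rows = y, columns = x)
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees, as in standard HOG
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0

    h, w = image.shape
    cells_y, cells_x = h // cell_size, w // cell_size
    hist = np.zeros((cells_y, cells_x, n_bins))
    bin_width = 180.0 / n_bins

    # Accumulate gradient magnitude into orientation bins per cell
    for cy in range(cells_y):
        for cx in range(cells_x):
            ys, xs = cy * cell_size, cx * cell_size
            mag = magnitude[ys:ys + cell_size, xs:xs + cell_size]
            ori = orientation[ys:ys + cell_size, xs:xs + cell_size]
            bins = (ori // bin_width).astype(int) % n_bins
            for b in range(n_bins):
                hist[cy, cx, b] = mag[bins == b].sum()

    # L2-normalise each cell histogram and flatten to one feature vector
    norm = np.linalg.norm(hist, axis=2, keepdims=True) + 1e-6
    return (hist / norm).reshape(-1)

# A 48x48 input (the fer2013 image size) gives 6x6 cells x 9 bins = 324 features
feat = hog_features(np.random.rand(48, 48))
print(feat.shape)  # (324,)
```

In practice this vector would be concatenated with (or fed alongside) the raw pixel input to the CNN; production code would typically use a library implementation such as OpenCV's `HOGDescriptor` rather than hand-rolled loops.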
Keywords: Facial Expressions, FER2013, Raspberry Pi, Convolutional Neural Network (CNN).