A Compact Deep Learning Model for Robust Facial Expression Recognition
Reji R1, Sojan Lal P2, Akhil Mathew Philip3, Vishnu V4
1Reji R*, Research Scholar, School of Computer Sciences, M G University, Kottayam, Kerala, India.
2Sojan Lal P, Principal, MBITS, Nellimattom, Kerala, India.
3Akhil Mathew Philip, Assistant Professor, Saintgits College of Engineering, Kottayam, Kerala, India.
4Vishnu V, PG Student, Saintgits College of Engineering, Kottayam, Kerala, India.
Manuscript received on August 03, 2019. | Revised Manuscript received on August 28, 2019. | Manuscript published on August 30, 2019. | PP: 2956-2960 | Volume-8 Issue-6, August 2019. | Retrieval Number: F8724088619/2019©BEIESP | DOI: 10.35940/ijeat.F8724.088619
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: In this paper we propose a compact CNN model for facial expression recognition (FER). Expression recognition on low-quality images is particularly challenging because low-intensity expressions are difficult to distinguish at insufficient image resolution. Data collection for FER is expensive and time-consuming, and research indicates that images downloaded from the Internet are very useful for modeling and training expression recognition. We therefore use additional datasets, each representing a specific data source, to improve training; to prevent subjective annotation, each dataset is labeled with a different approach so that annotation quality is ensured. Recognizing the precise expression among the varied expressions of different people is a hard problem, and to address it we propose an Emotion Detection Model that extracts the emotion from a given input image. This work mainly builds on the psychological color circle-emotion relation [1] to identify the emotion in the input image: the whole image is first preprocessed and studied pixel by pixel, the color-circle combinations derived from this data yield a resulting color, and that color is directly correlated with a particular emotion, giving output of reasonable accuracy on psychological grounds. The major application of our work is predicting a person's emotion from face images or video frames; it can also be applied to gauging public opinion about a movie from video reaction posts on social media, or to understanding students' learning from their emotions. Human beings show their emotional states and intentions through facial expressions, which are powerful and natural indicators of emotional status. The approach used in this work successfully exploits temporal information and improves accuracy on public benchmark databases. The basic facial expressions are happiness, fear, anger, disgust, sadness and surprise [2]; contempt was subsequently added as a basic emotion. Having sufficient, well-labeled training data covering variations in population and environment is important for the design of a deep expression recognition system. Behaviors, poses, facial expressions, actions and speech are the channels that convey human emotion, and much research is being carried out to explore the correlation between these channels and emotions. This paper describes the development of a system that automatically recognizes the emotion represented on a face: a neural network approach combined with digital image processing is used to classify the universal basic emotions happiness, sadness, anger, disgust, surprise and fear. Colored frontal face images are given as input to the proposed system; each face image is preprocessed, feature point extraction is applied to obtain a set of selected feature points, and the values obtained from processing those points are fed to the neural network to recognize the emotion contained. The three major steps in any automatic deep face emotion recognition model are pre-processing, deep feature learning and deep feature classification.
Keywords: Image processing, Facial expression, Deep learning, Neural networks
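
The abstract names three stages (pre-processing, deep feature learning, deep feature classification) built around a compact CNN. The sketch below is only an illustration of that pipeline, not the authors' published architecture: the class name CompactFERNet, the 48x48 grayscale input size, the three small convolutional blocks and the 7-class output head are all assumptions made for demonstration.

```python
# Minimal sketch of a compact CNN for 7-class facial expression recognition.
# Architecture details (input size, channel widths, layer count) are assumed,
# not taken from the paper.
import torch
import torch.nn as nn


class CompactFERNet(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        # Deep feature learning: three small conv blocks keep the model compact.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),                  # 48x48 -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),                  # 24x24 -> 12x12
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling instead of large FC layers
        )
        # Deep feature classification: dropout plus a single fully connected layer.
        self.classifier = nn.Sequential(nn.Dropout(0.5), nn.Linear(128, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)


if __name__ == "__main__":
    # Pre-processing is assumed to produce normalized 1x48x48 face crops.
    dummy = torch.randn(4, 1, 48, 48)
    logits = CompactFERNet()(dummy)
    print(logits.shape)  # torch.Size([4, 7])
```

Global average pooling in place of large fully connected layers is one common way to keep the parameter count small; whether the authors' model does the same is not stated in the abstract.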
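The abstract also outlines a color circle-emotion step: pixel data is aggregated, the combined color is mapped onto a color circle, and that color is correlated with an emotion. The details of that mapping are not given, so the sketch below is purely illustrative: the COLOR_EMOTIONS table (a Plutchik-style color-to-emotion convention) and the nearest-color rule in dominant_emotion are hypothetical choices, not the paper's method.

```python
# Illustrative sketch only: the color anchors and the nearest-color rule are
# assumptions; the paper's actual color circle-emotion relation is not specified
# in the abstract.
import numpy as np
from PIL import Image

# Hypothetical color-to-emotion anchors on an RGB color circle (Plutchik-style).
COLOR_EMOTIONS = {
    "happiness": (255, 221, 0),    # yellow
    "anger":     (214, 40, 40),    # red
    "sadness":   (38, 84, 178),    # blue
    "disgust":   (106, 76, 147),   # purple
    "fear":      (46, 139, 87),    # green
    "surprise":  (102, 204, 238),  # light blue
}


def dominant_emotion(image_path: str) -> str:
    """Average the pixel colors of a face crop and return the emotion whose
    anchor color is closest in RGB space."""
    rgb = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float32)
    mean_color = rgb.reshape(-1, 3).mean(axis=0)
    distances = {
        emotion: float(np.linalg.norm(mean_color - np.asarray(anchor, dtype=np.float32)))
        for emotion, anchor in COLOR_EMOTIONS.items()
    }
    return min(distances, key=distances.get)
```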