Emotion Recognition Expressed on the Face By Multimodal Method using Deep Learning
Abdoul Matine Ousmane1, Tahirou Djara2, Médésu Sogbohossou3, Antoine Vianou4

1Abdoul Matine Ousmane*, Laboratoire d'Électrotechnique, de Télécommunication et d'Informatique Appliquée (LETIA/EPAC), Université d'Abomey-Calavi (UAC); Institut d'Innovation Technologique (IITECH)
2Tahirou Djara, Laboratoire d'Électrotechnique, de Télécommunication et d'Informatique Appliquée (LETIA/EPAC), Université d'Abomey-Calavi (UAC); Institut d'Innovation Technologique (IITECH)
3Médésu Sogbohossou, Laboratoire d'Électrotechnique, de Télécommunication et d'Informatique Appliquée (LETIA/EPAC), Université d'Abomey-Calavi (UAC)
4Antoine Vianou, Laboratoire d'Électrotechnique, de Télécommunication et d'Informatique Appliquée (LETIA/EPAC), Université d'Abomey-Calavi (UAC); Institut d'Innovation Technologique (IITECH)
Manuscript received on November 20, 2019. | Revised Manuscript received on December 08, 2019. | Manuscript published on December 30, 2019. | PP: 886-891 | Volume-9 Issue-2, December, 2019. | Retrieval Number: A1825109119/2020©BEIESP | DOI: 10.35940/ijeat.A1825.129219
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Emotion recognition plays a vital role in behavioral and emotional interactions between humans. It is a difficult task because it relies on predicting abstract emotional states from multimodal input data. Emotion recognition systems operate in three phases: first, input data are captured from the real world through sensors; next, emotional features are extracted; finally, these features are classified to predict the emotion. Deep learning methods enable such recognition in several ways. In this article, we focus on facial expression. We extract the emotional features expressed on the face in two different ways, using two different methods. On the one hand, we use Gabor filters to extract facial textures and appearances at different scales and orientations. On the other hand, we extract the movements of the facial muscles, namely the eyes, eyebrows, nose and mouth. We then classify each feature stream using convolutional neural networks (CNNs) and merge the results at the decision level. The convolutional network model was trained and validated on datasets.
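The abstract does not give implementation details for the Gabor stage. As a minimal sketch of what a Gabor filter bank over several scales and orientations can look like (the kernel size, wavelengths, and other parameter values below are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=4.0, theta=0.0, lambd=8.0, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel: a Gaussian envelope times a cosine carrier,
    responding to texture at one scale (lambd) and orientation (theta)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates to the filter orientation.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / lambd + psi)
    return envelope * carrier

def gabor_bank(scales=(4.0, 8.0, 16.0), n_orientations=8):
    """Bank covering several wavelengths and evenly spaced orientations."""
    thetas = [k * np.pi / n_orientations for k in range(n_orientations)]
    return [gabor_kernel(theta=t, lambd=s) for s in scales for t in thetas]

bank = gabor_bank()
print(len(bank))  # 3 scales x 8 orientations = 24 kernels
```

Convolving a face image with each kernel in the bank yields one response map per scale/orientation pair; such maps are a common input representation for a CNN classifier like the one described above.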
Keywords: CNN, Deep learning, Emotion recognition, Facial expressions.