Speech Emotion Recognition using Cross Correlational Database with Feature Fusion Methodology
G. Chandrika Sri Lakshmi1, K. Sri Sundeep2, G. Yaswanth3, Niranjan S R Mellacheruvu4, Swarna Kuchibhotla5, Venkata Naresh Mandhala6

1G. Chandrika Sri Lakshmi, B.Tech, Department of CSE, Koneru Lakshmaiah Education Foundation, Greenfields, Vaddeswaram, Guntur (A.P), India.
2K. Sri Sundeep, B.Tech, Department of CSE, Koneru Lakshmaiah Education Foundation, Greenfields, Vaddeswaram, Guntur (A.P), India.
3G. Yaswanth, B.Tech, Department of CSE, Koneru Lakshmaiah Education Foundation, Greenfields, Vaddeswaram, Guntur (A.P), India.
4Niranjan S R Mellacheruvu, Assistant Professor, Department of ECE, Vikas College of Engineering and Technology, Vijayawada Rural, Nunna (A.P), India.
5Swarna Kuchibhotla, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Greenfields, Vaddeswaram, Guntur (A.P), India.
6Venkata Naresh Mandhala, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Greenfields, Vaddeswaram, Guntur (A.P), India.

Manuscript received on 18 April 2019 | Revised Manuscript received on 25 April 2019 | Manuscript published on 30 April 2019 | PP: 1868-1874 | Volume-8 Issue-4, April 2019 | Retrieval Number: D7008048419/19©BEIESP
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Speech emotion recognition provides an interface for communication between humans and machines. Classifying emotion from speech signals is not an easy task, since conditions such as noisy data and changes in the speaker's voice must be taken into account: a person's voice is not the same when he/she is suffering from a cold or cough, or when he/she has consumed alcohol. In this paper we extract features such as Volume, Energy, MFCC, and Pitch in order to classify an utterance as happy, sad, angry, or neutral, with MFCC playing the major role in the classification. The cross-correlational (cross-corpus) setup is as follows: the model is first trained on the Berlin database and then tested on the Spanish database. The aim is to verify whether the model produces the same output (emotion) for both the Berlin and Spanish databases, i.e., to show that the model is independent of the language used. Accordingly, a function is developed in MATLAB that identifies the emotion of any audio file given as input [1].
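As an illustration of this pipeline, the following is a minimal MATLAB sketch (not the authors' released code). It assumes the Audio Toolbox (mfcc), the Signal Processing Toolbox (xcorr), and the Statistics and Machine Learning Toolbox (fitcecoc); the inputs berlinFiles/spanishFiles (cell arrays of file paths) and the categorical label vectors berlinLabels/spanishLabels are hypothetical placeholders, as are the helper function names. It extracts utterance-level volume, energy, pitch, and mean-MFCC features, trains a multiclass SVM on the Berlin data only, and evaluates it on the Spanish data to probe language independence.

% Cross-corpus sketch: train on Berlin, test on Spanish (assumed toolboxes/inputs above)
function demoCrossCorpus(berlinFiles, berlinLabels, spanishFiles, spanishLabels)
    % One feature vector per utterance from each corpus
    Xtrain = cell2mat(cellfun(@extractFeatures, berlinFiles, 'UniformOutput', false));
    Xtest  = cell2mat(cellfun(@extractFeatures, spanishFiles, 'UniformOutput', false));

    % Multiclass SVM trained on the Berlin data only
    model = fitcecoc(Xtrain, berlinLabels);

    % Test on the Spanish data (labels assumed categorical)
    pred = predict(model, Xtest);
    fprintf('Cross-corpus accuracy: %.2f%%\n', 100 * mean(pred == spanishLabels));
end

function fv = extractFeatures(wavPath)
    % Utterance-level volume, energy, pitch, and mean MFCCs
    [x, fs] = audioread(wavPath);
    x      = mean(x, 2);                   % mix to mono
    vol    = max(abs(x));                  % volume taken here as peak amplitude (assumption)
    energy = sum(x .^ 2) / numel(x);       % average energy
    f0     = pitchAutocorr(x, fs);         % rough pitch estimate (Hz)
    c      = mfcc(x, fs);                  % frames-by-coefficients matrix
    fv     = [vol, energy, f0, mean(c, 1)];
end

function f0 = pitchAutocorr(x, fs)
    % Crude autocorrelation-based pitch estimate over the whole utterance
    r = xcorr(x);
    r = r(numel(x):end);                   % keep non-negative lags
    minLag = round(fs / 400);              % search roughly 60-400 Hz
    maxLag = round(fs / 60);
    [~, k] = max(r(minLag:maxLag));
    f0 = fs / (minLag + k - 1);
end

A fuller treatment would use frame-level statistics and compare the SVM against the RDA, LDA, and kNN classifiers listed in the keywords, but the train-on-Berlin/test-on-Spanish structure shown here is the essence of the cross-correlational evaluation described above.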
Keywords: MFCC (Mel Frequency Cepstral Coefficient), SVM (Support Vector Machine), RDA (Regularized Discriminant Analysis), LDA (Linear Discriminant Analysis), kNN (k-Nearest Neighbor).

Scope of the Article: Frequency Selective Surface