Nexus DNN for Speech and Speaker Recognition
Chittampalli Sai Prakash1, J Sirisha Devi2

1Mr. Chittampalli Sai Prakash, Department of Computer Science and Engineering, Institute of Aeronautical Engineering, JNTU (H), Hyderabad, India.
2Dr. J Sirisha Devi, Department of Computer Science and Engineering, Institute of Aeronautical Engineering, JNTU (H), Hyderabad, India.
Manuscript received on November 22, 2019. | Revised Manuscript received on December 15, 2019. | Manuscript published on December 30, 2019. | PP: 2004-2007 | Volume-9 Issue-2, December 2019. | Retrieval Number: B2963129219/2019©BEIESP | DOI: 10.35940/ijeat.B2963.129219
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Over the years, many efforts have been made to improve recognition accuracy in automatic speech recognition (ASR) and speaker recognition (SRE), and many different technologies have been developed. Given the close relationship between the two tasks, researchers have proposed various ways of transferring techniques developed for one task to the other. In this paper, an open-source experimental framework for speech and speaker recognition is proposed. A unified model, Nexus-DNN, is then developed and trained jointly for speech and speaker recognition. Experimental results show that the combined model can effectively perform both ASR and SRE tasks.
Keywords: Automatic speech recognition, speaker recognition, Nexus-DNN, Word Error Rate, shared hidden layers
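The abstract and keywords describe Nexus-DNN as a single network with shared hidden layers trained jointly for ASR and SRE. The sketch below illustrates, in PyTorch, one common way such a joint model can be structured: a shared feed-forward trunk feeding two task-specific output heads (frame-level senone posteriors for ASR, speaker-identity posteriors for SRE), optimized with a summed cross-entropy loss. The layer sizes, class counts, feature dimension, and equal loss weighting are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class JointSpeechSpeakerDNN(nn.Module):
    """Multi-task DNN: shared hidden layers with ASR and SRE output heads (illustrative)."""
    def __init__(self, feat_dim=40, hidden_dim=1024, num_senones=3000, num_speakers=500):
        super().__init__()
        # Shared hidden layers learn representations useful for both tasks.
        self.shared = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Task-specific output heads.
        self.asr_head = nn.Linear(hidden_dim, num_senones)   # frame-level senone posteriors
        self.sre_head = nn.Linear(hidden_dim, num_speakers)  # speaker-identity posteriors

    def forward(self, feats):
        h = self.shared(feats)
        return self.asr_head(h), self.sre_head(h)

# Joint training step: sum the two cross-entropy losses over a batch of frames.
model = JointSpeechSpeakerDNN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

feats = torch.randn(32, 40)                     # dummy acoustic feature frames
senone_labels = torch.randint(0, 3000, (32,))   # dummy ASR targets
speaker_labels = torch.randint(0, 500, (32,))   # dummy SRE targets

asr_logits, sre_logits = model(feats)
loss = criterion(asr_logits, senone_labels) + criterion(sre_logits, speaker_labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

Because both heads backpropagate through the same shared layers, errors from either task update the common representation, which is the basic mechanism a jointly trained speech/speaker model of this kind relies on.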