Please use this identifier to cite or link to this item: https://rda.sliit.lk/handle/123456789/1158
Title: Robust Speech Analysis Framework Using CNN
Authors: RUPASINGHE, L.
Alahendra, A.M.A.T.N.
Ranathunge, R. A. D. O.
Perera, P.S.D.
Kulathunge, Y. N.
Keywords: speaker identification
stress analysis
speech emotion analysis
speaker fluency analysis
audio analysis
CNN
Issue Date: 9-Dec-2021
Publisher: 2021 3rd International Conference on Advancements in Computing (ICAC), SLIIT
Abstract: Voice is a central component of human communication and a rich source of information about a speaker's behavior. By listening to a person's voice, humans can recognize their identity, speech fluency, accent, emotions, and stress level. When speech fluency is poor, it is difficult to understand what the speaker is saying, and fluency varies from person to person. Using the specific information carried in a person's voice, human emotion, stress level, and identity can be recognized, since every person has unique vocal features that distinguish them from others. The proposed framework identifies a speaker's identity, emotions, speaking fluency, and stress level from their voice. It is built with machine learning techniques, with deep learning algorithms highlighted in this study: a Convolutional Neural Network (CNN) is the deep learning algorithm used, while the Fast Fourier Transform (FFT), Mel-Frequency Cepstral Coefficients (MFCC), and Random Forest are the machine learning techniques employed. The proposed AI-based framework provides comparatively accurate results in a user-friendly way.
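The abstract names MFCC feature extraction and a CNN classifier but gives no implementation details. The sketch below is a minimal illustration of that kind of pipeline, assuming librosa for MFCC extraction and TensorFlow/Keras for the CNN; the 40-coefficient setting, 128-frame window, and eight output classes are illustrative assumptions, not values from the paper.

```python
import numpy as np
import librosa
import tensorflow as tf

def extract_mfcc(path, sr=16000, n_mfcc=40, max_frames=128):
    """Load an audio clip and return a fixed-size MFCC feature matrix."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, frames)
    # Pad or truncate along the time axis so every clip has the same shape.
    if mfcc.shape[1] < max_frames:
        mfcc = np.pad(mfcc, ((0, 0), (0, max_frames - mfcc.shape[1])))
    else:
        mfcc = mfcc[:, :max_frames]
    return mfcc[..., np.newaxis]  # add a channel dimension for Conv2D

def build_cnn(input_shape=(40, 128, 1), n_classes=8):
    """A small CNN that classifies MFCC 'images' (e.g. emotion labels)."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

model = build_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

In the paper's multi-task setting, separate output heads or models would presumably handle identity, emotion, fluency, and stress; this sketch shows a single classification head only.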
URI: http://rda.sliit.lk/handle/123456789/1158
ISBN: 978-1-6654-0862-2/21
Appears in Collections: 3rd International Conference on Advancements in Computing (ICAC) | 2021
Research Papers - Dept of Computer Systems Engineering
Research Papers - IEEE

Files in This Item:
Robust_Speech_Analysis_Framework_Using_CNN.pdf (Adobe PDF, 1.56 MB), access restricted until 2050-12-31


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.