Language Independent Emotion Quantification using Non-linear Modelling of Speech
Abstract: Emotion extraction from speech is currently an important problem owing to its diverse applications. It therefore becomes necessary to build models that account for a speaker's speaking style, vocal tract information, timbral qualities and other congenital characteristics of the voice. The human speech production system, like most real-world systems, is nonlinear; hence the need arises to model speech information using nonlinear techniques. In this work, we have modeled the articulation system using nonlinear multifractal analysis. The multifractal spectral width and scaling exponents capture the complexity associated with the speech signals considered, and the multifractal spectra of different emotions are well distinguishable in the low-fluctuation region. The source characteristics have been quantified using nonlinear models such as multifractal detrended fluctuation analysis (MFDFA) and wavelet transform modulus maxima (WTMM). The results obtained from this study show effective clustering of emotions.
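To make the MFDFA quantities mentioned in the abstract concrete, the following is a minimal sketch of the standard MFDFA procedure (integrated profile, segment-wise polynomial detrending, q-th order fluctuation function, generalized Hurst exponents h(q), and the spectral width via the Legendre transform of τ(q) = q·h(q) − 1). This is a generic textbook-style implementation in NumPy, not the authors' actual code; the function names, scale and q ranges are illustrative choices.

```python
import numpy as np

def mfdfa(x, scales, qs, order=1):
    """Generalized Hurst exponents h(q) via multifractal DFA (generic sketch)."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())            # step 1: integrated profile
    Fq = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        n = len(profile) // s                    # step 2: non-overlapping segments
        segs = profile[:n * s].reshape(n, s)
        t = np.arange(s)
        var = np.empty(n)
        for v in range(n):                       # step 3: polynomial detrending
            coeffs = np.polyfit(t, segs[v], order)
            var[v] = np.mean((segs[v] - np.polyval(coeffs, t)) ** 2)
        for i, q in enumerate(qs):               # step 4: q-th order fluctuation
            if q == 0:                           # q = 0 uses the log-average form
                Fq[i, j] = np.exp(0.5 * np.mean(np.log(var)))
            else:
                Fq[i, j] = np.mean(var ** (q / 2.0)) ** (1.0 / q)
    # step 5: h(q) is the slope of log F_q(s) versus log s
    hq = np.array([np.polyfit(np.log(scales), np.log(Fq[i]), 1)[0]
                   for i in range(len(qs))])
    return hq

def spectrum_width(hq, qs):
    """Width of the multifractal spectrum f(alpha) via the Legendre transform."""
    tau = qs * hq - 1.0                          # mass exponent tau(q)
    alpha = np.gradient(tau, qs)                 # alpha(q) = d tau / d q
    return alpha.max() - alpha.min()             # wider spectrum = more multifractal
```

As a sanity check, monofractal white noise should give h(2) near 0.5 and a comparatively narrow spectrum, whereas emotional speech frames would typically show a broader spectrum; the width is what the paper uses to separate emotions.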
Keywords: Emotional speech, categorization, multifractal detrended fluctuation analysis (MFDFA), wavelet transform modulus maxima (WTMM)
Cite this Article: Uddalok Sarkar, Sayan Nag, Chirayata Bhattacharyaa, Shankha Sanyal, Archi Banerjee, Ranjan Sengupta, Dipak Ghosh. Language Independent Emotion Quantification using Nonlinear Modeling of Speech. Journal of Image Processing & Pattern Recognition Progress. 2019; 6(3): 24–30p.