
Efficient Feature Extraction for Fear State Analysis from Human Voice


  • Siksha ‘O’ Anusandhan University, Near PNB Bank Jagmohan Nagar, Khandagiri, Bhubaneswar - 751030, Odisha, India


Background/Objectives: The analysis of emotion in human speech has been studied for a long time. As such study and recognition benefit society in many respects, we analyze two closely related emotional states. Methods/Statistical Analysis: ‘Fear’ and ‘Nervousness’ are analyzed in comparison with normal voice; the correlation between these two emotions is found to be very close. The voice samples are in the Oriya language. The popular speech features, Mel-frequency cepstral coefficients (MFCCs), are used. As the fundamental frequency is unique from voice to voice, it is a suitable feature for discriminating between similar voice signals. Findings: The combination of these two features outperformed classification based on a single feature. In addition, performance has been measured using the log-likelihood ratio. For recognition, the Gaussian mixture model (GMM) has been selected and tested with these features. Novelty/Improvement: The MFCCs alone yield 81.33% accuracy, whereas the combined features yield 86.01%, as evidenced in the results section.


Correlation Coefficient, Feature Extraction, Fear State, Gaussian Mixture Model, Human Voice, Log-likelihood Ratio, Mel-frequency Cepstral Coefficient.
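The pipeline described in the abstract combines a pitch (fundamental frequency) feature with cepstral features and scores classes by log-likelihood under per-class Gaussian mixture models. The following is a minimal numpy sketch of two of those building blocks, not the authors' implementation: F0 estimation via the classical autocorrelation method, and a log-likelihood ratio computed under single-component Gaussians standing in for full GMMs. All names, the sampling rate, and the pitch-search range are illustrative assumptions.

```python
import numpy as np

FS = 16000  # assumed sampling rate (Hz); not specified in the paper


def estimate_f0_autocorr(frame, fs=FS, fmin=60.0, fmax=400.0):
    """Estimate fundamental frequency of one voiced frame by picking the
    autocorrelation peak inside a plausible pitch-lag range."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)   # lag range for 60-400 Hz
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag


def log_likelihood_ratio(x, mu0, var0, mu1, var1):
    """Log-likelihood ratio of a feature vector under two diagonal
    Gaussians (a one-component stand-in for the paper's per-class GMMs):
    positive values favour class 0, negative values class 1."""
    def ll(x, mu, var):
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return ll(x, mu0, var0) - ll(x, mu1, var1)


# Synthetic voiced frame: a 150 Hz tone, 40 ms long
t = np.arange(0, 0.04, 1 / FS)
frame = np.sin(2 * np.pi * 150 * t)
f0 = estimate_f0_autocorr(frame)  # close to 150 Hz
```

In a full system, the per-frame F0 values would be appended to the MFCC vectors and each class modeled by a multi-component GMM fitted with EM, with the log-likelihood ratio deciding between the fear and neutral models.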






This work is licensed under a Creative Commons Attribution 3.0 License.