TY - JOUR
TI - Speech Emotion Recognition Based on Fusion Method
JO - Journal of Information Systems and Telecommunication (JIST)
JA - Iranian Academic Center for Education, Culture and Research
LA - en
SN - 2322-1437
AU - Sara Motamed
AU - Saeed Setayeshi
AU - Azam Rabiee
AU - Arash Sharifi
AD - Assistant Professor, Department of Computer, Fouman & Shaft Unit, Islamic Azad University, Fuman, Iran
AD - Amirkabir
AD - Islamic Azad University Isfahan
AD - Science and Research Branch, Islamic Azad University, Tehran, Iran
Y1 - 2017
PY - 2017
VL - 17
IS - 1
SP - 1
EP - 10
KW - Speech Emotion Recognition
KW - Mel Frequency Cepstral Coefficient (MFCC)
KW - Fixed and Variable Structures Stochastic Automata
KW - Multi-constraint
KW - Fusion Method
DO - 10.7508/jist.2017.17.007
N2 - Speech emotion signals are the quickest and most natural medium in human communication, leading researchers to develop speech emotion recognition as a fast and efficient technique for human-machine interaction. This paper introduces a new classification method that applies a multi-constraint partitioning approach to emotional speech signals. To classify emotional speech signals, feature vectors are extracted using Mel frequency cepstral coefficients (MFCC), autocorrelation function coefficients (ACFC), and a combination of these two models. This study examines how the number of features and the fusion method affect the emotional speech recognition rate. The proposed model has been compared with an MLP recognition model. Results reveal that the proposed algorithm has a powerful capability to identify and explore human emotion.
UR - http://rimag.ir/fa/Article/15013
L1 - http://rimag.ir/fa/Article/Download/15013
ID - 15013
ER -