Abstract: Speech Emotion Recognition (SER) is one of the essential perceptual abilities humans rely on to understand a situation and decide how to interact with others; in recent years, there have therefore been efforts to add emotion recognition to human-machine communication systems. Since the SER process relies on labeled data, databases are essential to it, and incomplete, low-quality, or defective data may lead to inaccurate predictions. In this paper, we fix the inconsistencies in the Sharif Emotional Speech Database (ShEMO), a Persian database, by using an Automatic Speech Recognition (ASR) system and investigating the effect of Farsi language models obtained from accessible Persian text corpora. We also introduce a Persian/Farsi ASR-based SER system that uses linguistic features of the ASR outputs together with deep learning-based models.
Abstract: Speech Emotion Recognition (SER) is of great importance in Human-Computer Interaction (HCI), as it provides a deeper understanding of the situation and leads to better interaction. In recent years, various machine learning and deep learning algorithms have been developed to improve SER techniques. Recognition of emotions also depends on the manner of expression, which varies across languages. In this article, to further study this factor in Farsi, we examine various deep learning techniques on the ShEMO dataset. Using low- and high-level descriptors of the speech signal with different deep networks and machine learning techniques, an Unweighted Average Recall (UAR) of 65.20% is achieved with an accuracy of 78.29%.
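For reference, UAR (also called balanced accuracy) is the unweighted mean of per-class recall, so it is not inflated by majority classes the way overall accuracy can be on an imbalanced corpus such as ShEMO. The sketch below is a minimal illustration of that difference; the label arrays are toy placeholders, not results or code from the paper.

# Minimal sketch contrasting overall accuracy with Unweighted Average Recall (UAR).
# The label arrays are illustrative placeholders only.
import numpy as np

def accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(y_true == y_pred)

def unweighted_average_recall(y_true, y_pred):
    """Mean of per-class recall, with every class weighted equally."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return np.mean(recalls)

# Imbalanced toy example: class 0 dominates, so accuracy looks better than UAR.
y_true = [0, 0, 0, 0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 0, 0, 0, 0, 1, 0, 0]
print(f"accuracy = {accuracy(y_true, y_pred):.2f}")                  # 0.78
print(f"UAR      = {unweighted_average_recall(y_true, y_pred):.2f}")  # 0.50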