Title: Speech emotion recognition via multiple fusion under spatial–temporal parallel network

Authors: Gan, Chenquan; García López, Salvador

Keywords: Speech emotion recognition; Speech spectrum; Spatial–temporal parallel network; Multiple fusion

Acknowledgments: The authors are grateful to the anonymous reviewers and the editor for their valuable comments and suggestions. This work was supported by the National Natural Science Foundation of China (No. 61702066) and the Chongqing Research Program of Basic Research and Frontier Technology, China (No. cstc2021jcyj-msxmX0761), and partially supported by Project PID2020-119478GB-I00 funded by MICINN/AEI/10.13039/501100011033 and by Project A-TIC-434-UGR20 funded by FEDER/Junta de Andalucía, Consejería de Transformación Económica, Industria, Conocimiento y Universidades.

Abstract: Speech, as a necessary way to express emotions, plays a vital role in human communication. As research on emotion recognition in human–computer interaction deepens, speech emotion recognition (SER) has become an essential task for improving the human–computer interaction experience. When extracting emotion features from speech, methods that cut the speech spectrum destroy the continuity of speech, while cascaded architectures that keep the spectrum whole cannot extract spectrum information from the temporal and spatial domains simultaneously. To this end, we propose a spatial–temporal parallel network for speech emotion recognition that does not cut the speech spectrum. To further mix the temporal and spatial features, we design a novel fusion method (called multiple fusion) that combines concatenate fusion and an ensemble strategy. Finally, experimental results on five datasets demonstrate that the proposed method outperforms state-of-the-art methods.

Date deposited: 2023-10-23T10:15:04Z
Date issued: 2023-10-28
Type: journal article

Citation: C. Gan et al. Speech emotion recognition via multiple fusion under spatial–temporal parallel network. Neurocomputing 555 (2023) 126623. https://doi.org/10.1016/j.neucom.2023.126623

URI: https://hdl.handle.net/10481/85177
DOI: 10.1016/j.neucom.2023.126623
Language: eng
License: Attribution 4.0 International (CC BY 4.0), http://creativecommons.org/licenses/by/4.0/
Access: open access
Publisher: Elsevier
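
Note: To make the parallel design described in the abstract concrete, below is a minimal sketch of a spatial–temporal parallel network with concatenate fusion. It is not the authors' published architecture: the CNN spatial branch, the BiLSTM temporal branch, all layer sizes, and the name SpatialTemporalParallelNet are illustrative assumptions, and the ensemble half of the paper's multiple fusion is omitted for brevity.

import torch
import torch.nn as nn

class SpatialTemporalParallelNet(nn.Module):
    # Illustrative sketch only: a small CNN stands in for the spatial branch
    # and a BiLSTM for the temporal branch; both see the whole (uncut)
    # spectrogram, and their embeddings are joined by concatenate fusion.
    def __init__(self, n_mels=128, n_classes=7):
        super().__init__()
        # Spatial branch: 2-D convolutions over the full spectrogram.
        self.spatial = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # -> (B, 32, 1, 1)
        )
        # Temporal branch: BiLSTM over the frame axis of the spectrogram.
        self.temporal = nn.LSTM(input_size=n_mels, hidden_size=64,
                                batch_first=True, bidirectional=True)
        # Concatenate fusion of the two branch embeddings (32 + 128 dims).
        self.classifier = nn.Linear(32 + 128, n_classes)

    def forward(self, spec):                  # spec: (B, n_mels, n_frames)
        s = self.spatial(spec.unsqueeze(1)).flatten(1)    # (B, 32)
        t, _ = self.temporal(spec.transpose(1, 2))        # (B, T, 128)
        t = t[:, -1, :]                                   # last time step
        return self.classifier(torch.cat([s, t], dim=1))  # fused logits

# Usage on a hypothetical batch of 8 mel-spectrograms (128 mels, 300 frames):
# model = SpatialTemporalParallelNet()
# logits = model(torch.randn(8, 128, 300))   # -> (8, 7) emotion logits

Because the two branches run in parallel rather than in cascade, the spatial and temporal views of the spectrum are extracted simultaneously and only merged at the fusion step, which is the property the abstract contrasts with cascaded structures.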