Show simple item record

dc.contributor.author    Gan, Chenquan
dc.contributor.author    García López, Salvador
dc.date.accessioned    2023-10-23T10:15:04Z
dc.date.available    2023-10-23T10:15:04Z
dc.date.issued    2023-10-28
dc.identifier.citation    C. Gan et al. Speech emotion recognition via multiple fusion under spatial–temporal parallel network. Neurocomputing 555 (2023) 126623. [https://doi.org/10.1016/j.neucom.2023.126623]    es_ES
dc.identifier.uri    https://hdl.handle.net/10481/85177
dc.description    The authors are grateful to the anonymous reviewers and the editor for their valuable comments and suggestions. This work was supported by the National Natural Science Foundation of China (No. 61702066), by the Chongqing Research Program of Basic Research and Frontier Technology, China (No. cstc2021jcyj-msxmX0761), and partially by Project PID2020-119478GB-I00 funded by MICINN/AEI/10.13039/501100011033 and by Project A-TIC-434-UGR20 funded by FEDER/Junta de Andalucía, Consejería de Transformación Económica, Industria, Conocimiento y Universidades.    es_ES
dc.description.abstract    Speech, as a primary way of expressing emotions, plays a vital role in human communication. As research on emotion recognition in human-computer interaction deepens, speech emotion recognition (SER) has become an essential task for improving the human-computer interaction experience. When extracting emotion features from speech, methods that cut the speech spectrum destroy its continuity, while cascaded structures that avoid cutting cannot extract speech spectrum information from the temporal and spatial domains simultaneously. To this end, we propose a spatial–temporal parallel network for speech emotion recognition that does not cut the speech spectrum. To further mix the temporal and spatial features, we design a novel fusion method (called multiple fusion) that combines concatenate fusion with an ensemble strategy. Finally, experimental results on five datasets demonstrate that the proposed method outperforms state-of-the-art methods. (An illustrative sketch of the parallel architecture and multiple fusion appears after this record.)    es_ES
dc.description.sponsorship    National Natural Science Foundation of China 61702066    es_ES
dc.description.sponsorship    Chongqing Research Program of Basic Research and Frontier Technology, China cstc2021jcyj-msxmX0761    es_ES
dc.description.sponsorship    MICINN/AEI/10.13039/501100011033: PID2020-119478GB-I00    es_ES
dc.description.sponsorship    FEDER/Junta de Andalucía A-TIC-434-UGR20    es_ES
dc.language.iso    eng    es_ES
dc.publisher    Elsevier    es_ES
dc.rights    Atribución 4.0 Internacional    *
dc.rights.uri    http://creativecommons.org/licenses/by/4.0/    *
dc.subject    Speech emotion recognition    es_ES
dc.subject    Speech spectrum    es_ES
dc.subject    Spatial–temporal parallel network    es_ES
dc.subject    Multiple fusion    es_ES
dc.title    Speech emotion recognition via multiple fusion under spatial–temporal parallel network    es_ES
dc.type    journal article    es_ES
dc.rights.accessRights    open access    es_ES
dc.identifier.doi    10.1016/j.neucom.2023.126623
dc.type.hasVersion    VoR    es_ES
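
The abstract above describes a spatial–temporal parallel network with multiple fusion but records no implementation details. The sketch below is an illustration only, not the authors' code: it assumes a log-Mel spectrogram input, a small 2-D CNN as the spatial branch, a BiLSTM as the temporal branch, and reads "multiple fusion" as concatenating the branch features while also ensembling per-branch predictions. All layer sizes, the number of emotion classes, and the PyTorch framing are assumptions made for this example.

# Illustrative sketch (not the published implementation) of a
# spatial-temporal parallel network with concatenate fusion + ensemble.
import torch
import torch.nn as nn

class SpatialTemporalParallelNet(nn.Module):
    def __init__(self, n_mels=128, n_classes=4, hidden=128):
        super().__init__()
        # Spatial branch: 2-D CNN applied to the whole spectrogram (no cutting).
        self.spatial = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),               # -> (B, 64, 1, 1)
        )
        # Temporal branch: BiLSTM over the frame sequence.
        self.temporal = nn.LSTM(input_size=n_mels, hidden_size=hidden,
                                batch_first=True, bidirectional=True)
        # One head per branch (for the ensemble) and one head on the
        # concatenated features (concatenate fusion).
        self.head_spatial = nn.Linear(64, n_classes)
        self.head_temporal = nn.Linear(2 * hidden, n_classes)
        self.head_fused = nn.Linear(64 + 2 * hidden, n_classes)

    def forward(self, spec):                       # spec: (B, 1, n_mels, T)
        f_s = self.spatial(spec).flatten(1)        # (B, 64)
        frames = spec.squeeze(1).transpose(1, 2)   # (B, T, n_mels)
        out, _ = self.temporal(frames)
        f_t = out[:, -1, :]                        # last-step summary, (B, 2*hidden)
        logits = [self.head_spatial(f_s),
                  self.head_temporal(f_t),
                  self.head_fused(torch.cat([f_s, f_t], dim=1))]
        # Ensemble strategy: average the per-head class probabilities.
        return torch.stack([l.softmax(dim=1) for l in logits]).mean(dim=0)

model = SpatialTemporalParallelNet()
probs = model(torch.randn(2, 1, 128, 300))         # two uncut spectrograms
print(probs.shape)                                  # torch.Size([2, 4])
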


Files in this item

[PDF]

This item appears in the following collection(s)


Except where otherwise noted, this item's license is described as Atribución 4.0 Internacional (Creative Commons Attribution 4.0 International)