Simple item record

dc.contributor.author: Martín Doñas, Juan M.
dc.contributor.author: Peinado Herreros, Antonio Miguel
dc.contributor.author: Gómez García, Ángel Manuel
dc.date.accessioned: 2024-02-09T07:52:29Z
dc.date.available: 2024-02-09T07:52:29Z
dc.date.issued: 2022
dc.identifier.uri: https://hdl.handle.net/10481/88765
dc.description.abstract: Speech-related applications on mobile devices require high-performance speech enhancement algorithms to tackle challenging, noisy real-world environments. In addition, current mobile devices often embed several microphones, allowing them to exploit spatial information. The main goal of this Thesis is the development of online multichannel speech enhancement algorithms for speech services in mobile devices. The proposed techniques use multichannel signal processing to increase the noise reduction performance without degrading the quality of the speech signal. Moreover, deep neural networks are applied in specific parts of the algorithm where modeling by classical methods would be, otherwise, unfeasible or very limiting. Our contributions focus on different noisy environments where these mobile speech technologies can be applied. These include dual-microphone smartphones in noisy and reverberant environments and general multi-microphone devices for speech enhancement and target source separation. Moreover, we study the training of deep learning methods for speech processing using perceptual considerations. Our contributions successfully integrate signal processing and deep learning methods to exploit spectral, spatial, and temporal speech features jointly. As a result, the proposed techniques provide us with a manifold framework for robust speech processing under very challenging acoustic environments, thus allowing us to improve perceptual quality and intelligibility measures. [es_ES]
dc.description.sponsorship: Project PID2019-104206GB-I00 funded by MCIN/AEI/10.13039/501100011033 [es_ES]
dc.language.iso: eng [es_ES]
dc.publisher: ISCA - Iberspeech 2022 [es_ES]
dc.title: Online Multichannel Speech Enhancement combining Statistical Signal Processing and Deep Neural Networks: A Ph.D. Thesis Overview [es_ES]
dc.type: info:eu-repo/semantics/article [es_ES]
dc.rights.accessRights: info:eu-repo/semantics/openAccess [es_ES]
dc.identifier.doi: 10.21437/IberSPEECH.2022-45
dc.type.hasVersion: info:eu-repo/semantics/publishedVersion [es_ES]


Files in this item

[PDF]

This item appears in the following collection(s)
