Online Multichannel Speech Enhancement combining Statistical Signal Processing and Deep Neural Networks: A Ph.D. Thesis Overview

Authors: Martín Doñas, Juan M.; Peinado Herreros, Antonio Miguel; Gómez García, Ángel Manuel

Abstract: Speech-related applications on mobile devices require high-performance speech enhancement algorithms to tackle challenging, noisy real-world environments. In addition, current mobile devices often embed several microphones, allowing them to exploit spatial information. The main goal of this thesis is the development of online multichannel speech enhancement algorithms for speech services on mobile devices. The proposed techniques use multichannel signal processing to increase noise reduction performance without degrading the quality of the speech signal. Moreover, deep neural networks are applied in those parts of the algorithm where modeling by classical methods would otherwise be unfeasible or very limiting. Our contributions focus on different noisy environments where these mobile speech technologies can be applied, including dual-microphone smartphones in noisy and reverberant environments and general multi-microphone devices for speech enhancement and target source separation. Moreover, we study the training of deep learning methods for speech processing using perceptual considerations. Our contributions successfully integrate signal processing and deep learning methods to jointly exploit spectral, spatial, and temporal speech features. As a result, the proposed techniques provide a versatile framework for robust speech processing in very challenging acoustic environments, improving perceptual quality and intelligibility measures.

Date issued: 2022
Date deposited: 2024-02-09T07:52:29Z
Type: Article (info:eu-repo/semantics/article)
URI: https://hdl.handle.net/10481/88765
DOI: 10.21437/IberSPEECH.2022-45
Language: English (eng)
Access: Open access (info:eu-repo/semantics/openAccess)
Publisher: ISCA - IberSPEECH 2022
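
The abstract does not detail the specific algorithms of the thesis. As one illustrative instance of how a deep neural network can be combined with statistical multichannel filtering in the STFT domain, the sketch below shows mask-based MVDR beamforming, where a time-frequency mask (here a random stand-in for a DNN output) drives the spatial covariance estimates. The function names, array shapes, and the choice of the Souden-style MVDR formulation are assumptions for illustration only, not the thesis method.

```python
# Minimal sketch (assumed example, not the thesis algorithm): mask-based MVDR
# beamforming, a common way to pair a DNN mask estimator with a statistical
# multichannel filter. Only NumPy is used; the mask is a placeholder for a
# neural network output.
import numpy as np

def mvdr_weights(phi_speech, phi_noise, ref_mic=0):
    """MVDR (Souden formulation) beamformer weights per frequency bin, shape (F, M)."""
    F, M, _ = phi_speech.shape
    w = np.zeros((F, M), dtype=complex)
    for f in range(F):
        # Regularize the noise covariance for numerical stability.
        phi_n = phi_noise[f] + 1e-6 * np.trace(phi_noise[f]).real / M * np.eye(M)
        numerator = np.linalg.solve(phi_n, phi_speech[f])       # Phi_n^{-1} Phi_s
        w[f] = numerator[:, ref_mic] / (np.trace(numerator) + 1e-12)
    return w

def mask_based_mvdr(stft_multichannel, speech_mask):
    """Enhance a multichannel STFT (M, F, T) using a time-frequency mask (F, T)."""
    M, F, T = stft_multichannel.shape
    X = stft_multichannel.transpose(1, 2, 0)                     # (F, T, M)
    # Mask-weighted spatial covariance matrices for speech and noise.
    phi_s = np.einsum('ft,ftm,ftn->fmn', speech_mask, X, X.conj())
    phi_n = np.einsum('ft,ftm,ftn->fmn', 1.0 - speech_mask, X, X.conj())
    w = mvdr_weights(phi_s, phi_n)                               # (F, M)
    return np.einsum('fm,ftm->ft', w.conj(), X)                  # enhanced STFT (F, T)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M, F, T = 2, 257, 100                                        # e.g. a dual-microphone device
    noisy_stft = rng.standard_normal((M, F, T)) + 1j * rng.standard_normal((M, F, T))
    mask = rng.uniform(0.0, 1.0, (F, T))                         # stand-in for a DNN speech mask
    enhanced = mask_based_mvdr(noisy_stft, mask)
    print(enhanced.shape)                                        # (257, 100)
```

In such a scheme the network supplies the spectral information (which time-frequency bins are dominated by speech), while the beamformer exploits the spatial information across microphones, which is in line with the abstract's description of jointly exploiting spectral, spatial, and temporal speech features.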