Online Multichannel Speech Enhancement Based on Recursive EM and DNN-Based Speech Presence Estimation
Metadata
Author
Martín-Doñas, Juan M.; Jensen, Jesper; Tan, Zheng-Hua; Gómez García, Ángel Manuel; Peinado Herreros, Antonio Miguel
Publisher
IEEE
Subject
Deep learning (DL); Speech enhancement; Beamforming
Date
2020-11-09
Bibliographic reference
Martín-Doñas, J. M., Jensen, J., Tan, Z. H., Gomez, A. M., & Peinado, A. M. (2020). Online Multichannel Speech Enhancement Based on Recursive EM and DNN-Based Speech Presence Estimation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28, 3080-3094.
Sponsorship
Spanish MICINN/FEDER (Grant Number: PID2019-104206GB-I00); Spanish Ministry of Universities National Program FPU (Grant Number: FPU15/04161)
Abstract
This article presents a recursive expectation-maximization algorithm for online multichannel speech enhancement. A deep neural network mask estimator is used to compute the speech presence probability, which is then improved by means of statistical spatial models of the noisy speech and noise signals. The clean speech signal is estimated using beamforming, single-channel linear postfiltering and speech presence masking. The clean speech statistics and speech presence probabilities are finally used to compute the acoustic parameters for beamforming and postfiltering by means of maximum likelihood estimation. This iterative procedure is carried out on a frame-by-frame basis. The algorithm integrates the different estimates in a common statistical framework suitable for online scenarios. Moreover, our method can successfully exploit spectral, spatial and temporal speech properties. Our proposed algorithm is tested in different noisy environments using the multichannel recordings of the CHiME-4 database. The experimental results show that our method outperforms other related state-of-the-art approaches in noise reduction performance, while allowing low-latency processing for real-time applications.
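To make the processing chain described in the abstract more concrete, the sketch below shows one plausible way a DNN-derived speech presence probability could drive recursive per-frame spatial covariance updates, an MVDR beamformer, and a simple presence-based postfilter mask. This is an illustration under stated assumptions, not the authors' implementation: the function names, the smoothing factor alpha, and the eigenvector-based steering estimate are introduced here for clarity only.

import numpy as np

def mvdr_weights(phi_noise, steering):
    # MVDR weights: w = Phi_n^{-1} d / (d^H Phi_n^{-1} d)
    num = np.linalg.solve(phi_noise, steering)
    return num / (steering.conj() @ num)

def enhance_frame(y, p_speech, state, alpha=0.95, eps=1e-6):
    # y: one STFT frame, shape (channels, freq_bins), complex
    # p_speech: speech presence probability per frequency bin, shape (freq_bins,)
    # state: (phi_x, phi_n), recursive speech/noise covariances, shape (freq_bins, ch, ch)
    phi_x, phi_n = state
    n_ch, n_freq = y.shape
    s_hat = np.empty(n_freq, dtype=complex)
    for f in range(n_freq):
        yy = np.outer(y[:, f], y[:, f].conj())
        # Online covariance updates weighted by the (DNN-based) presence probability
        phi_x[f] = alpha * phi_x[f] + (1 - alpha) * p_speech[f] * yy
        phi_n[f] = alpha * phi_n[f] + (1 - alpha) * (1 - p_speech[f]) * yy
        # Steering vector approximated by the principal eigenvector of the speech covariance
        _, vecs = np.linalg.eigh(phi_x[f])
        d = vecs[:, -1]
        w = mvdr_weights(phi_n[f] + eps * np.eye(n_ch), d)
        # Beamform, then apply the presence probability as a crude single-channel postfilter
        s_hat[f] = p_speech[f] * (w.conj() @ y[:, f])
    return s_hat, (phi_x, phi_n)

In an online scenario the state would simply be carried from frame to frame (initialized, for example, with small scaled identity matrices per frequency bin), so each incoming STFT frame is processed once with constant latency, mirroring the frame-by-frame recursive estimation the article describes.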