
Online Multichannel Speech Enhancement combining Statistical Signal Processing and Deep Neural Networks: A Ph.D. Thesis Overview

[PDF] martindonas22b_iberspeech.pdf (1.094Mb)
Identifiers
URI: https://hdl.handle.net/10481/88765
DOI: 10.21437/IberSPEECH.2022-45
Author
Martín Doñas, Juan M.; Peinado Herreros, Antonio Miguel; Gómez García, Ángel Manuel
Publisher
ISCA - Iberspeech 2022
Date
2022
Sponsorship
Project PID2019-104206GB-I00 funded by MCIN/AEI/10.13039/501100011033
Abstract
Speech-related applications on mobile devices require high-performance speech enhancement algorithms to tackle challenging, noisy real-world environments. In addition, current mobile devices often embed several microphones, allowing them to exploit spatial information. The main goal of this Thesis is the development of online multichannel speech enhancement algorithms for speech services in mobile devices. The proposed techniques use multichannel signal processing to increase the noise reduction performance without degrading the quality of the speech signal. Moreover, deep neural networks are applied in specific parts of the algorithm where modeling by classical methods would otherwise be unfeasible or very limiting. Our contributions focus on different noisy environments where these mobile speech technologies can be applied. These include dual-microphone smartphones in noisy and reverberant environments and general multi-microphone devices for speech enhancement and target source separation. Moreover, we study the training of deep learning methods for speech processing using perceptual considerations. Our contributions successfully integrate signal processing and deep learning methods to exploit spectral, spatial, and temporal speech features jointly. As a result, the proposed techniques provide a versatile framework for robust speech processing under very challenging acoustic environments, thus improving perceptual quality and intelligibility measures.
Collections
  • DTSTC - Comunicaciones congresos, conferencias, ...
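The abstract above describes online multichannel enhancement that combines statistical beamforming with neural networks. As a rough illustration only (not the thesis' actual algorithm), the sketch below shows a per-frequency-bin online MVDR beamformer in which a mask, which in such systems is typically produced by a DNN, weights a recursive noise-covariance update. All function and parameter names here are hypothetical.

```python
import numpy as np

def online_mvdr(frames, noise_mask, steering, alpha=0.95, eps=1e-6):
    """Online MVDR beamforming for one STFT frequency bin (illustrative sketch).

    frames:     (T, M) complex STFT coefficients, T frames and M microphones.
    noise_mask: (T,) per-frame probability that the frame is noise-dominated
                (in mask-based systems this estimate comes from a DNN).
    steering:   (M,) steering vector toward the target source.
    alpha:      forgetting factor of the recursive noise-covariance update.
    """
    T, M = frames.shape
    Rn = eps * np.eye(M, dtype=complex)   # running noise covariance estimate
    out = np.zeros(T, dtype=complex)
    for t in range(T):
        x = frames[t]
        # Mask-weighted recursive (online) update of the noise covariance.
        Rn = alpha * Rn + (1 - alpha) * noise_mask[t] * np.outer(x, x.conj())
        Rn_inv = np.linalg.inv(Rn + eps * np.eye(M))
        # MVDR weights: w = Rn^{-1} d / (d^H Rn^{-1} d), distortionless
        # toward the steering direction d.
        w = Rn_inv @ steering / (steering.conj() @ Rn_inv @ steering)
        out[t] = w.conj() @ x             # beamformer output y = w^H x
    return out
```

Because the MVDR constraint enforces `w.conj() @ steering == 1`, a signal arriving exactly from the steering direction passes through undistorted while spatially correlated noise is attenuated; running the covariance update frame by frame is what makes the estimator online.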