NeuroIncept Decoder for High-Fidelity Speech Reconstruction from Neural Activity
Metadata
Author
Khanday, Owais Mujtaba; Pérez Córdoba, José Luis; Mir, Mohd Yaqub; Najar, Ashfaq Ahmad; González López, José Andrés
Publisher
IEEE
Subject
Brain-computer interfaces; speech synthesis; deep neural networks; EEG
Date
2025-04-06
Bibliographic reference
Khanday, O. M., Pérez-Córdoba, J. L., Mir, M. Y., Najar, A. A., & Gonzalez-Lopez, J. A. (2025, April). NeuroIncept Decoder for High-Fidelity Speech Reconstruction from Neural Activity. In ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1-5). IEEE.
Sponsor
This work was supported by grant PID2022-141378OB-C22, funded by MICIU/AEI/10.13039/501100011033 and by ERDF/EU.
Abstract
This paper introduces a novel algorithm designed for speech synthesis from neural activity recordings obtained using invasive electroencephalography (EEG) techniques. The proposed system offers a promising communication solution for individuals with severe speech impairments. Central to our approach is the integration of time-frequency features in the high-gamma band computed from EEG recordings with an advanced NeuroIncept Decoder architecture. This neural network architecture combines Convolutional Neural Networks (CNNs) and Gated Recurrent Units (GRUs) to reconstruct audio spectrograms from neural patterns. Our model demonstrates robust mean correlation coefficients between predicted and actual spectrograms, though inter-subject variability indicates distinct neural processing mechanisms among participants. Overall, our study highlights the potential of neural decoding techniques to restore communicative abilities in individuals with speech disorders and paves the way for future advancements in brain-computer interface technologies.
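The record does not include code, but the abstract outlines a concrete pipeline: high-gamma time-frequency features from the EEG channels pass through convolutional layers and then gated recurrent units to predict an audio spectrogram. As an illustration only, here is a minimal NumPy sketch of one such forward pass. All dimensions (64 electrodes, 32 CNN filters, 48 GRU units, 80 spectrogram bins) and the random, untrained weights are assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu_conv1d(x, w, b):
    """Valid 1-D convolution over time followed by ReLU.
    x: (T, C_in), w: (K, C_in, C_out), b: (C_out,)."""
    K = w.shape[0]
    T_out = x.shape[0] - K + 1
    out = np.empty((T_out, w.shape[2]))
    for t in range(T_out):
        # Correlate a K-frame window of all channels with every filter.
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_forward(x, Wz, Uz, Wr, Ur, Wh, Uh):
    """Single-layer GRU with zero initial state. x: (T, D) -> (T, H)."""
    H = Wz.shape[1]
    h = np.zeros(H)
    out = np.empty((x.shape[0], H))
    for t, xt in enumerate(x):
        z = sigmoid(xt @ Wz + h @ Uz)            # update gate
        r = sigmoid(xt @ Wr + h @ Ur)            # reset gate
        h_tilde = np.tanh(xt @ Wh + (r * h) @ Uh)
        h = (1.0 - z) * h + z * h_tilde
        out[t] = h
    return out

# Assumed, illustrative dimensions.
T, C, F, K, H, M = 50, 64, 32, 5, 48, 80
x = rng.standard_normal((T, C))                  # high-gamma feature frames
w, b = 0.1 * rng.standard_normal((K, C, F)), np.zeros(F)
gru_w = [0.1 * rng.standard_normal(s)
         for s in [(F, H), (H, H)] * 3]          # Wz, Uz, Wr, Ur, Wh, Uh
W_out, b_out = 0.1 * rng.standard_normal((H, M)), np.zeros(M)

feat = relu_conv1d(x, w, b)                      # (46, F) local spatial features
hidden = gru_forward(feat, *gru_w)               # (46, H) temporal context
spec = hidden @ W_out + b_out                    # (46, M) predicted spectrogram
print(spec.shape)
```

The CNN-then-GRU ordering mirrors the abstract's description: convolution captures local spatio-spectral structure across electrodes, while the recurrent layer integrates it over time before a linear readout maps to spectrogram bins.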





