NeuroIncept Decoder for High-Fidelity Speech Reconstruction from Neural Activity

Authors: Khanday, Owais Mujtaba; Pérez Córdoba, José Luis; Mir, Mohd Yaqub; Najar, Ashfaq Ahmad; González López, José Andrés

Keywords: Brain-computer interfaces; speech synthesis; deep neural networks; EEG

Abstract: This paper introduces a novel algorithm for speech synthesis from neural activity recordings obtained using invasive electroencephalography (EEG) techniques. The proposed system offers a promising communication solution for individuals with severe speech impairments. Central to our approach is the integration of time-frequency features in the high-gamma band, computed from EEG recordings, with an advanced NeuroIncept Decoder architecture. This neural network combines Convolutional Neural Networks (CNNs) and Gated Recurrent Units (GRUs) to reconstruct audio spectrograms from neural patterns. Our model achieves robust mean correlation coefficients between predicted and actual spectrograms, though inter-subject variability indicates distinct neural processing mechanisms among participants. Overall, our study highlights the potential of neural decoding techniques to restore communicative abilities in individuals with speech disorders and paves the way for future advancements in brain-computer interface technologies.

Record created: 2025-09-02T07:07:01Z
Date issued: 2025-04-06
Type: preprint
Citation: Khanday, O. M., Pérez-Córdoba, J. L., Mir, M. Y., Najar, A. A., & Gonzalez-Lopez, J. A. (2025, April). NeuroIncept Decoder for High-Fidelity Speech Reconstruction from Neural Activity. In ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1-5). IEEE.
URI: https://hdl.handle.net/10481/105969
DOI: 10.1109/ICASSP49660.2025.10888547
Language: English
Access: open access
License: Attribution-NonCommercial-ShareAlike 4.0 International, http://creativecommons.org/licenses/by-nc-sa/4.0/
Publisher: IEEE
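The abstract reports mean correlation coefficients between predicted and actual spectrograms as the evaluation metric. The paper itself does not specify the exact computation; a common choice in speech-reconstruction work, sketched below under that assumption, is to correlate each spectral bin's trajectory over time and average the per-bin Pearson coefficients. The function name and array shapes here are illustrative, not taken from the paper.

```python
import numpy as np

def mean_spectrogram_correlation(predicted, actual):
    """Mean Pearson correlation across spectral bins (illustrative metric).

    predicted, actual: arrays of shape (time_frames, freq_bins).
    Each frequency bin's time course is correlated separately, then
    the per-bin coefficients are averaged; constant bins are skipped
    to avoid division by zero.
    """
    corrs = []
    for b in range(predicted.shape[1]):
        p, a = predicted[:, b], actual[:, b]
        if p.std() == 0 or a.std() == 0:
            continue
        corrs.append(np.corrcoef(p, a)[0, 1])
    return float(np.mean(corrs))

# Synthetic check: a lightly noised copy of a spectrogram should
# correlate strongly with the original.
rng = np.random.default_rng(0)
actual = rng.standard_normal((200, 80))            # 200 frames, 80 bins
predicted = actual + 0.1 * rng.standard_normal(actual.shape)
r = mean_spectrogram_correlation(predicted, actual)
```

Averaging per-bin coefficients (rather than correlating the flattened spectrograms) keeps the metric insensitive to overall energy differences between frequency bands, which is why it is a frequent choice for this task.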