Decoding the Mind: Neural Differences and Semantic Representation in Perception and Imagination Across Modalities
Metadata
Author
Khanday, Owais Mujtaba; Ouellet, Marc; Pérez Córdoba, José Luis; Sbaih, Asma Hasan; Miccoli, Laura; González López, José Andrés
Publisher
International Speech Communication Association (ISCA)
Date
2024-11-11
Bibliographic reference
Khanday, O.M., Ouellet, M., Pérez-Córdoba, J.L., Sbaih, A.H., Miccoli, L., Gonzalez-Lopez, J.A. (2024) Decoding the Mind: Neural Differences and Semantic Representation in Perception and Imagination Across Modalities. Proc. IberSPEECH 2024, 126-130, doi: 10.21437/IberSPEECH.2024-26
Sponsor
MICIU/AEI/10.13039/501100011033 PID2022-141378OB-C22; ERDF/EU
Abstract
This study analyzes neural signals related to perceived and imagined concepts, aiming to enhance communication capabilities for individuals with speech impairments. The investigation uses publicly available electroencephalography (EEG) data acquired with a 124-channel ANT Neuro eego Mylab EEG system (ANT Neuro B.V., Hengelo, Netherlands). The dataset comprises 11,554 trials from 12 participants. The proposed convolutional neural network (CNN) model outperformed the others in classifying EEG data as belonging to the perception or the imagined speech task condition, achieving a test accuracy of 77.89%. Traditional machine learning models, including Random Forest (RF), Support Vector Classifier (SVC), and XGBoost, tended to overfit, yielding low accuracies. As for semantic decoding, all models performed at chance level.
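The abstract does not describe the CNN's architecture. Purely as an illustration of the kind of model typically applied to this binary EEG task (perception vs. imagined speech over 124 channels), a minimal PyTorch sketch might look like the following; every layer size, kernel width, and the 512-sample trial length are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class EEGBinaryCNN(nn.Module):
    """Hypothetical 1D-CNN for binary EEG classification.

    Treats the 124 EEG channels as input feature maps and
    convolves over the time axis. All hyperparameters here
    are illustrative, not taken from the paper.
    """

    def __init__(self, n_channels=124, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(4),                 # downsample the time axis
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),         # global average over time
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        # x: (batch, channels, time)
        z = self.features(x).squeeze(-1)     # -> (batch, 64)
        return self.classifier(z)            # -> (batch, n_classes) logits

model = EEGBinaryCNN()
# One batch of 8 fake trials, 124 channels, 512 time samples (assumed length).
logits = model(torch.randn(8, 124, 512))
print(tuple(logits.shape))  # (8, 2)
```

Such a model would be trained with cross-entropy loss on the two task-condition labels; the baselines mentioned in the abstract (RF, SVC, XGBoost) would instead operate on flattened or hand-crafted features of the same trials.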