Show simple item record

dc.contributor.author: Vales Cortina, Ibon
dc.contributor.author: Khanday, Owais Mujtaba
dc.contributor.author: Ouellet, Marc
dc.contributor.author: Pérez Córdoba, José Luis
dc.contributor.author: Rodríguez San Esteban, Pablo
dc.contributor.author: Miccoli, Laura
dc.contributor.author: Galdón Castillo, Alberto
dc.contributor.author: Olivares Granados, Gonzalo
dc.contributor.author: González López, José Andrés
dc.date.accessioned: 2026-02-26T07:39:58Z
dc.date.available: 2026-02-26T07:39:58Z
dc.date.issued: 2025-10
dc.identifier.uri: https://hdl.handle.net/10481/111524
dc.description.abstract: We present UGR-MINDVOICE, the University of Granada (UGR) multimodal electroencephalography (EEG) and audio dataset for overt and covert speech in Iberian Spanish, intended for basic neuroscience and brain-computer interface (BCI) research. The dataset features EEG and audio recordings from 15 native Spanish speakers engaged in both overt and covert speech production tasks. It is unique in its inclusion of all Spanish phonemes and a diverse set of words spanning various semantic categories and different usage frequencies. Validation of the dataset confirmed the presence of robust sensory event-related potentials, including the visual P100 and the auditory N1 (N100), indicating reliable early perceptual processing and sustained participant attention to both visual and auditory stimuli. Additionally, the EEG data were classified into rest, covert speech, and overt speech conditions with an accuracy of 81.40%, demonstrating active participant engagement in the tasks. By providing synchronised EEG and audio data for overt speech, along with EEG data for the same stimuli during covert speech, UGR-MINDVOICE constitutes a valuable resource for advancing research in basic neuroscience and brain-computer interfaces, particularly in the domain of silent speech communication. The full dataset is openly available on the Open Science Framework (OSF) (https://osf.io/6sh5d), and all accompanying code and analysis scripts are provided in a public GitHub repository (https://github.com/owaismujtaba/mind-voice).
dc.description.sponsorship: This work was supported by grant PID2022-141378OB-C22, funded by MICIU/AEI/10.13039/501100011033 and by ERDF/EU.
dc.language.iso: eng
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.title: UGR-MINDVOICE: A multimodal EEG-audio dataset for overt and covert Iberian Spanish speech production
dc.type: journal article
dc.rights.accessRights: open access
dc.identifier.doi: 10.1016/j.csl.2026.101964
dc.type.hasVersion: AM


Files in this item

[PDF]

This item appears in the following collection(s)


Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International.