Integrating when and what information in the left parietal lobe allows language rule generalization
Metadata
Publisher
Public Library of Science
Date
2020
Bibliographic reference
Orpella J, Ripollés P, Ruzzoli M, Amengual JL, Callejas A, Martinez-Alvarez A, et al. (2020) Integrating when and what information in the left parietal lobe allows language rule generalization. PLoS Biol 18(11): e3000895. https://doi.org/10.1371/journal.pbio.3000895
Funding
European Research Council (ERC); Spanish Ministerio de Ciencia e Innovación PRIME RdD-B; European Research Council (ERC) 727595; Juan de la Cierva Post-Doctorate Fellowship JCI-2012-12335; Ministerio de Economía y Competitividad
Abstract
A crucial aspect of learning a language is discovering the rules that govern how words
are combined in order to convey meanings. Because rules are characterized by sequential
co-occurrences between elements (e.g., “These cupcakes are unbelievable”), tracking the
statistical relationships between these elements is fundamental. However, bottom-up
statistical learning alone cannot fully account for the ability to create abstract rule representations that can be generalized, a paramount requirement of linguistic rules. Here, we provide evidence that, after the statistical relations between words have been extracted, the
engagement of goal-directed attention is key to enable rule generalization. Incidental learning performance during a rule-learning task on an artificial language revealed a progressive
shift from statistical learning to goal-directed attention. In addition, and consistent with the
recruitment of attention, functional MRI (fMRI) analyses of late learning stages showed left
parietal activity within a broad bilateral dorsal frontoparietal network. Critically, repetitive
transcranial magnetic stimulation (rTMS) on participants’ peak of activation within the left
parietal cortex impaired their ability to generalize learned rules to a structurally analogous
new language. Neither the absence of stimulation nor rTMS over a nonrelevant brain region
produced this interfering effect on generalization. Performance on an additional attentional task showed
that this rTMS on the parietal site hindered participants’ ability to integrate “what” (stimulus
identity) and “when” (stimulus timing) information about an expected target. The present
findings suggest that learning rules from speech is a two-stage process: following statistical learning, goal-directed attention—involving left parietal regions—integrates “what” and
“when” stimulus information to facilitate rapid rule generalization.