Rapid Computation of High-Level Visual Surprise
Metadata
Publisher
Elsevier
Date
2025-11-19
Bibliographic reference
Richter, D., Pena, P., Ruz, M., Rapid Computation of High-Level Visual Surprise, iScience (2025), doi: https://doi.org/10.1016/j.isci.2025.114121
Funding
MCIN/EI/10.13039/501100011033 and FEDER (Marie Skłodowska Curie Project 101147241; PID2022.138940NB.I00); MCIN/AEI/10.13039/501100011033 (CEX2023-001312-M); University of Granada (UCE-PP2023-11)
Abstract
Predictive processing theories propose that the brain continuously generates expectations about incoming sensory information. Discrepancies between these predictions and actual inputs (sensory prediction errors) guide perceptual inference. A fundamental yet largely unresolved question is which stimulus features the brain predicts, and therefore what kind of surprise drives neural responses. Here, we investigated this question using electroencephalography (EEG) and computational modelling based on deep neural networks (DNNs). Participants viewed object images whose identity was probabilistically predicted by preceding cues. We then quantified trial-by-trial surprise at both low-level (early DNN layers) and high-level (late DNN layers) visual feature representations. Results showed that stimulus-evoked responses around 200 ms post-stimulus onset over parieto-occipital electrodes were increased by high-level, but not by low-level, visual surprise. These findings suggest that high-level visual predictions rapidly inform perceptual inference, indicating that the brain's predictive machinery may be finely tuned to utilize expectations abstracted away from low-level sensory details to facilitate perception.
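The abstract describes quantifying trial-by-trial surprise by comparing cue-predicted and observed stimulus representations at early versus late DNN layers. The paper does not specify the metric here, so the following is only a minimal illustrative sketch, assuming randomly generated feature vectors as stand-ins for DNN layer activations and cosine distance as a hypothetical surprise proxy:

```python
import numpy as np

def surprise(expected_feats, observed_feats):
    """Hypothetical surprise proxy: cosine distance between the
    cue-predicted feature vector and the observed stimulus'
    feature vector at a given DNN layer (0 = perfectly expected)."""
    e = expected_feats / np.linalg.norm(expected_feats)
    o = observed_feats / np.linalg.norm(observed_feats)
    return 1.0 - float(e @ o)

rng = np.random.default_rng(0)
# Illustrative dimensionalities only: an early (low-level) and a
# late (high-level) layer; real activations would come from a DNN
# fed the cued object image and the shown object image.
layers = {"early": 4096, "late": 512}
for name, dim in layers.items():
    expected = rng.normal(size=dim)  # stand-in for cued-object features
    observed = rng.normal(size=dim)  # stand-in for shown-object features
    print(f"{name}-layer surprise: {surprise(expected, observed):.3f}")
```

In this scheme, a per-trial surprise value at each layer would then serve as a regressor for stimulus-evoked EEG amplitudes; the actual feature extraction and model in the paper may differ.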
