Crossmodal Semantic Congruence Interacts with Object Contextual Consistency in Complex Visual Scenes to Enhance Short-Term Memory Performance
Authors: Almadori, Erika; Mastroberardino, Serena; Botta, Fabiano; Brunetti, Riccardo; Lupiáñez Castillo, Juan; Spence, Charles; Santangelo, Valerio
Keywords: Short-term memory; Semantics; Visual scenes; Auditory; Multisensory; Spatial orientation
Citation: Almadori, E.; Mastroberardino, S.; Botta, F.; Brunetti, R.; Lupiáñez, J.; Spence, C.; Santangelo, V. Crossmodal Semantic Congruence Interacts with Object Contextual Consistency in Complex Visual Scenes to Enhance Short-Term Memory Performance. Brain Sci. 2021, 11, 1206. https://doi.org/10.3390/brainsci11091206
Object sounds can enhance the attentional selection and perceptual processing of semantically related visual stimuli. However, it is currently unknown whether crossmodal semantic congruence also affects post-perceptual stages of information processing, such as short-term memory (STM), and whether this effect is modulated by the object's consistency with the background visual scene. In two experiments, participants viewed everyday visual scenes for 500 ms while listening to an object sound, which could either be semantically related to the object that served as the STM target at retrieval or not. This defined crossmodal semantically cued vs. uncued targets. The target was either in- or out-of-context with respect to the background visual scene. After a maintenance period of 2000 ms, the target was presented in isolation against a neutral background, in either the same or a different spatial position as in the original scene. Participants judged whether the object's position was the same or different and then provided a confidence judgment concerning the certainty of their response. The results revealed greater accuracy when judging the spatial position of targets paired with a semantically congruent object sound at encoding. This crossmodal facilitatory effect was modulated by whether the target object was in- or out-of-context with respect to the background scene, with out-of-context targets reducing the facilitatory effect of object sounds. Overall, these findings suggest that the presence of the object sound at encoding facilitated the selection and processing of the semantically related visual stimulus, but that this effect depends on the semantic configuration of the visual scene.