Show simple item record

dc.contributor.author: Déniz Cerpa, José Daniel
dc.contributor.author: Fermüller, Cornelia
dc.contributor.author: Ros Vidal, Eduardo
dc.contributor.author: Rodríguez Álvarez, Manuel
dc.contributor.author: Barranco Expósito, Francisco
dc.date.accessioned: 2026-03-04T07:34:42Z
dc.date.available: 2026-03-04T07:34:42Z
dc.date.issued: 2023-07-26
dc.identifier.citation: Daniel Deniz, Cornelia Fermuller, Eduardo Ros, Manuel Rodriguez-Alvarez, Francisco Barranco. Event-based Vision for Early Prediction of Manipulation Actions. DOI: https://doi.org/10.48550/arXiv.2307.14332
dc.identifier.uri: https://hdl.handle.net/10481/111859
dc.description: This work was supported by the Spanish National Grant PID2019-109434RA-I00/SRA (State Research Agency /10.13039/501100011033). We acknowledge the Telluride Neuromorphic Cognition Engineering Workshop (http://www.ine-web.org), supported by NSF grant OISE 2020624, for the fruitful discussions on neuromorphic cognition, and its participants for helping with the recording of the dataset.
dc.description.abstract: Neuromorphic visual sensors are artificial retinas that output sequences of asynchronous events when brightness changes occur in the scene. These sensors offer many advantages, including very high temporal resolution, no motion blur, and smart data compression ideal for real-time processing. In this study, we introduce an event-based dataset of fine-grained manipulation actions and perform an experimental study on the use of transformers for action prediction with events. There is enormous interest in the fields of cognitive robotics and human-robot interaction in understanding and predicting human actions as early as possible. Early prediction allows anticipating complex stages for planning, enabling effective real-time interaction. Our transformer network uses events to predict manipulation actions as they occur, using online inference. The model succeeds at predicting actions early on, building up confidence over time and achieving state-of-the-art classification. Moreover, the attention-based transformer architecture allows us to study the role of the spatio-temporal patterns selected by the model. Our experiments show that the transformer network captures action dynamic features, outperforming video-based approaches and succeeding in scenarios where the differences between actions lie in very subtle cues. Finally, we release the new event dataset, which is the first in the literature for manipulation action recognition.
dc.description.sponsorship: Spanish National Grant PID2019-109434RA-I00/SRA
dc.description.sponsorship: NSF OISE 2020624
dc.language.iso: eng
dc.publisher: Cornell University
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Event-based vision
dc.subject: Online prediction
dc.subject: Manipulation action prediction
dc.title: Event-based Vision for Early Prediction of Manipulation Actions
dc.type: conference output
dc.rights.accessRights: open access
dc.identifier.doi: 10.48550/arXiv.2307.14332
dc.type.hasVersion: SMUR


Files in this item

[PDF]

This item appears in the following collection(s)


Attribution-NonCommercial-NoDerivatives 4.0 International
Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International.