dc.contributor.author | Cuadrado, Javier | |
dc.contributor.author | Rançon, Ulysse | |
dc.contributor.author | Cottereau, Benoit R. | |
dc.contributor.author | Barranco Expósito, Francisco | |
dc.contributor.author | Masquelier, Timothée | |
dc.date.accessioned | 2024-10-28T09:49:44Z | |
dc.date.available | 2024-10-28T09:49:44Z | |
dc.date.issued | 2023-05-11 | |
dc.identifier.citation | Cuadrado J, Rançon U, Cottereau BR, Barranco F and Masquelier T (2023) Optical flow estimation from event-based cameras and spiking neural networks. Front. Neurosci. 17:1160034. doi: 10.3389/fnins.2023.1160034 | es_ES |
dc.identifier.uri | https://hdl.handle.net/10481/96382 | |
dc.description | This research was supported in part by the Agence Nationale de la Recherche under Grant ANR-20-CE23-0004-04 DeepSee, by the Spanish National Grant PID2019-109434RA-I00/SRA (State Research Agency/10.13039/501100011033), by a FLAG-ERA funding (Joint Transnational Call 2019, project DOMINO), and by the Program DesCartes and by the National Research Foundation, Prime Minister’s Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) Program. | es_ES |
dc.description | The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnins.2023.1160034/full#supplementary-material | es_ES |
dc.description.abstract | Event-based cameras are raising interest within the computer vision community.
These sensors operate with asynchronous pixels, emitting events, or “spikes”,
when the luminance change at a given pixel since the last event surpasses a
certain threshold. Thanks to their inherent qualities, such as their low power
consumption, low latency, and high dynamic range, they seem particularly tailored
to applications with challenging temporal constraints and safety requirements.
Event-based sensors are an excellent fit for Spiking Neural Networks (SNNs), since
the coupling of an asynchronous sensor with neuromorphic hardware can yield
real-time systems with minimal power requirements. In this work, we seek to
develop one such system, using both event sensor data from the DSEC dataset
and spiking neural networks to estimate optical flow for driving scenarios. We
propose a U-Net-like SNN which, after supervised training, is able to make dense
optical flow estimations. To do so, we encourage both minimal norm for the
error vector and minimal angle between ground-truth and predicted flow, training
our model with back-propagation using a surrogate gradient. In addition, the
use of 3D convolutions allows us to capture the dynamic nature of the data by
increasing the temporal receptive fields. Upsampling after each decoding stage
ensures that each decoder’s output contributes to the final estimation. Thanks
to separable convolutions, we have been able to develop a light model (when
compared to competitors) that can nonetheless yield reasonably accurate optical
flow estimates. | es_ES |
dc.description.sponsorship | Agence Nationale de la Recherche ANR-20-CE23-0004-04 DeepSee | es_ES |
dc.description.sponsorship | Spanish National Grant PID2019-109434RA-I00/SRA | es_ES |
dc.description.sponsorship | FLAG-ERA project DOMINO | es_ES |
dc.description.sponsorship | Program DesCartes | es_ES |
dc.description.sponsorship | National Research Foundation, Prime Minister’s Office, Singapore | es_ES |
dc.language.iso | eng | es_ES |
dc.publisher | Frontiers | es_ES |
dc.rights | Attribution 4.0 International | * |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | * |
dc.subject | Optical flow | es_ES |
dc.subject | Event vision | es_ES |
dc.subject | Spiking neural network | es_ES |
dc.subject | Neuromorphic computing | es_ES |
dc.subject | Edge AI | es_ES |
dc.title | Optical flow estimation from event-based cameras and spiking neural networks | es_ES |
dc.type | journal article | es_ES |
dc.rights.accessRights | open access | es_ES |
dc.identifier.doi | 10.3389/fnins.2023.1160034 | |
dc.type.hasVersion | VoR | es_ES |