Show simple item record

dc.contributor.author  Mauro, Gianfranco
dc.contributor.author  Pegalajar Cuéllar, Manuel
dc.contributor.author  Morales Santos, Diego Pedro
dc.date.accessioned  2022-04-18T09:49:46Z
dc.date.available  2022-04-18T09:49:46Z
dc.date.issued  2022-02-28
dc.identifier.citation  G. Mauro... [et al.]. "Few-Shot User-Definable Radar-Based Hand Gesture Recognition at the Edge," in IEEE Access, vol. 10, pp. 29741-29759, 2022, doi: 10.1109/ACCESS.2022.3155124  es_ES
dc.identifier.uri  http://hdl.handle.net/10481/74318
dc.description  This work was supported in part by ITEA3 Unleash Potentials in Simulation (UPSIM) by the German Federal Ministry of Education and Research (BMBF) under Project 19006, in part by the Austrian Research Promotion Agency (FFG), in part by the Rijksdienst voor Ondernemend Nederland (Rvo), and in part by the Innovation Fund Denmark (IFD).  es_ES
dc.description.abstract  Technological advances and scalability are leading Human-Computer Interaction (HCI) to evolve towards intuitive forms, such as gesture recognition. Among the various interaction strategies, radar-based recognition is emerging as a touchless, privacy-secure, and versatile solution across different environmental conditions. Classical radar-based gesture HCI solutions involve deep learning but require training on large and varied datasets to achieve robust prediction. Innovative self-learning algorithms can help tackle this problem by recognizing patterns and adapting from similar contexts. Yet, such approaches are often computationally expensive and difficult to integrate into hardware-constrained solutions. In this paper, we present a gesture recognition algorithm that is easily adaptable to new users and contexts. We exploit an optimization-based meta-learning approach to enable gesture recognition in learning sequences. This method aims to learn the best possible initialization of the model parameters, simplifying training in new contexts when only small amounts of data are available. The reduction in computational cost is achieved by processing the radar-sensed gesture data in the form of time maps, which minimizes the input data size. This approach enables the adaptation of a simple convolutional neural network (CNN) to new hand poses, easing the integration of the model into a hardware-constrained platform. Moreover, using a Variational Autoencoder (VAE) to reduce the gestures' dimensionality decreases the model size by an order of magnitude and halves the required adaptation time. The proposed framework, deployed on the Intel(R) Neural Compute Stick 2 (NCS 2), achieves an average accuracy of around 84% for unseen gestures when only one example per class is used at training time. The accuracy increases to 92.6% and 94.2% when three and five samples per class are used, respectively.  es_ES
dc.description.sponsorship  Federal Ministry of Education & Research (BMBF) 19006  es_ES
dc.description.sponsorship  Austrian Research Promotion Agency (FFG)  es_ES
dc.description.sponsorship  Rijksdienst voor Ondernemend Nederland (Rvo)  es_ES
dc.description.sponsorship  Innovation Fund Denmark (IFD)  es_ES
dc.language.iso  eng  es_ES
dc.publisher  IEEE  es_ES
dc.rights  Atribución 3.0 España  *
dc.rights.uri  http://creativecommons.org/licenses/by/3.0/es/  *
dc.subject  Artificial neural networks  es_ES
dc.subject  Edge computing  es_ES
dc.subject  FMCW  es_ES
dc.subject  Intel neural compute stick  es_ES
dc.subject  Knowledge transfer  es_ES
dc.subject  Meta learning  es_ES
dc.subject  Human computer interaction  es_ES
dc.subject  Radar  es_ES
dc.subject  Variational autoencoder  es_ES
dc.title  Few-Shot User-Definable Radar-Based Hand Gesture Recognition at the Edge  es_ES
dc.type  info:eu-repo/semantics/article  es_ES
dc.rights.accessRights  info:eu-repo/semantics/openAccess  es_ES
dc.identifier.doi  10.1109/ACCESS.2022.3155124
dc.type.hasVersion  info:eu-repo/semantics/publishedVersion  es_ES
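
The abstract above describes an optimization-based meta-learning scheme that learns a shared initialization for a small CNN, which is then adapted to a new user's gestures from only a few radar time maps. The sketch below is not the authors' code: it assumes a Reptile-style update (one member of the optimization-based meta-learning family), PyTorch, single-channel 32x32 time maps, and 5 gesture classes, all of which are illustrative choices rather than details taken from the paper.

```python
# Minimal sketch (not the authors' implementation): a Reptile-style meta-learning
# loop that adapts a small CNN to new gesture classes from a handful of radar
# time maps. Input shapes, class counts, and hyperparameters are assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class GestureCNN(nn.Module):
    """Small CNN over single-channel radar time maps (e.g. range-time images)."""

    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


def adapt(model, x, y, lr=1e-2, steps=5):
    """Inner loop: fine-tune a copy of the model on a few labelled gestures."""
    clone = copy.deepcopy(model)
    opt = torch.optim.SGD(clone.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(clone(x), y).backward()
        opt.step()
    return clone


def reptile_step(model, x, y, meta_lr=0.1, **adapt_kwargs):
    """Outer loop: move the shared initialization towards the task-adapted weights."""
    adapted = adapt(model, x, y, **adapt_kwargs)
    with torch.no_grad():
        for p, q in zip(model.parameters(), adapted.parameters()):
            p += meta_lr * (q - p)


if __name__ == "__main__":
    model = GestureCNN(n_classes=5)
    # One meta-training "episode": 5 gesture classes, 1 example each (1-shot).
    x = torch.randn(5, 1, 32, 32)  # fake radar time maps standing in for real data
    y = torch.arange(5)
    reptile_step(model, x, y)
    # At deployment, `adapt` alone personalises the learned init to a new user.
```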


Files in this item

[PDF]

This item appears in the following Collection(s)


Atribución 3.0 España
Except where otherwise noted, this item's license is described as Atribución 3.0 España