Show simple item record

dc.contributor.author: Bello García, Marilyn
dc.contributor.author: Costa, Pablo
dc.contributor.author: Nápoles, Gonzalo
dc.contributor.author: Mesejo Santiago, Pablo
dc.contributor.author: Cordón García, Óscar
dc.date.accessioned: 2024-06-20T08:20:13Z
dc.date.available: 2024-06-20T08:20:13Z
dc.date.issued: 2024-04-05
dc.identifier.citation: Bello, Marilyn, et al. ExplainLFS: Explaining neural architectures for similarity learning from local perturbations in the latent feature space. Information Fusion 108 (2024) 102407. doi: 10.1016/j.inffus.2024.102407
dc.identifier.uri: https://hdl.handle.net/10481/92717
dc.description.abstract: Despite the increasing development in recent years of explainability techniques for deep neural networks, only some are dedicated to explaining the decisions made by neural networks for similarity learning. While existing approaches can explain classification models, their adaptation to generate visual similarity explanations is not trivial. Neural architectures devoted to this task learn an embedding that maps similar examples to nearby vectors and non-similar examples to distant vectors in the feature space. In this paper, we propose a post-hoc agnostic technique that explains the inference of such architectures on a pair of images. The proposed method establishes a relation between the most important features of the abstract feature space and the input feature space (pixels) of an image. For this purpose, we employ a relevance assignment and a perturbation process based on the most influential latent features in the inference. Then, a reconstruction process of the images of the pair is carried out from the perturbed embedding vectors. This process relates the latent features to the original input features. The results indicate that our method produces "continuous" and "selective" explanations. A sharp drop in the value of the function (summarized by a low value of the area under the curve) indicates its superiority over other explainability approaches when identifying features relevant to similarity learning. In addition, we demonstrate that our technique is agnostic to the specific type of similarity model, e.g., we show its applicability in two similarity learning tasks: face recognition and image retrieval. [See the illustrative sketch after this record.]
dc.description.sponsorship: R&D project CONFIA (PID2021-122916NB-I00), funded by MICIU/AEI/10.13039/501100011033/ and FEDER, EU
dc.description.sponsorship: Funding for open access charges is covered by Universidad de Granada / CBUA
dc.language.iso: eng
dc.publisher: Elsevier
dc.rights: Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
dc.rights.uri: http://creativecommons.org/licenses/by-nc/4.0/
dc.subject: Similarity learning networks
dc.subject: Face recognition
dc.subject: Image retrieval
dc.title: ExplainLFS: Explaining neural architectures for similarity learning from local perturbations in the latent feature space
dc.type: journal article
dc.rights.accessRights: open access
dc.identifier.doi: 10.1016/j.inffus.2024.102407
dc.type.hasVersion: VoR
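
The abstract describes a perturbation-based procedure: rank the latent features of a pair's embeddings by their influence on the similarity score, perturb the most influential ones, and reconstruct the images from the perturbed embeddings so that latent features can be related back to pixels. The sketch below is only a minimal illustration of that general idea, not the authors' ExplainLFS implementation: the tiny Encoder/Decoder modules, the gradient-times-activation relevance score, the zeroing perturbation, and all sizes and hyperparameters are assumptions chosen for brevity.

```python
# Illustrative sketch only (assumed components, not the paper's architecture or exact method):
# (1) rank latent features of an embedding pair by influence on the similarity score,
# (2) perturb the most influential latent features,
# (3) decode original vs. perturbed embeddings and compare in pixel space to get a rough saliency map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):               # toy embedding network (placeholder)
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):               # toy reconstruction network (placeholder)
    def __init__(self, dim=32, size=32):
        super().__init__()
        self.size = size
        self.fc = nn.Linear(dim, 32 * (size // 4) * (size // 4))
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())
    def forward(self, z):
        h = self.fc(z).view(-1, 32, self.size // 4, self.size // 4)
        return self.net(h)

def explain_pair(encoder, decoder, x1, x2, k=8):
    """Rough pixel saliency for an image pair via perturbation of influential latent features."""
    z1, z2 = encoder(x1), encoder(x2)
    z1.retain_grad()
    z2.retain_grad()
    sim = F.cosine_similarity(z1, z2).sum()
    sim.backward()                                    # gradients ~ influence on similarity
    relevance = (z1.grad * z1 + z2.grad * z2).abs().squeeze(0)
    top = relevance.topk(k).indices                   # most influential latent dimensions

    with torch.no_grad():
        z1_pert, z2_pert = z1.clone(), z2.clone()
        z1_pert[:, top] = 0.0                         # simple ablation-style perturbation (an assumption)
        z2_pert[:, top] = 0.0
        # reconstruct from original and perturbed embeddings; pixel-wise difference as saliency
        sal1 = (decoder(z1) - decoder(z1_pert)).abs().mean(dim=1)
        sal2 = (decoder(z2) - decoder(z2_pert)).abs().mean(dim=1)
    return sal1, sal2

if __name__ == "__main__":
    enc, dec = Encoder(), Decoder()
    img1, img2 = torch.rand(1, 3, 32, 32), torch.rand(1, 3, 32, 32)
    s1, s2 = explain_pair(enc, dec, img1, img2)
    print(s1.shape, s2.shape)                         # (1, 32, 32) saliency maps for the pair
```

In the paper, the relevance assignment, the perturbation scheme, and the reconstruction process are part of the proposed method and are evaluated with area-under-the-curve deletion metrics; the gradient-times-activation ranking and zeroing used above are common stand-ins, not the paper's exact choices.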


Files in this item

[PDF]
