Show simple item record

dc.contributor.author: Bello García, Marilyn
dc.contributor.author: Mesejo Santiago, Pablo
dc.contributor.author: Cordón García, Óscar
dc.date.accessioned: 2023-11-21T12:47:57Z
dc.date.available: 2023-11-21T12:47:57Z
dc.date.issued: 2024-01
dc.identifier.citation: Bello, G. Nápoles, L. Concepción et al. REPROT: Explaining the predictions of complex deep learning architectures for object detection through reducts of an image. Information Sciences 654 (2024) 119851. [https://doi.org/10.1016/j.ins.2023.119851]
dc.identifier.uri: https://hdl.handle.net/10481/85813
dc.description: Research funded by MCIN/AEI/10.13039/501100011033/ and FEDER “Una manera de hacer Europa” under grant CONFIA (PID2021-122916NB-I00).
dc.description.abstract: Although deep learning models can solve complex prediction problems, they have been criticized for being ‘black boxes’. This implies that their decisions are difficult, if not impossible, to explain by simply inspecting their internal knowledge structures. Explainable Artificial Intelligence has attempted to open the black box through model-specific and agnostic post-hoc methods that generate visualizations or derive associations between the problem features and the model predictions. This paper proposes a new method, termed REPROT, that explains the decisions of complex deep learning architectures based on local reducts of an image. A ‘reduct’ is a set of sufficiently descriptive features that can fully characterize the acquired knowledge. The created reducts are used to build a ‘prototype image’ that visually explains the inference obtained by a black-box model for an image. We focus on deep learning architectures whose complexity and internal particularities demand adapting existing model-specific explanation methods, making the explanation process more difficult. Experimental results show that the black-box model can detect an object using the prototype image generated from the reduct. Hence, the explanations will be given by “the minimum set of features sufficient for the neural model to detect an object”. The confidence scores obtained by architectures such as Inception, Yolo, and Mask R-CNN are higher for prototype images built from the reduct than those built from the most important superpixels according to the LIME method. Moreover, the target object is not detected on several occasions through the LIME output, thus supporting the superiority of the proposed explanation method.
dc.description.sponsorship: MCIN/AEI/10.13039/501100011033/
dc.description.sponsorship: FEDER PID2021-122916NB-I00
dc.language.iso: eng
dc.publisher: Elsevier
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Deep learning
dc.subject: Visual explanation
dc.subject: Rough set theory
dc.subject: Reduct
dc.subject: Prototype image
dc.title: REPROT: Explaining the predictions of complex deep learning architectures for object detection through reducts of an image
dc.type: journal article
dc.rights.accessRights: open access
dc.identifier.doi: 10.1016/j.ins.2023.119851
dc.type.hasVersion: VoR
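The abstract describes building a ‘prototype image’ that keeps only the superpixels belonging to a reduct and masks out the rest, so the black-box detector can be re-run on the minimal sufficient evidence. A minimal sketch of that masking step with NumPy, assuming a precomputed superpixel segmentation map (the function name `prototype_image` and the toy arrays are illustrative, not taken from the paper):

```python
import numpy as np

def prototype_image(image, segments, keep_labels, fill=0):
    """Keep only the pixels whose superpixel label is in `keep_labels`
    (e.g. the superpixels forming a reduct); mask the rest with `fill`."""
    mask = np.isin(segments, list(keep_labels))  # boolean H x W mask
    out = np.full_like(image, fill)              # start from a blank canvas
    out[mask] = image[mask]                      # copy the kept superpixels
    return out

# Toy example: a 4x4 grayscale "image" split into two superpixels.
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
seg = np.zeros((4, 4), dtype=int)
seg[:, 2:] = 1                                   # right half = superpixel 1
proto = prototype_image(img, seg, keep_labels={1})
```

In the paper's setting the prototype image would then be fed back to the detector (Inception, Yolo, Mask R-CNN) to check that the kept superpixels alone suffice to trigger the detection.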


Files in this item

[PDF]

This item appears in the following collection(s)


Attribution 4.0 International
Except where otherwise noted, this item's license is described as Attribution 4.0 International