Show simple item record

dc.contributor.author: Bennetot, Adrien
dc.contributor.author: Franchi, Gianni
dc.contributor.author: Del Ser, Javier
dc.contributor.author: Chatila, Raja
dc.contributor.author: Díaz Rodríguez, Natalia Ana
dc.date.accessioned: 2022-12-09T09:54:06Z
dc.date.available: 2022-12-09T09:54:06Z
dc.date.issued: 2022-09-26
dc.identifier.citation: Published version: Adrien Bennetot... [et al.]. Greybox XAI: A Neural-Symbolic learning framework to produce interpretable predictions for image classification, Knowledge-Based Systems, Volume 258, 2022, 109947, ISSN 0950-7051, https://doi.org/10.1016/j.knosys.2022.109947
dc.identifier.uri: https://hdl.handle.net/10481/78357
dc.description.abstract: Although Deep Neural Networks (DNNs) have great generalization and prediction capabilities, their functioning does not allow a detailed explanation of their behavior. Opaque deep learning models are increasingly used to make important predictions in critical environments, and the danger is that they make and use predictions that cannot be justified or legitimized. Several eXplainable Artificial Intelligence (XAI) methods that separate explanations from machine learning models have emerged, but they have shortcomings in faithfulness to the model's actual functioning and in robustness. As a result, there is widespread agreement on the importance of endowing Deep Learning models with explanatory capabilities, so that they can themselves provide an answer to why a particular prediction was made. First, we address the problem of the lack of universal criteria for XAI by formalizing what an explanation is. We also introduce a set of axioms and definitions to clarify XAI from a mathematical perspective. Finally, we present the Greybox XAI, a framework that composes a DNN and a transparent model through the use of a symbolic Knowledge Base (KB). We extract a KB from the dataset and use it to train a transparent model (i.e., a logistic regression). An encoder-decoder architecture is trained on RGB images to produce an output similar to the KB used by the transparent model. Once the two models are trained independently, they are used compositionally to form an explainable predictive model. We show that this new architecture is accurate and explainable on several datasets.
dc.description.sponsorship: French ANRT (Association Nationale Recherche Technologie)
dc.description.sponsorship: SEGULA Technologies
dc.description.sponsorship: Juan de la Cierva Incorporación grant JC2019-039152-I (MCIN/AEI, "ESF Investing in your future")
dc.description.sponsorship: Google Research Scholar Program
dc.description.sponsorship: Department of Education of the Basque Government (Consolidated Research Group MATHMODE, IT1456-22)
dc.language.iso: eng
dc.publisher: Elsevier
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Explainable artificial intelligence
dc.subject: Computer vision
dc.subject: Deep learning
dc.subject: Part-based object classification
dc.subject: Compositional models
dc.subject: Neural-symbolic learning and reasoning
dc.title: Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification
dc.type: journal article
dc.rights.accessRights: open access
dc.type.hasVersion: SMUR
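
The abstract describes a compositional architecture: an opaque encoder-decoder maps an RGB image to a Knowledge-Base-like vector of part activations, and a transparent logistic regression classifies from that vector. Below is a minimal PyTorch sketch of that composition. All class names, layer sizes, and the part/class counts are illustrative assumptions for this record, not the authors' published implementation.

    # Minimal sketch of the Greybox XAI composition described in the abstract.
    # Names and architecture details are hypothetical, not the authors' code.
    import torch
    import torch.nn as nn

    class PartEncoderDecoder(nn.Module):
        """Opaque perception module: maps an RGB image to per-part
        activation scores that mimic the symbolic Knowledge Base (KB)."""
        def __init__(self, num_parts: int):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, num_parts), nn.Sigmoid(),  # part presence in [0, 1]
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    class GreyboxClassifier(nn.Module):
        """Composition: opaque perception module followed by a transparent
        logistic regression over the KB-like part vector."""
        def __init__(self, num_parts: int, num_classes: int):
            super().__init__()
            self.perception = PartEncoderDecoder(num_parts)
            self.transparent = nn.Linear(num_parts, num_classes)  # logistic regression

        def forward(self, x):
            parts = self.perception(x)            # interpretable intermediate state
            return self.transparent(parts), parts

    model = GreyboxClassifier(num_parts=12, num_classes=4)
    logits, parts = model(torch.randn(1, 3, 64, 64))

Because the intermediate part vector is explicit, a prediction can be traced to the logistic-regression weights on the detected parts, which is what makes the composed model explainable even though the perception module is a DNN.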


Files in this item

[PDF]

This item appears in the following collection(s)


Attribution 4.0 International
Except where otherwise noted, this item's license is described as Attribution 4.0 International