Greybox XAI: A Neural-Symbolic learning framework to produce interpretable predictions for image classification
Identifiers
URI: https://hdl.handle.net/10481/78357
Author
Bennetot, Adrien; Franchi, Gianni; Del Ser, Javier; Chatila, Raja; Díaz Rodríguez, Natalia Ana
Publisher
Elsevier
Subject
Explainable artificial intelligence; Computer vision; Deep learning; Part-based object classification; Compositional models; Neural-symbolic learning and reasoning
Date
2022-09-26
Bibliographic reference
Published version: Adrien Bennetot... [et al.]. Greybox XAI: A Neural-Symbolic learning framework to produce interpretable predictions for image classification. Knowledge-Based Systems, Volume 258, 2022, 109947, ISSN 0950-7051. https://doi.org/10.1016/j.knosys.2022.109947
Sponsorship
French ANRT (Association Nationale Recherche Technologie); SEGULA Technologies; Juan de la Cierva Incorporación grant JC2019-039152-I (MCIN/AEI, "ESF Investing in your future"); Google Research Scholar Program; Department of Education of the Basque Government (Consolidated Research Group MATHMODE, IT1456-22)
Abstract
Although Deep Neural Networks (DNNs) have great generalization and prediction capabilities, their
functioning does not allow for a detailed explanation of their behavior. Opaque deep learning models are
increasingly used to make important predictions in critical environments, and the danger is that they make
and use predictions that cannot be justified or legitimized. Several eXplainable Artificial Intelligence (XAI)
methods that separate explanations from machine learning models have emerged, but they have shortcomings
in faithfulness to the model's actual functioning and in robustness. As a result, there is widespread agreement
on the importance of endowing Deep Learning models with explanatory capabilities, so that they can
themselves provide an answer to why a particular prediction was made. First, we address the problem
of the lack of universal criteria for XAI by formalizing what an explanation is. We also introduce a
set of axioms and definitions to clarify XAI from a mathematical perspective. Finally, we present the
Greybox XAI, a framework that composes a DNN and a transparent model thanks to the use of a symbolic
Knowledge Base (KB). We extract a KB from the dataset and use it to train a transparent model (i.e., a
logistic regression). An encoder-decoder architecture is trained on RGB images to produce an output
similar to the KB used by the transparent model. Once the two models are trained independently, they
are used compositionally to form an explainable predictive model. We show that this new architecture is
accurate and explainable on several datasets.
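The abstract describes the composition only at a high level. Below is a minimal sketch of what such a composition could look like in code, assuming PyTorch and scikit-learn; all names (TinyEncoderDecoder, N_PARTS, the random stand-in for the extracted KB) are illustrative assumptions, not the authors' implementation. The neural encoder-decoder maps an image to KB-style part-presence attributes, and a transparent logistic regression classifies from those attributes.

```python
# Sketch of the Greybox composition: neural perception -> symbolic attributes
# -> transparent classifier. Names and sizes are hypothetical, not from the paper.
import torch
import torch.nn as nn
import numpy as np
from sklearn.linear_model import LogisticRegression

N_PARTS, N_CLASSES = 8, 3  # hypothetical numbers of parts and classes

class TinyEncoderDecoder(nn.Module):
    """Maps an RGB image to per-part presence scores (a KB-like output)."""
    def __init__(self, n_parts=N_PARTS):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, n_parts, 2, stride=2),
        )

    def forward(self, x):
        part_maps = self.decoder(self.encoder(x))        # (B, n_parts, H, W)
        return torch.sigmoid(part_maps).amax(dim=(2, 3))  # per-part presence in [0, 1]

# 1) Transparent model: logistic regression trained on KB-style attribute
#    vectors (random stand-ins here for the dataset-extracted KB).
rng = np.random.default_rng(0)
kb_attrs = rng.integers(0, 2, size=(200, N_PARTS)).astype(float)
labels = rng.integers(0, N_CLASSES, size=200)
clf = LogisticRegression(max_iter=1000).fit(kb_attrs, labels)

# 2) Neural model: encoder-decoder producing KB-like outputs from images.
#    (Its training loop is omitted; per the abstract, the two models are
#    trained independently.)
perception = TinyEncoderDecoder().eval()

# 3) Composition: image -> detected parts -> transparent classification.
with torch.no_grad():
    parts = perception(torch.rand(1, 3, 64, 64)).numpy()
pred = clf.predict(parts)
print("detected parts:", np.round(parts, 2), "-> class", pred)
```

In this sketch the explanation comes from the intermediate attribute vector: the final decision is a linear function of human-interpretable part presences, so each prediction can be traced back to which parts the encoder-decoder detected in the image.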