EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: The MonuMAI cultural heritage use case
Metadata
Author
Díaz Rodríguez, Natalia Ana; Castillo Lamas, Alberto; Sanchez, Jules; Franchi, Gianni; Donadello, Ivan; Tabik, Siham; Filliat, David; Cruz Cabrera, José Policarpo; Montes Soldado, Rosa Ana; Herrera Triguero, Francisco
Publisher
Elsevier
Subject
Explainable artificial intelligence; Deep learning; Neural-symbolic learning; Expert knowledge graphs; Compositionality; Part-based object detection and classification
Date
2021-10-13
Bibliographic reference
Natalia Díaz-Rodríguez et al. EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: The MonuMAI cultural heritage use case. Information Fusion, Volume 79, 2022, Pages 58-83, ISSN 1566-2535, https://doi.org/10.1016/j.inffus.2021.09.022
Sponsor
French National Research Agency (ANR); SEGULA Technologies; Andalusian Excellence project P18FR-4961; Spanish National Project PID2020-119478GB-I00; Spanish Government RYC-201518136; Spanish Government Juan de la Cierva Incorporación contract IJC2019-039152-I
Abstract
The latest Deep Learning (DL) models for detection and classification have achieved unprecedented performance over classical machine learning algorithms. However, DL models are black-box methods that are hard to debug, interpret, and certify. DL alone cannot provide explanations that can be validated by a non-technical audience such as end-users or domain experts. In contrast, symbolic AI systems that convert concepts into rules or symbols – such as knowledge graphs – are easier to explain. However, they exhibit lower generalization and scaling capabilities. A very important challenge is to fuse DL representations with expert knowledge. One way to address this challenge, as well as the performance-explainability trade-off, is to leverage the best of both streams without obviating domain expert knowledge. In this paper, we tackle this problem by considering that the symbolic knowledge is expressed in the form of a domain expert knowledge graph. We present the eXplainable Neural-symbolic learning (X-NeSyL) methodology, designed to learn both symbolic and deep representations, together with an explainability metric to assess the level of alignment between machine and human expert explanations. The ultimate objective is to fuse DL representations with expert domain knowledge during the learning process so that it serves as a sound basis for explainability. In particular, the X-NeSyL methodology involves the concrete use of two notions of explanation, at inference and training time respectively: (1) EXPLANet: Expert-aligned eXplainable Part-based cLAssifier NETwork Architecture, a compositional convolutional neural network that makes use of symbolic representations, and (2) SHAP-Backprop, an explainable AI-informed training procedure that corrects and guides the DL process to align with such symbolic representations in the form of knowledge graphs. We showcase the X-NeSyL methodology on the MonuMAI dataset for monument facade image classification, and demonstrate that with our approach it is possible to improve explainability and performance at the same time.
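To make the described mechanism concrete, below is a minimal Python (PyTorch) sketch of a SHAP-Backprop-style training step on a part-based classifier head. Everything in it is an assumption for illustration rather than the authors' implementation: the PartBasedClassifier head stands in for EXPLANet, the binary kg matrix is a hypothetical flattening of the expert knowledge graph (kg[c, p] = 1 if part p is consistent with style class c), and a simple gradient-based attribution is used as a stand-in for the SHAP values the paper computes.

# Hedged sketch of a SHAP-Backprop-style training step (illustrative only;
# part counts, class counts, and the knowledge-graph matrix are hypothetical).
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_PARTS, NUM_CLASSES = 4, 3

# Expert knowledge graph flattened to a binary matrix:
# kg[c, p] = 1 if architectural part p is consistent with style class c.
kg = torch.tensor([[1, 1, 0, 0],
                   [0, 1, 1, 0],
                   [0, 0, 1, 1]], dtype=torch.float32)

class PartBasedClassifier(nn.Module):
    """EXPLANet-style head: classify a facade from aggregated part scores."""
    def __init__(self):
        super().__init__()
        self.classifier = nn.Linear(NUM_PARTS, NUM_CLASSES)

    def forward(self, part_scores):
        # part_scores: (batch, NUM_PARTS), e.g. pooled part-detector confidences.
        return self.classifier(part_scores)

def misalignment_penalty(part_scores, logits, labels):
    """Penalize attribution mass on parts the knowledge graph forbids.

    A gradient-based attribution stands in for SHAP values here; the paper
    uses SHAP attributions proper.
    """
    class_logit = logits.gather(1, labels.unsqueeze(1)).sum()
    attrib = torch.autograd.grad(class_logit, part_scores, create_graph=True)[0]
    forbidden = 1.0 - kg[labels]  # (batch, NUM_PARTS): parts inconsistent with the label
    return (attrib.abs() * forbidden).sum(dim=1).mean()

model = PartBasedClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch: random part scores and labels (placeholders for detector output).
part_scores = torch.rand(8, NUM_PARTS, requires_grad=True)
labels = torch.randint(0, NUM_CLASSES, (8,))

logits = model(part_scores)
# Standard classification loss plus the knowledge-graph misalignment penalty.
loss = F.cross_entropy(logits, labels) + 0.1 * misalignment_penalty(part_scores, logits, labels)
opt.zero_grad()
loss.backward()
opt.step()

The design choice the sketch illustrates is the one the abstract names: the attribution of each part to the predicted class is compared against what the expert knowledge graph permits, and any attribution placed on forbidden parts feeds back into the loss, steering training toward explanations aligned with expert knowledge.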