Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
Identifiers
URI: https://hdl.handle.net/10481/77982
Author
Barredo Arrieta, Alejandro; Tabik, Siham; García López, Salvador; Molina Cabrera, Daniel; Herrera Triguero, Francisco; Díaz Rodríguez, Natalia Ana
Publisher
Elsevier
Subject
Explainable artificial intelligence; Machine learning; Deep learning; Data fusion; Interpretability; Comprehensibility; Transparency; Privacy; Fairness; Accountability; Responsible Artificial Intelligence; Artificial intelligence
Date
2019-12-25
Bibliographic reference
Published version: Alejandro Barredo Arrieta... [et al.]. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, Volume 58, 2020, Pages 82-115, ISSN 1566-2535. https://doi.org/10.1016/j.inffus.2019.12.012
Sponsor
Basque Government; Consolidated Research Group MATHMODE - Department of Education of the Basque Government IT1294-19; Spanish Government; European Commission TIN2017-89517-P; BBVA Foundation through its Ayudas Fundacion BBVA a Equipos de Investigacion Cientifica 2018 call (DeepSCOP project); European Commission 825619
Abstract
In the last few years, Artificial Intelligence (AI) has gained notable momentum that, if harnessed appropriately, may deliver the best of expectations across many application sectors. For this to occur soon in Machine Learning, the whole community faces the barrier of explainability, a problem inherent to the latest techniques brought by sub-symbolism (e.g., ensembles or Deep Neural Networks) that was not present in the previous wave of AI (namely, expert systems and rule-based models). Paradigms addressing this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in this article examines the existing literature and contributions already made in the field of XAI, including an outlook on what remains to be achieved. For this purpose, we summarize previous efforts to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers such prior conceptual propositions with a major focus on the audience for which explainability is sought. Starting from this definition, we propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods, for which a second dedicated taxonomy is built and examined in detail. This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material to stimulate future research advances, and also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias against its lack of interpretability.
Unless otherwise noted, the item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International.
Related items
Showing items related by title, author or subject.