A Practical Tutorial on Explainable AI Techniques
Metadata
Author
Bennetot, Adrien; Donadello, Ivan; El Qadi El Haouari, Ayoub; Dragoni, Mauro; Frossard, Thomas; Wagner, Benedikt; Saranti, Anna; Tulli, Silvia; Trocan, Maria; Holzinger, Andreas; d'Ávila Garcez, Artur; Díaz Rodríguez, Natalia Ana
Publisher
Association for Computing Machinery
Subject
Computer systems organization; Redundancy; Robotics; Network reliability
Date
2024-11-07
Bibliographic reference
Bennetot, A. et al. A Practical Tutorial on Explainable AI Techniques. ACM Computing Surveys, Article No. 50, pp. 1-44. https://doi.org/10.1145/3670685
Sponsor
Juan de la Cierva Incorporación grant IJC2019-039152-I funded by MCIN/AEI/10.13039/501100011033 and by “ESF Investing in your future”; Austrian Science Fund (FWF), Project P-32554; MSCA Postdoctoral Fellowship (Grant agreement ID 101059332); Google Research Scholar Program; 2022 Leonardo Grant for Researchers and Cultural Creators from the BBVA Foundation; European Union’s Horizon 2020 research and innovation programme under grant agreement No. 765955 (ANIMATAS Innovative Training Network); European Union’s Horizon 2020 research and innovation programme under grant agreement No. 826078 (Feature Cloud); PNRR project INEST - Interconnected North-East Innovation Ecosystem (ECS00000043), under the NRRP MUR program funded by NextGenerationEU; PNRR project FAIR - Future AI Research (PE00000013), under the NRRP MUR program funded by NextGenerationEU
Abstract
Recent years have been characterized by an upsurge in opaque automatic decision-support systems, such as
Deep Neural Networks (DNNs). Although DNNs have great generalization and prediction abilities, it is difficult
to obtain detailed explanations for their behavior. As opaque Machine Learning models are increasingly
being employed to make important predictions in critical domains, there is a danger of making and relying on
decisions that are not justifiable or legitimate. Therefore, there is broad agreement on the importance of
endowing DNNs with explainability. EXplainable Artificial Intelligence (XAI) techniques can serve to verify
and certify model outputs and enhance them with desirable notions such as trustworthiness, accountability,
transparency, and fairness. This guide is intended to be the go-to handbook for anyone with a computer
science background aiming to obtain intuitive insights from Machine Learning models, accompanied by
out-of-the-box explanations. The article aims to rectify the lack of a practical XAI guide by applying XAI
techniques to, in particular, day-to-day models, datasets, and use-cases. In each chapter, the reader will find a
description of the proposed method as well as one or several examples of use with Python notebooks. These
can easily be adapted to specific applications. We also explain the prerequisites for using each technique,
what the user will learn from it, and which tasks it is aimed at.
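
As an illustration of the kind of out-of-the-box explanation such notebooks produce, below is a minimal sketch of a post-hoc feature-attribution workflow. It assumes the shap and scikit-learn Python libraries are installed (pip install shap scikit-learn); the model and dataset are placeholders chosen for illustration, and the sketch is not one of the article's notebooks.

# Minimal sketch: post-hoc SHAP explanation of an opaque model.
# Illustrative only; not one of the article's notebooks.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Train an opaque model on an everyday tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# TreeExplainer computes SHAP attributions efficiently for tree ensembles:
# one additive contribution per feature per prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global summary plot: which features drive the model's predictions overall.
shap.summary_plot(shap_values, X_test)

Swapping in a different model or dataset typically only requires changing the first few lines, which reflects the plug-and-play spirit the tutorial aims for.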