On generating trustworthy counterfactual explanations
Metadata
Author
Del Ser, Javier; Barredo Arrieta, Alejandro; Díaz Rodríguez, Natalia Ana; Herrera Triguero, Francisco; Saranti, Anna; Holzinger, Andreas
Publisher
Elsevier
Subject
Explainable artificial intelligence; Deep learning; Counterfactual explanations
Date
2023-11-17
Bibliographic reference
J. Del Ser, A. Barredo-Arrieta, N. Díaz-Rodríguez et al. Information Sciences 655 (2024) 119898 [https://doi.org/10.1016/j.ins.2023.119898]
Sponsor
Basque Government (Eusko Jaurlaritza) through the Consolidated Research Group MATHMODE (IT1256-22); Centro para el Desarrollo Tecnológico Industrial (CDTI); European Union (AI4ES project, grant no. CER-20211030); Austrian Science Fund (FWF), Project P-32554; European Union's Horizon 2020 research and innovation programme under grant agreement No. 826078 (Feature Cloud); Juan de la Cierva Incorporación contract (IJC2019-039152-I); Google Research Scholar Programme 2021; Marie Skłodowska-Curie Actions (MSCA) Postdoctoral Fellowship, agreement ID 101059332
Abstract
Deep learning models like ChatGPT exemplify AI success but necessitate a deeper understanding of trust in critical sectors. Trust can be achieved using counterfactual explanations, which mirror how humans become familiar with unknown processes: by understanding the hypothetical input circumstances under which the output changes. We argue that generating counterfactual explanations requires attending to several aspects of the generated counterfactual instances, not just their counterfactual ability. We present a framework for generating counterfactual explanations that formulates its goal as a multiobjective optimization problem balancing three objectives: plausibility, the intensity of changes, and adversarial power. We use a generative adversarial network to model the distribution of the input data, along with a multiobjective counterfactual discovery solver balancing these objectives. We demonstrate the usefulness of the framework on six classification tasks with image and 3D data, confirming with evidence the existence of a trade-off between the objectives, the consistency of the produced counterfactual explanations with human knowledge, and the capability of the framework to unveil concept-based biases and misrepresented attributes in the input domain of the audited model. Our pioneering effort should inspire further work on the generation of plausible counterfactual explanations in real-world scenarios where attribute-/concept-based annotations are available for the domain under analysis.
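The abstract only sketches the method at a high level. As a rough illustration of the idea, and under assumptions that are entirely ours (a toy linear generator, classifier and discriminator, and a plain random search in place of the paper's multiobjective solver), the following Python sketch shows how candidate counterfactuals drawn from a generator's latent space could be scored on the three objectives and filtered down to a Pareto front:

# Minimal, purely illustrative sketch (not the authors' code): a naive random
# multi-objective search over the latent space of a pretrained generator,
# scoring candidate counterfactuals on the three objectives named in the
# abstract. Every name below (toy_generator, toy_classifier, toy_discriminator,
# the dimensions, the random search itself) is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(0)
latent_dim, data_dim = 8, 16

# Toy stand-ins for the GAN components and the audited model.
W_g = rng.normal(size=(latent_dim, data_dim))
def toy_generator(z):                      # GAN generator: latent code -> sample
    return np.tanh(z @ W_g)

w_c = rng.normal(size=data_dim)
def toy_classifier(x):                     # audited model: P(class 1 | x)
    return 1.0 / (1.0 + np.exp(-(x @ w_c)))

w_d = rng.normal(size=data_dim)
def toy_discriminator(x):                  # GAN critic: higher = more plausible
    return 1.0 / (1.0 + np.exp(-(x @ w_d)))

x_orig = toy_generator(rng.normal(size=latent_dim))  # instance being explained
orig_is_class1 = toy_classifier(x_orig) >= 0.5       # its predicted class

def objectives(z):
    """Three objectives, all to be minimized."""
    x_cf = toy_generator(z)
    p1 = toy_classifier(x_cf)
    # Adversarial power: a low probability of the original class means the
    # prediction has been flipped by the counterfactual.
    keep_original_class = p1 if orig_is_class1 else 1.0 - p1
    intensity = np.linalg.norm(x_cf - x_orig)         # intensity of changes
    implausibility = 1.0 - toy_discriminator(x_cf)    # distance from data manifold
    return np.array([keep_original_class, intensity, implausibility])

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

# Random search keeping only the non-dominated (Pareto-optimal) candidates.
pareto = []                                            # list of (objectives, latent)
for _ in range(2000):
    z = rng.normal(size=latent_dim)
    f = objectives(z)
    if any(dominates(g, f) for g, _ in pareto):
        continue
    pareto = [(g, zz) for g, zz in pareto if not dominates(f, g)] + [(f, z)]

print(f"{len(pareto)} non-dominated counterfactual candidates")
for f, _ in sorted(pareto, key=lambda t: tuple(t[0]))[:5]:
    print("orig-class prob %.3f | change intensity %.3f | implausibility %.3f" % tuple(f))

The resulting non-dominated set makes the trade-off described in the abstract concrete: no candidate in it can improve one objective (e.g., plausibility) without worsening another (e.g., the intensity of changes or the adversarial power). The paper's actual framework replaces this naive random search with a dedicated multiobjective counterfactual discovery solver.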