Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence
Metadata
Author
Holzinger, Andreas; Dehmer, Matthias; Emmert-Streib, Frank; Cucchiara, Rita; Augenstein, Isabelle; Del Ser, Javier; Samek, Wojciech; Jurisica, Igor; Díaz Rodríguez, Natalia Ana
Publisher
Elsevier
Subject
Artificial intelligence; Information fusion; Medical AI; Explainable AI; Robustness; Explainability; Trust; Graph-based machine learning; Neural-symbolic learning and reasoning
Date
2021-11-12
Bibliographic reference
Andreas Holzinger... [et al.]. Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence, Information Fusion, Volume 79, 2022, Pages 263-278, ISSN 1566-2535, https://doi.org/10.1016/j.inffus.2021.10.007
Sponsorship
Austrian Science Fund (FWF) P-32554; European Union's Horizon 2020 research and innovation program 826078 965221; Spanish Government Juan de la Cierva Incorporacion IJC2019-039152-I; DFF Sapere Aude research leader grant; Basque Government KK-2020/00049; consolidated research group MATHMODE T1294-19; Federal Ministry of Education & Research (BMBF) 01IS18025 A 01IS18037I 0310L0207C; Ontario Research Fund RDI 34876; Natural Sciences Research Council NSERC 203475; Canadian Institutes of Health Research (CIHR) 93579; Canada Foundation for Innovation CGIAR CFI 29272 225404 33536; International Business Machines (IBM); Ian Lawson van Toch Fund; Schroeder Arthritis Institute via the Toronto General and Western Hospital Foundation
Abstract
Medical artificial intelligence (AI) systems have been remarkably successful, even outperforming human
performance at certain tasks. There is no doubt that AI is important for improving human health in many ways
and will disrupt various medical workflows in the future. To use AI to solve problems in medicine beyond
the lab, in routine environments, we need to do more than just improve the performance of existing AI
methods. Robust AI solutions must be able to cope with imprecision and with missing and incorrect information,
and explain to a medical expert both the result and the process by which it was obtained. Using conceptual knowledge
as a guiding model of reality can help to develop more robust, explainable, and less biased machine learning
models that can ideally learn from less data. Achieving these goals will require an orchestrated effort that
combines three complementary Frontier Research Areas: (1) Complex Networks and their Inference, (2) Graph
causal models and counterfactuals, and (3) Verification and Explainability methods. The goal of this paper is
to describe these three areas from a unified view and to motivate how information fusion in a comprehensive
and integrative manner can not only help bring these three areas together, but also have a transformative role
by bridging the gap between research and practical applications in the context of future trustworthy medical
AI. This makes it imperative to include ethical and legal aspects as a cross-cutting discipline, because all future
solutions must not only be ethically responsible, but also legally compliant.