Explainable classifier with adaptive optimisation for medical data
Metadata
Author
Trillo Vílchez, José Ramón; Moral Ávila, María José Del; Tapia García, Juan Miguel; García Cabello, Julia; Cabrerizo Lorite, Francisco Javier
Publisher
Springer Nature
Subject
Explainable Artificial Intelligence; Fuzzy classifier; Genetic algorithms
Date
2026-02-06
Bibliographic reference
Trillo, J.R., Del Moral, M.J., Tapia, J.M. et al. Explainable classifier with adaptive optimisation for medical data. Appl Intell 56, 77 (2026). https://doi.org/10.1007/s10489-025-07081-1
Sponsor
MICIU/AEI/10.13039/501100011033 PID2022-139297OB-I00; ERDF/EU; Regional Ministry of University, Research and Innovation C-ING-165-UGR23; European Union - Andalusia ERDF Program 2021-2027; Universidad de Granada/CBUA
Abstract
Artificial Intelligence (AI) has become increasingly important in critical domains such as medicine, where accurate and interpretable decision-making is essential. However, many high-performing AI models operate as “black boxes”, limiting transparency and making it difficult for clinicians to understand or verify predictions. To address this challenge, we present an eXplainable Artificial Intelligence (XAI) framework that integrates a fuzzy rule-based classifier with genetic algorithms and 2-tuple linguistic representations. The method incrementally generates general fuzzy rules, introduces fuzzy exception rules to capture atypical cases, and applies rule selection and parameter tuning to enhance both accuracy and interpretability. Experiments on nine medical datasets demonstrate that our approach achieves competitive or superior accuracy compared to state-of-the-art algorithms, while requiring fewer rules. These results show that the method not only improves predictive performance but also provides clear, human-readable explanations for each decision, thereby increasing trust and facilitating its application in medical practice.
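To make the abstract's pipeline concrete, the following is a minimal sketch of a fuzzy rule-based classifier with general rules and higher-priority exception rules. All membership parameters, feature indices, rule antecedents, and class labels here are hypothetical illustrations; the paper's actual rules and parameters are learned by a genetic algorithm and 2-tuple tuning, which this sketch does not reproduce.

```python
# Hypothetical illustration of a fuzzy rule-based classifier:
# general rules cover typical cases, exception rules capture atypical ones.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x == b:
        return 1.0
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic terms shared by all (normalised) features: name -> (a, b, c).
TERMS = {
    "low":    (0.0, 0.0, 0.5),
    "medium": (0.0, 0.5, 1.0),
    "high":   (0.5, 1.0, 1.0),
}

# A rule pairs an antecedent (feature index -> linguistic term) with a class.
GENERAL_RULES = [
    ({0: "high", 1: "high"}, "ill"),
    ({0: "low"}, "healthy"),
]
# Exception rules are more specific and override the general rules when fired.
EXCEPTION_RULES = [
    ({0: "low", 1: "high"}, "ill"),
]

def firing_strength(antecedent, x):
    """Min t-norm over the memberships of all antecedent conditions."""
    return min(tri(x[i], *TERMS[term]) for i, term in antecedent.items())

def classify(x):
    """Exception rules take priority when they fire strongly; otherwise the
    general rule with the highest firing strength decides the class."""
    for antecedent, label in EXCEPTION_RULES:
        if firing_strength(antecedent, x) > 0.5:
            return label
    best_antecedent, best_label = max(
        GENERAL_RULES, key=lambda r: firing_strength(r[0], x)
    )
    return best_label
```

Because every decision traces back to a small set of human-readable rules ("IF feature 0 is low AND feature 1 is high THEN ill"), each prediction comes with an explanation, which is the interpretability property the abstract emphasises.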