PLENARY: Explaining black-box models in natural language through fuzzy linguistic summaries
Metadata
Publisher
Elsevier
Subject
eXplainable Artificial Intelligence; Linguistic summaries; Granular computing; Fuzzy linguistic descriptions; Machine learning; Neural networks; Bipolar disorders
Date
2022-10-08
Bibliographic reference
Katarzyna Kaczmarek-Majer et al. PLENARY: Explaining black-box models in natural language through fuzzy linguistic summaries. Information Sciences, Volume 614, 2022, Pages 374-399, ISSN 0020-0255, https://doi.org/10.1016/j.ins.2022.10.010
Sponsorship
Small Grants Scheme within the research project "Bipolar disorder prediction with sensor-based semi-supervised Learning (BIPOLAR)" NOR/SGS/BIPOLAR/0239/2020-00; European Commission RPMA.01.02.00-14-5706/16-00; Systems Research Institute, Polish Academy of Sciences; Juan de la Cierva Incorporacion grant MCIN/AEI IJC2019-039152-I; Google Research Scholar Program; Italian Ministry of University and Research through the European PON project AIM (Attraction and International Mobility) 1852414
Abstract
We introduce an approach called PLENARY (exPlaining bLack-box modEls in Natural lAnguage thRough fuzzY linguistic summaries), an explainable classifier built on a data-driven predictive model. Neural learning is exploited to derive a predictive model based on two levels of labels associated with the data. Model explanations are then derived with the popular SHapley Additive exPlanations (SHAP) tool and conveyed in linguistic form via fuzzy linguistic summaries. Linguistic summarization translates the SHAP explanations of the model outputs into statements expressed in natural language. PLENARY accounts for the imprecision related to model outputs by summarizing them into simple linguistic statements, and for the imprecision of the data labeling process by including additional domain knowledge in the form of middle-layer labels. PLENARY is validated on preprocessed speech signals collected from the smartphones of patients with bipolar disorder and on publicly available mental health survey data. The experiments confirm that fuzzy linguistic summarization is an effective technique for supporting meta-analyses of the outputs of AI models. PLENARY also improves explainability by aggregating low-level attributes into high-level information granules and by incorporating vague domain knowledge into a multi-task, sequential and compositional multilayer perceptron. SHAP explanations translated into fuzzy linguistic summaries significantly improve understanding of the predictive modelling process and its outputs.
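To make the summarization step concrete, the minimal sketch below (not the authors' code) shows how a Yager-style fuzzy linguistic summary of the form "Most samples have a HIGH SHAP contribution for feature f" can be evaluated over an attribution matrix. The synthetic attribution values, membership functions and quantifier parameters are illustrative assumptions; in PLENARY the SHAP values come from the trained multi-task classifier and the protoforms and memberships reflect domain knowledge.

```python
import numpy as np

# Illustrative stand-in for a SHAP attribution matrix (rows = samples,
# columns = features). In PLENARY these values would be produced by a SHAP
# explainer applied to the trained neural classifier.
rng = np.random.default_rng(0)
shap_values = rng.normal(scale=0.3, size=(200, 4))
feature_names = ["f0", "f1", "f2", "f3"]

def mu_high(x, low=0.1, high=0.4):
    """Membership of |x| in the fuzzy set 'HIGH contribution' (ramp shape);
    thresholds are illustrative, not taken from the paper."""
    return np.clip((np.abs(x) - low) / (high - low), 0.0, 1.0)

def mu_most(p, a=0.3, b=0.8):
    """Zadeh-style relative quantifier 'most' applied to a proportion p."""
    return float(np.clip((p - a) / (b - a), 0.0, 1.0))

# Degree of truth of the protoform
# "Most samples have a HIGH contribution of feature f":
for j, name in enumerate(feature_names):
    proportion = mu_high(shap_values[:, j]).mean()  # sigma-count / n
    truth = mu_most(proportion)
    print(f"T('Most samples have a HIGH |SHAP| for {name}') = {truth:.2f}")
```

Summaries whose truth degree exceeds a chosen threshold can then be reported verbatim as the natural-language explanation of the model's behaviour, which is the role the fuzzy linguistic summaries play in PLENARY.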