Paying attention to cardiac surgical risk: An interpretable machine learning approach using an uncertainty-aware attentive neural network
Metadata
Publisher
PLoS ONE
Date
2023-08-30
Bibliographic reference
Penny-Dimri JC, Bergmeir C, Reid CM, Williams-Spence J, Cochrane AD, Smith JA (2023) Paying attention to cardiac surgical risk: An interpretable machine learning approach using an uncertainty-aware attentive neural network. PLoS ONE 18(8): e0289930. https://doi.org/10.1371/journal.pone.0289930
Sponsorship
The ANZSCTS Cardiac Surgery Database Program is funded by the Department of Health (Victoria), the Clinical Excellence Commission (NSW), and Queensland Health (QLD). ANZSCTS Database Research activities are supported through a National Health and Medical Research Council Principal Research Fellowship (APP 1136372) and Program Grant (APP 1092642).
Abstract
Machine learning (ML) is increasingly applied to predict adverse postoperative outcomes in
cardiac surgery. Commonly used ML models fail to translate to clinical practice because they lack model explainability, offer limited uncertainty quantification, and cannot handle missing data. We aimed to develop and benchmark a novel ML approach, the uncertainty-aware attention network (UAN), to overcome these common limitations. Two Bayesian uncertainty quantification methods were tested: generalized variational inference (GVI) and a posterior network (PN). The UAN models were compared with an ensemble of XGBoost models and
a Bayesian logistic regression model (LR) with imputation. The derivation dataset consisted of 153,932 surgery events from the Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) Cardiac Surgery Database. The external validation dataset consisted of 7,343 surgery events extracted from the Medical Information Mart for Intensive Care (MIMIC-III) critical care database. The highest-performing model on the external
validation dataset was a UAN-GVI with an area under the receiver operating characteristic
curve (AUC) of 0.78 (0.01). Model performance improved on high-confidence samples
with an AUC of 0.81 (0.01). Confidence calibration for aleatoric uncertainty was excellent for
all models. Calibration for epistemic uncertainty was more variable, with an ensemble of
XGBoost models performing the best with an AUC of 0.84 (0.08). Epistemic uncertainty was
improved using the PN approach, compared to GVI. The UAN uses an interpretable and flexible deep learning approach to provide estimates of model uncertainty alongside state-of-the-art predictions. The model has been made freely available as an easy-to-use web application, demonstrating that, by designing uncertainty-aware models with innately explainable predictions, deep learning may become more suitable for routine clinical use.
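The abstract reports higher discrimination when evaluation is restricted to high-confidence samples. As a rough illustration of that evaluation pattern only (not the authors' code or data), the sketch below ranks predictions by a per-sample epistemic uncertainty estimate and recomputes the AUC on the most confident half; the arrays y_true, y_prob, and epistemic are hypothetical placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical arrays: y_true holds observed outcomes, y_prob holds
# predicted probabilities, and epistemic holds per-sample epistemic
# uncertainty estimates from an uncertainty-aware model.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_prob = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=1000), 0, 1)
epistemic = rng.uniform(0, 1, size=1000)

# AUC over all samples.
auc_all = roc_auc_score(y_true, y_prob)

# AUC restricted to the most confident half of samples
# (lowest epistemic uncertainty).
threshold = np.quantile(epistemic, 0.5)
mask = epistemic <= threshold
auc_confident = roc_auc_score(y_true[mask], y_prob[mask])

print(f"AUC (all samples): {auc_all:.3f}")
print(f"AUC (high-confidence subset): {auc_confident:.3f}")
```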