Performance of Cross-Validated Targeted Maximum Likelihood Estimation
Metadata
Publisher
John Wiley & Sons Ltd
Subject
Causal inference; Data sparsity; Donsker class condition
Date
2025-07-17
Bibliographic reference
M. J. Smith, R. V. Phillips, C. Maringe, and M. A. Luque-Fernandez, “Performance of Cross-Validated Targeted Maximum Likelihood Estimation,” Statistics in Medicine 44, no. 15-17 (2025): e70185, https://doi.org/10.1002/sim.70185.
Abstract
Background: Advanced methods for causal inference, such as targeted maximum likelihood estimation (TMLE), require specific convergence rates and the Donsker class condition for valid statistical estimation and inference. When differentiability fails due to data sparsity or near-positivity violations, the Donsker class condition is violated. In such instances, the bias of the targeted estimand is inflated and its variance estimate is anti-conservative, leading to poor confidence interval coverage. Cross-validation of the TMLE algorithm (CVTMLE) is a straightforward yet effective way to ensure efficiency, especially in settings where the Donsker class condition is violated, such as under random or near-positivity violations. We aim to investigate the performance of CVTMLE compared to TMLE in various settings.
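
For readers unfamiliar with the role of the Donsker condition, the standard first-order expansion of the TMLE (a textbook summary, not taken from this paper) makes the failure mode explicit:

\hat{\psi}_n - \psi_0 = (P_n - P_0)\, D^*(\hat{P}_n) + R_2(\hat{P}_n, P_0)

where D^* is the efficient influence function and R_2 is a second-order remainder. Valid inference requires the empirical-process term to be o_P(n^{-1/2}), which is guaranteed when D^*(\hat{P}_n) stays within a P_0-Donsker class; flexible, overfit initial estimators can leave that class, whereas cross-fitting, as in CVTMLE, removes the Donsker requirement by estimating the nuisance parameters on separate folds.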
Methods: We used the data-generating mechanism described in Leger et al. (2022) to run a Monte Carlo experiment under different Donsker class violations. We then evaluated the statistical performance of TMLE and CVTMLE with different super learner libraries, with and without regression tree methods.
Results: We found that CVTMLE vastly improves confidence interval coverage without adversely affecting bias, particularly in settings with small sample sizes and near-positivity violations. Furthermore, incorporating regression trees into the super learner ensemble that provides standard TMLE's initial estimates increases bias and reduces the variance estimate, leading to invalid statistical inference.
Conclusions: We show through simulations that CVTMLE is much less sensitive to the choice of super learner library and thereby provides better estimation and inference when the library includes more flexible candidate learners that are prone to overfitting.
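
The cross-fitting idea is easy to prototype. Below is a minimal Python sketch of a CV-TMLE for the average treatment effect with a binary outcome. It is an illustration under assumptions, not the authors' implementation: the super learner ensembles used in the paper are replaced by single logistic regressions, and the simulated data and variable names (W, A, Y) are invented for the example.

import numpy as np
import statsmodels.api as sm
from scipy.special import logit, expit
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)

# Illustrative simulated data: three baseline covariates W, binary
# treatment A, and binary outcome Y (the data-generating process is assumed).
n = 1000
W = rng.normal(size=(n, 3))
A = rng.binomial(1, expit(0.5 * W[:, 0] - 0.3 * W[:, 1]))
Y = rng.binomial(1, expit(0.4 * A + 0.5 * W[:, 1] + 0.2 * W[:, 2]))

# Cross-validated initial estimates: the outcome regression Q(a, W) and the
# propensity score g(W) are fit on training folds and predicted on the
# held-out fold; this sample splitting is what relaxes the Donsker condition.
Q0, Q1, g = np.empty(n), np.empty(n), np.empty(n)
XA = np.column_stack([A, W])
for train, test in KFold(n_splits=10, shuffle=True, random_state=1).split(W):
    qfit = LogisticRegression(max_iter=1000).fit(XA[train], Y[train])
    Q0[test] = qfit.predict_proba(np.column_stack([np.zeros(len(test)), W[test]]))[:, 1]
    Q1[test] = qfit.predict_proba(np.column_stack([np.ones(len(test)), W[test]]))[:, 1]
    gfit = LogisticRegression(max_iter=1000).fit(W[train], A[train])
    g[test] = gfit.predict_proba(W[test])[:, 1]

g = np.clip(g, 0.025, 0.975)  # bound g away from 0/1 (near-positivity guard)

def clip(p):
    return np.clip(p, 1e-6, 1 - 1e-6)

QA = np.where(A == 1, Q1, Q0)

# Single pooled targeting (fluctuation) step on the out-of-fold predictions:
# logistic regression of Y on the clever covariate H with offset logit(QA).
H1, H0 = 1.0 / g, -1.0 / (1.0 - g)
HA = np.where(A == 1, H1, H0)
eps = sm.GLM(Y, HA.reshape(-1, 1), family=sm.families.Binomial(),
             offset=logit(clip(QA))).fit().params[0]
Q1s = expit(logit(clip(Q1)) + eps * H1)
Q0s = expit(logit(clip(Q0)) + eps * H0)

# ATE estimate with influence-curve-based standard error and 95% CI.
ate = np.mean(Q1s - Q0s)
ic = HA * (Y - np.where(A == 1, Q1s, Q0s)) + (Q1s - Q0s) - ate
se = np.sqrt(np.var(ic) / n)
print(f"ATE: {ate:.3f}  95% CI: ({ate - 1.96 * se:.3f}, {ate + 1.96 * se:.3f})")

In practice the logistic regressions would be replaced by super learner libraries, and, as in the paper, coverage of these intervals would be compared across Donsker-violating scenarios.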