
dc.contributor.author: Smith, Matthew J.
dc.contributor.author: Phillips, Rachael V.
dc.contributor.author: Maringe, Camille
dc.contributor.author: Luque Fernández, Miguel Ángel
dc.date.accessioned: 2025-09-18T11:17:05Z
dc.date.available: 2025-09-18T11:17:05Z
dc.date.issued: 2025-07-17
dc.identifier.citation: M. J. Smith, R. V. Phillips, C. Maringe, and M. A. Luque-Fernandez, "Performance of Cross-Validated Targeted Maximum Likelihood Estimation," Statistics in Medicine 44, no. 15-17 (2025): e70185, https://doi.org/10.1002/sim.70185
dc.identifier.uri: https://hdl.handle.net/10481/106438
dc.description.abstract: Background: Advanced methods for causal inference, such as targeted maximum likelihood estimation (TMLE), require specific convergence rates and the Donsker class condition for valid statistical estimation and inference. In situations where there is no differentiability due to data sparsity or near-positivity violations, the Donsker class condition is violated. In such instances, the bias of the targeted estimand is inflated, and its variance is anti-conservative, leading to poor coverage. Cross-validation of the TMLE algorithm (CVTMLE) is a straightforward, yet effective way to ensure efficiency, especially in settings where the Donsker class condition is violated, such as random or near-positivity violations. We aim to investigate the performance of CVTMLE compared to TMLE in various settings. Methods: We utilized the data-generating mechanism described in Leger et al. (2022) to run a Monte Carlo experiment under different Donsker class violations. Then, we evaluated the respective statistical performances of TMLE and CVTMLE with different super learner libraries, with and without regression tree methods. Results: We found that CVTMLE vastly improves confidence interval coverage without adversely affecting bias, particularly in settings with small sample sizes and near-positivity violations. Furthermore, incorporating regression trees using standard TMLE with ensemble super learner-based initial estimates increases bias and reduces variance, leading to invalid statistical inference. Conclusions: We show through simulations that CVTMLE is much less sensitive to the choice of the super learner library and thereby provides better estimation and inference in cases where the super learner library uses more flexible candidates and is prone to overfitting.
dc.language.iso: eng
dc.publisher: John Wiley & Sons Ltd
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Causal inference
dc.subject: Data sparsity
dc.subject: Donsker class condition
dc.title: Performance of Cross-Validated Targeted Maximum Likelihood Estimation
dc.type: journal article
dc.rights.accessRights: open access
dc.identifier.doi: 10.1002/sim.70185
dc.type.hasVersion: VoR


