Frameworks encompassing intersectional perspective of artificial intelligence in healthcare. Scoping review
Metadata
Publisher
Elsevier
Subject
Artificial Intelligence (AI); Intersectional bias; Healthcare frameworks
Date
2026-06
Bibliographic reference
Amaya-Santos, S., Vargas, C., & Bermúdez-Tamayo, C. (2026). Frameworks encompassing intersectional perspective of artificial intelligence in healthcare. Scoping review. Public Health in Practice, 11, 100713. https://doi.org/10.1016/j.puhip.2025.100713
Abstract
Objectives:
This study systematically evaluates how existing AI frameworks in healthcare address intersectional bias across the AI lifecycle and explores the mitigation strategies proposed.
Study design:
Scoping review.
Methods:
A scoping review was conducted per PRISMA-ScR guidelines, analyzing studies from 2014 to 2024. Searches included MEDLINE (Ovid), PubMed, EMBASE (Ovid), SCOPUS, ESCI, IEEE Xplore, and Google Scholar. Data were extracted on bias-related challenges and mitigation strategies across AI lifecycle phases (development, validation, implementation, monitoring). Studies were classified by level of inclusivity (high, medium, or low).
Results:
Of 374 records, 43 studies met inclusion criteria, primarily from high-income countries. Gender/sex (51.2%) and race/ethnicity (44.2%) were the most addressed dimensions, while disability (14%) and citizenship (9.3%) were least addressed. Inclusivity was categorized as high (21 studies, 48.8%), medium (23.2%), or low (27.9%). Overall, 14 biases and 21 mitigation strategies were identified.
Conclusions:
Significant gaps remain in addressing intersectional biases in AI frameworks, particularly for underrepresented groups such as individuals with disabilities and non-citizens. Although many frameworks demonstrate efforts toward inclusivity, attention to intersectionality remains uneven and inconsistent. Mapping biases to lifecycle phases highlights actionable strategies to improve equity and inclusivity in AI-driven healthcare. These findings provide guidance for researchers, policymakers, and developers seeking to create equitable and responsible AI systems.