
dc.contributor.author: Delgado Rodríguez, Janet
dc.contributor.author: Rueda Etxebarria, Jon
dc.date.accessioned: 2022-09-05T10:48:15Z
dc.date.available: 2022-09-05T10:48:15Z
dc.date.issued: 2022-07-20
dc.identifier.citation: Delgado, J... [et al.]. Bias in algorithms of AI systems developed for COVID-19: A scoping review. Bioethical Inquiry (2022). [https://doi.org/10.1007/s11673-022-10200-z]
dc.identifier.uri: http://hdl.handle.net/10481/76524
dc.description.abstract: To analyze which ethically relevant biases have been identified in the academic literature in artificial intelligence (AI) algorithms developed either for patient risk prediction and triage, or for contact tracing, to deal with the COVID-19 pandemic. Additionally, to investigate whether the role of social determinants of health (SDOH) has been considered in these AI developments. We conducted a scoping review of the literature covering publications from March 2020 to April 2021. Studies mentioning biases in AI algorithms developed for contact tracing and for medical triage or risk prediction regarding COVID-19 were included. From 1054 identified articles, 20 studies were finally included. We propose a typology of biases identified in the literature, based on biases, limitations, and other ethical issues in both areas of analysis. Results on health disparities and SDOH were classified into five categories: racial disparities, biased data, socioeconomic disparities, unequal accessibility and workforce, and information communication. SDOH need to be considered in the clinical context, where they still seem underestimated. Epidemiological conditions depend on geographic location, so the use of local data in studies to develop international solutions may increase some biases. Gender bias was not specifically addressed in the articles included. The main biases are related to data collection and management. Ethical problems related to privacy, consent, and lack of regulation have been identified in contact tracing, while some bias-related health inequalities have been highlighted. There is a need for further research focusing on SDOH and these specific AI applications.
dc.description.sponsorship: Universitat Autonoma de Barcelona
dc.description.sponsorship: BBVA Foundation
dc.language.iso: eng
dc.publisher: Springer
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Artificial intelligence
dc.subject: Bias
dc.subject: Digital contact tracing
dc.subject: COVID-19
dc.subject: Patient risk prediction
dc.title: Bias in algorithms of AI systems developed for COVID-19: A scoping review
dc.type: journal article
dc.rights.accessRights: open access
dc.identifier.doi: 10.1007/s11673-022-10200-z
dc.type.hasVersion: VoR

