Show simple item record

dc.contributor.author: Trillo Domínguez, Magdalena
dc.contributor.author: Martín Neira, Juan Ignacio
dc.contributor.author: Olvera Lobo, María Dolores
dc.date.accessioned: 2026-03-06T08:25:40Z
dc.date.available: 2026-03-06T08:25:40Z
dc.date.issued: 2026-03-05
dc.identifier.citation: Trillo Domínguez, M., Martín Neira, J. I., & Olvera Lobo, M. D. (2026). Dr. Google vs. Dr. ChatGPT in Online Health Self-Consultation: A Scoping Review of Accuracy, Bias, and Actionability (2023–2025). Informatics, 13(3), 41. https://doi.org/10.3390/informatics13030041 [es_ES]
dc.identifier.issn: 2227-9709
dc.identifier.uri: https://hdl.handle.net/10481/111927
dc.description: This work has been supported by the project PID 2022-14015OB-100 funded by MCIN/AEI/10.13039/501100011033 (Ministry of Science and Innovation, State Research Agency, Spain) and by “ERDF A way of making Europe” (European Union). [es_ES]
dc.description.abstract: The rapid adoption of generative artificial intelligence (AI) systems has transformed health information seeking, raising questions about their role as intermediaries in non-professional health self-consultation. This study compares Google Search and ChatGPT as paradigmatic models of algorithmic mediation of health information, focusing on accuracy, biases, information quality and potential harms. A scoping review was conducted following the PRISMA-ScR framework. Empirical studies published between 2023 and 2025 were retrieved from PubMed/MEDLINE, Web of Science (WoS) and Scopus. After screening and eligibility assessment, 63 original empirical studies were included. The results indicate that ChatGPT consistently outperforms Google Search in terms of factual accuracy and information quality, achieving moderate to high DISCERN scores (4–5 out of 5) and showing moderate to strong correlations with expert clinical evaluations. Users also tend to value ChatGPT responses positively due to their clarity, coherence and perceived empathy. However, these advantages coexist with significant structural limitations. Hallucinations are reported in an estimated 31–45% of references, source provenance remains opaque, linguistic complexity is high, and actionability is limited, with only around 40% of responses providing clearly actionable guidance. In contrast, Google Search offers greater source traceability and verifiability, but at the cost of fragmented information and higher exposure to commercial content. The review identifies critical research gaps related to behavioural impacts, critical health literacy, equity of access, professional integration and vulnerable contexts. Overall, the findings highlight the need for hybrid human–AI models, professional mediation and critical AI literacy to ensure safe, equitable and trustworthy use of generative AI in public health communication. [es_ES]
dc.description.sponsorship: MCIN/AEI/10.13039/501100011033 (PID 2022-14015OB-100) [es_ES]
dc.description.sponsorship: ERDF [es_ES]
dc.language.iso: eng [es_ES]
dc.publisher: MDPI [es_ES]
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Health self-consultation [es_ES]
dc.subject: Generative artificial intelligence [es_ES]
dc.subject: ChatGPT [es_ES]
dc.title: Dr. Google vs. Dr. ChatGPT in Online Health Self-Consultation: A Scoping Review of Accuracy, Bias, and Actionability (2023–2025) [es_ES]
dc.type: journal article [es_ES]
dc.rights.accessRights: open access [es_ES]
dc.identifier.doi: 10.3390/informatics13030041
dc.type.hasVersion: VoR [es_ES]

