Errors of measurement in scientometrics: Classification schemes and document types in citation and publication rankings
Identifiers
URI: https://hdl.handle.net/10481/94477
Author
Robinson-Garcia, Nicolas; Vargas Quesada, Benjamín; Torres Salinas, Daniel; Chinchilla-Rodríguez, Zaida; Gorraiz, Juan
Subject
Responsible metrics; institutions; rankings; citation indicators; publication counts; classifications of science; professional bibliometrics
Date
2024
Bibliographic reference
Robinson-Garcia, N., Vargas-Quesada, B., Torres-Salinas, D., Chinchilla-Rodriguez, Z., & Gorraiz, J. (2024). Errors of measurement in scientometrics: Classification schemes and document types in citation and publication rankings. Scientometrics. https://doi.org/10.5281/zenodo.13752258
Funding
Funding from the Spanish Ministry of Science (COMPARE project 30PID2020-117007RA-I00 and RESPONSIBLE project PID2021-128429NB-I00). Nicolas Robinson-Garcia is funded by a Ramón y Cajal grant from the Spanish Ministry of Science and Innovation (REF: RYC2019-027886-32I).
Abstract
This research article delves into methodological challenges in scientometrics, focusing on errors stemming from the selection of classification schemes and document types. Through two case studies, we examine the impact of these methodological choices on publication and citation rankings of institutions. We compute seven bibliometric indicators for over 8,434 institutions using 23 different classification schemes derived from Clarivate's InCites suite, and compare results when including all document types versus only citable items. Given the critical role university rankings play in research management and their methodological controversies, our goal is to propose a methodology that incorporates uncertainty levels when reporting bibliometric performance in professional practice. We then examine differences in error estimates within research fields as well as between institutions from different geographic regions. The findings underscore the importance of responsible metric use in research evaluation, providing valuable insights for both bibliometricians and consumers of such data.
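The general idea of reporting indicator values together with the variation introduced by the choice of classification scheme can be illustrated with a minimal Python sketch. This is not the authors' actual pipeline; the data structure, institution names, and the citations-per-paper indicator below are hypothetical placeholders used only to show how a spread across schemes could accompany a ranked value.

# Illustrative sketch: compute one indicator under several classification
# schemes and report the spread (min-mean-max) as a crude uncertainty interval.
# All names and figures below are hypothetical toy data.

from statistics import mean

# records[scheme][institution] -> (publications, citations)
records = {
    "scheme_A": {"Univ 1": (120, 900), "Univ 2": (80, 640)},
    "scheme_B": {"Univ 1": (110, 850), "Univ 2": (95, 700)},
}

def citations_per_paper(pubs, cites):
    # One of several possible indicators; others could be slotted in here.
    return cites / pubs if pubs else 0.0

def indicator_with_uncertainty(records, institution):
    # Evaluate the indicator under every scheme that covers the institution
    # and summarise the resulting values as (min, mean, max).
    values = [
        citations_per_paper(*records[scheme][institution])
        for scheme in records
        if institution in records[scheme]
    ]
    return min(values), mean(values), max(values)

for inst in ("Univ 1", "Univ 2"):
    low, mid, high = indicator_with_uncertainty(records, inst)
    print(f"{inst}: {mid:.2f} citations/paper (range {low:.2f}-{high:.2f} across schemes)")

Reporting the range alongside the point value, rather than a single figure, is one simple way to convey the uncertainty that different classification schemes (or document-type choices) introduce into institutional rankings.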