Delta. Comparison of agreement on a nominal scale between two raters and assessment of the degree of knowledge in multiple-choice tests.
Identifiers
URI: https://hdl.handle.net/10481/84794
Subject
Agreement; Multiple-choice tests; Software
Date
2023-02-02
Bibliographic reference
Femia Marzo, P. & Martín Andrés, A. (2023). Software Delta: Degree of agreement on a nominal scale between two raters and assessment of the degree of knowledge in multiple-choice tests.
Abstract
When two raters independently classify n objects into K nominal categories, the level of agreement between them is usually assessed by means of Cohen’s kappa coefficient. However, the kappa coefficient has been the subject of several criticisms. In addition, when a more detailed analysis is needed, the degree of agreement must be evaluated class by class, and traditionally non-chance-corrected indexes are used for this purpose. The Delta model does not suffer from the aforementioned limitations of kappa, and it allows chance-corrected measures of agreement to be defined class by class.
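For context, the sketch below illustrates the chance-corrected agreement idea the abstract refers to, using Cohen's kappa computed from a K x K contingency table of the two raters' classifications. This is not the Delta model itself, whose formulas are given in the authors' publications; the function name and the example table are illustrative assumptions.

```python
import numpy as np

def cohen_kappa(table) -> float:
    """Cohen's kappa for a K x K contingency table (rows: rater 1,
    columns: rater 2): kappa = (p_o - p_e) / (1 - p_e), where p_o is
    the observed proportion of agreement and p_e is the agreement
    expected by chance from the marginal totals."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_o = np.trace(table) / n                             # observed agreement
    p_e = (table.sum(axis=1) @ table.sum(axis=0)) / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters classify n = 100 objects into K = 3
# nominal categories; the diagonal holds the objects they agree on.
table = [[30, 5, 2],
         [4, 25, 6],
         [1, 7, 20]]
print(round(cohen_kappa(table), 3))  # ~0.623
```

Kappa summarizes agreement for the whole table in a single chance-corrected number; the class-by-class analysis mentioned in the abstract is what the Delta model adds beyond this kind of global index.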