Show simple item record

dc.contributor.author: Femia Marzo, Pedro
dc.contributor.author: Martín Andrés, Antonio
dc.date.accessioned: 2023-10-02T11:45:17Z
dc.date.available: 2023-10-02T11:45:17Z
dc.date.issued: 2023-02-02
dc.identifier.citation: Femia Marzo, P. & Martín Andrés, A. (2023). Software Delta: Degree of agreement in nominal scale between two raters and assessment of the degree of knowledge in multiple-choice tests.
dc.identifier.uri: https://hdl.handle.net/10481/84794
dc.description.abstract: When two raters independently classify n objects into K nominal categories, the level of agreement between them is usually assessed by means of Cohen's Kappa coefficient. However, the Kappa coefficient has been the subject of several criticisms. Additionally, when a more detailed analysis is needed, the degree of agreement must be evaluated class by class, and traditionally, indexes that are not corrected for chance are used for this purpose. The Delta model does not have the aforementioned limitations of Kappa, and it allows chance-corrected measures of agreement to be defined class by class.
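
For reference, the abstract's starting point is Cohen's Kappa, the standard chance-corrected agreement coefficient. The sketch below computes it from a K x K contingency table of the two raters' classifications. This is a minimal illustration of plain Kappa only, not of the Delta model or the software described in this record; the function name and example data are hypothetical.

    def cohens_kappa(table):
        """Cohen's Kappa for a K x K contingency table (list of lists).

        table[i][j] = number of objects that rater A placed in
        category i and rater B placed in category j.
        """
        n = sum(sum(row) for row in table)
        K = len(table)
        # Observed proportion of agreement: the diagonal of the table.
        p_o = sum(table[i][i] for i in range(K)) / n
        # Agreement expected by chance: sum over categories of the
        # product of the two raters' marginal proportions.
        row_marg = [sum(table[i][j] for j in range(K)) / n for i in range(K)]
        col_marg = [sum(table[i][j] for i in range(K)) / n for j in range(K)]
        p_e = sum(row_marg[k] * col_marg[k] for k in range(K))
        # Kappa rescales observed agreement relative to chance.
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical example: two raters, three nominal categories.
    table = [[20, 5, 0],
             [3, 15, 2],
             [1, 4, 10]]
    print(cohens_kappa(table))

For this example, the observed agreement is 0.75 and the chance-expected agreement is 0.35, giving Kappa = (0.75 - 0.35) / (1 - 0.35), approximately 0.62.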
dc.language.iso: eng
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Agreement
dc.subject: Multiple-choice tests
dc.subject: software
dc.title: Delta. Comparison of agreement in nominal scale between two raters and assessment of the degree of knowledge in multiple-choice tests.
dc.type: other
dc.rights.accessRights: open access
dc.type.hasVersion: VoR

