Title: Delta. Comparison of agreement on a nominal scale between two raters and assessment of the degree of knowledge in multiple-choice tests
Authors: Femia Marzo, Pedro; Martín Andrés, Antonio
Keywords: Agreement; Multiple-choice tests; Software
Abstract: When two raters independently classify n objects into K nominal categories, the level of agreement between them is usually assessed by means of Cohen's Kappa coefficient. However, the Kappa coefficient has been the subject of several criticisms. Additionally, when a more detailed analysis is needed, the degree of agreement must be evaluated class by class, and traditionally, indexes that are not chance-corrected are used for this purpose. The Delta model does not share the aforementioned limitations of Kappa, and it allows chance-corrected measures of agreement to be defined class by class.
Date available: 2023-10-02T11:45:17Z
Date issued: 2023-02-02
Type: other
Citation: Femia Marzo, P. & Martín Andrés, A. (2023). Software Delta. Degree of agreement in nominal scale between two raters and assessment of the degree of knowledge in multiple-choice tests.
URI: https://hdl.handle.net/10481/84794
Language: eng
License: http://creativecommons.org/licenses/by-nc-nd/4.0/ (Attribution-NonCommercial-NoDerivatives 4.0 International)
Rights: open access
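For context on the baseline statistic the abstract refers to, below is a minimal Python sketch of Cohen's Kappa computed from a K x K contingency table of two raters' classifications. This illustrates the standard Kappa statistic only; it is not the authors' Delta software, and the table values are hypothetical.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's Kappa for a K x K contingency table in which cell (i, j)
    counts the objects placed in category i by rater 1 and j by rater 2."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_observed = np.trace(table) / n            # proportion of exact agreement
    row_marg = table.sum(axis=1) / n            # rater 1 category proportions
    col_marg = table.sum(axis=0) / n            # rater 2 category proportions
    p_expected = np.dot(row_marg, col_marg)     # agreement expected by chance
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical example: 3 categories, 100 objects classified by both raters
table = np.array([[30,  5,  2],
                  [ 4, 25,  6],
                  [ 1,  7, 20]])
print(round(cohens_kappa(table), 3))
```

Kappa corrects the observed proportion of agreement for the agreement expected by chance from the marginal distributions, which is the chance correction that the class-by-class Delta measures are also designed to provide.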