Multi-rater delta: extending the delta nominal measure of agreement between two raters to many raters
Metadata
Publisher
Taylor and Francis
Subject
Cohen's kappa; Conger's kappa; Delta agreement; Fleiss' kappa; Hubert's kappa; Nominal agreement
Date
2019
Bibliographic reference
Publisher version: A. Martín Andrés & M. Álvarez Hernández (2021). Multi-rater delta: extending the delta nominal measure of agreement between two raters to many raters. Journal of Statistical Computation and Simulation. DOI: 10.1080/00949655.2021.2013485
Sponsorship
Spanish Ministry of the Economy, Industry and Competitiveness under grant number MTM2016-76938-P (co-financed by funding from FEDER)
Abstract
The need to measure the degree of agreement among R raters who independently classify n subjects into K nominal categories arises in many scientific areas. The most popular measures are Cohen's kappa (R = 2) and the Fleiss' kappa, Conger's kappa and Hubert's kappa (R $\geq$ 2) coefficients, all of which have several defects. In 2004, the delta coefficient was defined for the case of R = 2; it does not have the defects of Cohen's kappa coefficient. This article extends the delta coefficient from R = 2 raters to R $\geq$ 2. The multi-rater delta coefficient has the same advantages over the kappa-type coefficients as the delta coefficient: (i) it is intuitive and easy to interpret, because it refers to the proportion of responses that are concordant and non-random; (ii) the summands that give its value allow the degree of agreement in each category to be measured accurately, with no need to collapse categories; and (iii) it is not affected by marginal imbalance.
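For orientation, here is a minimal sketch of how Cohen's kappa (the R = 2 case discussed above) is computed from a K x K contingency table. The function name and the example table are hypothetical, chosen so that the skewed marginals illustrate the sensitivity to marginal imbalance that, per point (iii) of the abstract, the delta coefficient avoids.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa for two raters from a K x K contingency table.

    table[i][j] = number of subjects that rater 1 placed in category i
    and rater 2 placed in category j. (Hypothetical helper for illustration.)
    """
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_o = np.trace(table) / n                              # observed agreement
    p_e = (table.sum(axis=1) @ table.sum(axis=0)) / n**2   # chance agreement from marginals
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: 2 raters, K = 2 categories, n = 100 subjects,
# with one dominant category (imbalanced marginals).
table = [[85, 5],
         [5, 5]]
print(round(cohens_kappa(table), 3))  # -> 0.444
```

With 90% observed agreement, kappa is only about 0.44, because the skewed marginals inflate the chance-agreement term p_e; this marginal-imbalance effect on the kappa-type coefficients is precisely what the abstract claims the multi-rater delta coefficient is free of.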