An extension on "statistical comparisons of classifiers over multiple data sets" for all pairwise comparisons
Keywords: Statistical methods; Non-parametric test; Multiple comparison tests; Adjusted p-values; Logically related hypotheses
García López, S.; Herrera, F. An extension on "statistical comparisons of classifiers over multiple data sets" for all pairwise comparisons. Journal of Machine Learning Research, 9: 2677-2694 (2008). [http://hdl.handle.net/10481/32916]
Sponsor: This research has been supported by the project TIN2005-08386-C05-01. S. García holds an FPU scholarship from the Spanish Ministry of Education and Science.
In a recently published JMLR paper, Demsar (2006) recommends a set of non-parametric statistical tests and procedures that can be safely used for comparing the performance of classifiers over multiple data sets. After studying that paper, we find that it correctly introduces the basic procedures, as well as some of the more advanced ones for comparisons against a control method; however, it does not treat certain advanced topics in depth. Focusing on these topics, we present more powerful statistical procedures for comparing n×n classifiers (all pairwise comparisons). Moreover, we illustrate an easy way of obtaining adjusted, directly comparable p-values in multiple comparison procedures.
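To illustrate the notion of adjusted p-values mentioned in the abstract, here is a minimal Python sketch (not the authors' code) of the Holm step-down adjustment, one of the classical post-hoc procedures for multiple pairwise comparisons. It converts a list of raw p-values into adjusted ones that can be compared directly against a single significance level; the example p-values are hypothetical.

```python
def holm_adjusted(pvals):
    """Holm step-down adjusted p-values for m simultaneous hypotheses.

    The i-th smallest raw p-value is multiplied by (m - i), the result is
    capped at 1, and monotonicity is enforced so that adjusted p-values
    never decrease along the ordering.
    """
    m = len(pvals)
    # Indices of the p-values sorted from smallest to largest.
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        candidate = min(1.0, (m - rank) * pvals[i])
        # Enforce monotonicity: an adjusted p-value cannot be smaller
        # than that of a more significant hypothesis.
        running_max = max(running_max, candidate)
        adjusted[i] = running_max
    return adjusted

# Hypothetical raw p-values from four pairwise classifier comparisons.
raw = [0.01, 0.04, 0.03, 0.005]
print(holm_adjusted(raw))  # each value is now comparable to alpha directly
```

Rejecting every hypothesis whose adjusted p-value falls below a chosen alpha reproduces the usual step-down Holm test, without having to track per-step thresholds.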