An extension on "statistical comparisons of classifiers over multiple data sets" for all pairwise comparisons
Metadata
Publisher
MIT Press
Subject
Statistical methods; Non-parametric tests; Multiple comparison tests; Adjusted p-values; Logically related hypotheses
Date
2008
Bibliographic reference
García López, S.; Herrera, F. An extension on "statistical comparisons of classifiers over multiple data sets" for all pairwise comparisons. Journal of Machine Learning Research, 9: 2677-2694 (2008). [http://hdl.handle.net/10481/32916]
Sponsor
This research has been supported by the project TIN2005-08386-C05-01. S. García holds an FPU scholarship from the Spanish Ministry of Education and Science.
Abstract
In a recently published paper in JMLR, Demšar (2006) recommends a set of non-parametric statistical tests and procedures that can be safely used for comparing the performance of classifiers over multiple data sets. After studying the paper, we find that it correctly introduces the basic procedures, and some of the most advanced ones, for comparisons against a control method. However, it does not deal with some advanced topics in depth. Regarding these topics, we focus on more powerful statistical procedures for comparing n*n classifiers. Moreover, we illustrate an easy way of obtaining adjusted and comparable p-values in multiple comparison procedures.
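The adjusted p-values the abstract refers to can be computed from the raw pairwise p-values by a step-down correction. As a minimal illustration (this is a generic Holm step-down adjustment, not the specific procedures proposed in the paper, and the sample p-values below are hypothetical):

```python
def holm_adjusted(pvalues):
    """Holm step-down adjusted p-values for m pairwise hypotheses.

    With sorted raw p-values p_(1) <= ... <= p_(m), the adjusted value is
    APV_(i) = max_{j <= i} min(1, (m - j + 1) * p_(j)),
    which can be compared directly against any significance level alpha.
    """
    m = len(pvalues)
    # Indices of the p-values in ascending order.
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # (m - rank) hypotheses remain at this step of the step-down procedure.
        candidate = min(1.0, (m - rank) * pvalues[i])
        # Enforce monotonicity of the adjusted p-values.
        running_max = max(running_max, candidate)
        adjusted[i] = running_max
    return adjusted

# Hypothetical raw p-values from six pairwise classifier comparisons.
raw = [0.001, 0.020, 0.030, 0.040, 0.045, 0.500]
print(holm_adjusted(raw))
```

An adjusted p-value below alpha rejects the corresponding pairwise hypothesis while controlling the family-wise error rate, without having to recompute thresholds for each alpha.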