MRPR: A MapReduce Solution for Prototype Reduction in Big Data Classification
Metadata
Author
Triguero, Isaac; Peralta, Daniel; Bacardit, Jaume; García López, Salvador; Herrera Triguero, Francisco
Publisher
Elsevier
Subject
Big Data; Mahout; Hadoop; Prototype reduction; Prototype generation; Nearest neighbor classification
Date
2014-03-03
Bibliographic reference
Published version: Triguero, I., Peralta, D., Bacardit, J., García, S., & Herrera, F. (2015). MRPR: A MapReduce solution for prototype reduction in big data classification. Neurocomputing, 150, 331-345. https://doi.org/10.1016/j.neucom.2014.04.078
Sponsorship
German Research Foundation (DFG); FPU12/04902; TIN2011-28488; P10-TIC-6858; P11-TIC-7765
Abstract
In the era of big data, analyzing and extracting knowledge from large-scale data sets is an interesting and challenging task, and applying standard data mining tools to such data sets is not straightforward. Hence, a new class of scalable mining methods that embraces the huge storage and processing capacity of cloud platforms is required. In this work, we propose a novel distributed partitioning methodology for prototype reduction techniques in nearest neighbor classification. These techniques aim to represent the original training data set with a reduced number of instances; their main purposes are to speed up the classification process and to reduce both the storage requirements and the sensitivity to noise of the nearest neighbor rule. However, standard prototype reduction methods cannot cope with very large data sets. To overcome this limitation, we develop a MapReduce-based framework that distributes the functioning of these algorithms across a cluster of computing elements, and we propose several algorithmic strategies to integrate multiple partial solutions (reduced sets of prototypes) into a single one. The proposed model enables prototype reduction algorithms to be applied to big data classification problems without significant loss of accuracy. We test the speed-up capabilities of our model on data sets of up to 5.7 million instances. The results show that the model is a suitable tool for enhancing the performance of the nearest neighbor classifier on big data.
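The abstract describes a two-stage, MapReduce-style flow: partition the training set, apply a prototype reduction method to each partition in the map phase, fuse the partial reduced sets in the reduce phase, and classify with the nearest neighbor rule over the fused set. The Python sketch below illustrates that flow under stated assumptions: the per-partition reducer (class-wise k-means centroids) and the concatenation-based fusion are illustrative stand-ins, not the paper's MRPR algorithms or its Hadoop/Mahout implementation.

```python
# Minimal sketch of a MapReduce-style prototype reduction workflow:
# split the training set, reduce each split to a small prototype set
# (map phase), fuse the partial reduced sets (reduce phase), then
# classify new queries with the 1-NN rule over the fused prototypes.
import numpy as np

def reduce_partition(X, y, prototypes_per_class=10, iters=10, seed=0):
    """Map phase: compress one data partition into a few prototypes via a
    simple per-class k-means (a stand-in for any prototype reduction method)."""
    rng = np.random.default_rng(seed)
    proto_X, proto_y = [], []
    for label in np.unique(y):
        Xc = X[y == label]
        k = min(prototypes_per_class, len(Xc))
        centers = Xc[rng.choice(len(Xc), k, replace=False)]
        for _ in range(iters):
            assign = np.argmin(((Xc[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                members = Xc[assign == j]
                if len(members):
                    centers[j] = members.mean(axis=0)
        proto_X.append(centers)
        proto_y.append(np.full(k, label))
    return np.vstack(proto_X), np.concatenate(proto_y)

def join_reducer(partial_sets):
    """Reduce phase: integrate the partial reduced sets into a single one
    (plain concatenation, i.e. a 'join'-style integration strategy)."""
    Xs, ys = zip(*partial_sets)
    return np.vstack(Xs), np.concatenate(ys)

def one_nn_predict(proto_X, proto_y, queries):
    """Classify queries with the nearest neighbor rule over the fused prototypes."""
    dists = ((queries[:, None] - proto_X[None]) ** 2).sum(-1)
    return proto_y[np.argmin(dists, axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    X = rng.normal(size=(6000, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    # Partition the training set, map the reduction over each chunk, then reduce.
    chunks = np.array_split(np.arange(len(X)), 8)
    partial = [reduce_partition(X[idx], y[idx], seed=i) for i, idx in enumerate(chunks)]
    proto_X, proto_y = join_reducer(partial)
    queries = rng.normal(size=(200, 4))
    print(one_nn_predict(proto_X, proto_y, queries)[:10])
```

In the actual framework the map and reduce stages would run as distributed Hadoop jobs over a cluster rather than as a sequential loop, and the paper evaluates several fusion strategies beyond simple concatenation.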