Principal Components Analysis Random Discretization Ensemble for Big Data
Identifiers
URI: https://hdl.handle.net/10481/99369
Author
García Gil, Diego Jesús; Ramírez-Gallego, Sergio; García López, Salvador; Herrera Triguero, Francisco
Journal
Knowledge-Based Systems
Subject
Big data; Discretization; Spark; Decision tree; PCA; Data reduction
Date
2018-06
Bibliographic reference
García-Gil, D., Ramírez-Gallego, S., García, S., & Herrera, F. (2018). Principal components analysis random discretization ensemble for big data. Knowledge-Based Systems, 150, 166-174.
Sponsorship
This work is supported by FEDER, the Spanish National Research Projects TIN2014-57251-P and TIN2017-89517-P, and the project BigDaP-TOOLS - Ayudas Fundación BBVA a Equipos de Investigación Científica 2016.
Abstract
Huge amounts of data have created many challenges for data computation and analysis. Classic data mining techniques are not prepared for the new space and time requirements. Discretization and dimensionality reduction are two of the data reduction tasks in knowledge discovery. Random Projection Random Discretization is an ensemble method, proposed by Ahmad and Brown in 2014, that performs discretization and dimensionality reduction to create more informative data. Despite the efficiency of random projections for dimensionality reduction, more robust methods such as Principal Components Analysis (PCA) can improve performance.
We propose a new ensemble method to overcome this drawback, built on the Apache Spark platform and using PCA for dimensionality reduction, named Principal Components Analysis Random Discretization Ensemble. Experimental results on five large-scale datasets show that our solution outperforms both the original algorithm and Random Forest in prediction performance. Results also show that high-dimensional data can affect the algorithm's runtime.
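The core idea described in the abstract, reducing dimensionality with PCA and then applying random discretization to build each ensemble member, can be illustrated with a minimal NumPy sketch. This is not the authors' Spark implementation; the function names, the SVD-based PCA, and the uniform random cut points are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_project(X, k):
    """Project X onto its top-k principal components (via SVD)."""
    Xc = X - X.mean(axis=0)                  # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                     # scores on the first k components

def random_discretize(X, n_bins):
    """Discretize each column using randomly drawn cut points (illustrative)."""
    out = np.empty(X.shape, dtype=int)
    for j in range(X.shape[1]):
        lo, hi = X[:, j].min(), X[:, j].max()
        cuts = np.sort(rng.uniform(lo, hi, size=n_bins - 1))
        out[:, j] = np.digitize(X[:, j], cuts)
    return out

# One ensemble member: PCA-reduced data, then randomly discretized.
# A full ensemble would repeat this with fresh random cut points and
# train a base classifier (e.g. a decision tree) on each member.
X = rng.normal(size=(100, 10))
member = random_discretize(pca_project(X, k=3), n_bins=5)
print(member.shape)  # (100, 3)
```

In the paper's setting, this pipeline runs distributed on Spark over large-scale data; the sketch above only conveys the data transformation that each ensemble member applies.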