Show simple item record

dc.contributor.author: Luengo, Julián
dc.contributor.author: Sultan Mahmud, Mohammad
dc.contributor.author: Zheng, Hua
dc.contributor.author: García-Gil, Diego
dc.contributor.author: García, Salvador
dc.contributor.author: Zhexue Huang, Joshua
dc.date.accessioned: 2025-06-23T07:06:19Z
dc.date.available: 2025-06-23T07:06:19Z
dc.date.issued: 2025-05
dc.identifier.uri: https://hdl.handle.net/10481/104731
dc.description.abstract: Large-scale data clustering needs an approximate approach for improving computation efficiency and data scalability. In this paper, we propose a novel method for ensemble clustering of large-scale datasets that uses the Random Sample Partition and Clustering Approximation (RSPCA) to tackle the problems of big data computing in cluster analysis. In the RSPCA computing framework, a big dataset is first partitioned into a set of disjoint random samples, called RSP data blocks, whose distributions remain consistent with that of the original big dataset. In ensemble clustering, a few RSP data blocks are randomly selected, and a clustering operation is performed independently on each data block to generate its clustering result. The clustering results of all selected data blocks are aggregated into the ensemble result, which serves as an approximate result for the entire big dataset. To improve the robustness of the ensemble result, the ensemble clustering process can be conducted incrementally using multiple batches of selected RSP data blocks. To improve computation efficiency, we use the I-niceDP algorithm to automatically find the number of clusters in RSP data blocks and the k-means algorithm to determine more accurate cluster centroids in RSP data blocks as inputs to the ensemble process. Spectral and correlation clustering methods are used as the consensus functions to handle irregular clusters. Comprehensive experimental results on both real and synthetic datasets demonstrate that the ensemble of clustering results on a few RSP data blocks is sufficient for a good global discovery of the entire big dataset, and that the new approach is computationally efficient and scalable to big data.
dc.description.sponsorship: This research has been supported by the Key Basic Research Foundation of Shenzhen under Grant No. JCYJ20220818100205012
dc.description.sponsorship: Partially supported by Project PID2023-150070NB-I00, funded by MICINN/AEI
dc.description.sponsorship: Part of the R+D+i project C-ING-250-UGR23, co-funded by the Consejería de Universidad, Investigación e Innovación and by the European Union under the FEDER Andalucía Program 2021-2027
dc.language.iso: eng
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Clustering approximation
dc.subject: Ensemble clustering
dc.subject: Incremental clustering
dc.subject: Ensemble learning
dc.title: RSPCA: Random Sample Partition and Clustering Approximation for ensemble learning of big data
dc.type: journal article
dc.rights.accessRights: open access
dc.identifier.doi: https://doi.org/10.1016/j.patcog.2024.111321
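The RSPCA workflow summarized in the abstract (partition into disjoint RSP blocks, cluster a few blocks independently, then combine the block results) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: plain k-means stands in for the I-niceDP + k-means block clustering, and the consensus step simply re-clusters the pooled block centroids rather than using the spectral or correlation consensus functions. All function names here are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means; returns the final centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = np.argmin(((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):              # keep old centroid if a cluster empties
                centroids[j] = members.mean(axis=0)
    return centroids

def rspca_sketch(X, n_blocks=10, blocks_per_batch=3, k=2, seed=0):
    rng = np.random.default_rng(seed)
    shuffled = X[rng.permutation(len(X))]
    blocks = np.array_split(shuffled, n_blocks)   # disjoint RSP data blocks
    chosen = rng.choice(n_blocks, size=blocks_per_batch, replace=False)
    # Cluster each selected block independently, then pool the block centroids.
    pooled = np.vstack([kmeans(blocks[b], k, seed=seed) for b in chosen])
    # Consensus step: re-cluster the pooled centroids (a simple stand-in).
    return kmeans(pooled, k, seed=seed)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (500, 2)),    # cluster around (0, 0)
               rng.normal(3.0, 0.3, (500, 2))])   # cluster around (3, 3)
centers = rspca_sketch(X)
```

Because each RSP block is a random sample with the same distribution as the full dataset, clustering only `blocks_per_batch` of the `n_blocks` blocks already approximates the global structure, which is the key efficiency argument of the abstract.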


Files in this item

[PDF]

This item appears in the following collection(s)


Attribution-NonCommercial-NoDerivatives 4.0 International
Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International