Show simple item record

dc.contributor.author: Capel Tuñón, Manuel Isidoro
dc.date.accessioned: 2024-09-04T09:57:20Z
dc.date.available: 2024-09-04T09:57:20Z
dc.date.issued: 2024-08-26
dc.identifier.uri: https://hdl.handle.net/10481/93905
dc.description.abstract: The training phase of a deep learning neural network (DLNN) is a computationally demanding process, particularly for models comprising multiple layers of intermediate neurons. This paper presents a novel approach to accelerating DLNN training using the particle swarm optimisation (PSO) algorithm, which exploits the GPGPU architecture and the Apache Spark analytics engine for large-scale data processing tasks. PSO is a bio-inspired stochastic optimisation method whose objective is to iteratively enhance the solution to a (usually complex) problem by approximating a given objective. The expensive fitness evaluation and updating of particle positions can be supported more effectively by parallel processing. Nevertheless, parallelising PSO efficiently is not simple, owing to the complexity of the computations performed on the swarm of particles and the iterative execution of the algorithm until a solution close to the objective, with minimal error, is achieved. In this study, two forms of parallelisation have been developed for the PSO algorithm, both designed for execution in a distributed environment. The synchronous parallel PSO implementation guarantees consistency but may incur idle time due to global synchronisation. In contrast, the asynchronous parallel PSO approach reduces the need for global synchronisation, thereby improving execution time and making it more suitable for large datasets and distributed environments such as Apache Spark. The two variants of PSO have been implemented with the objective of distributing the computational load of the algorithm across the executor nodes of the Spark cluster, effectively achieving coarse-grained parallelism. The result is a significant performance improvement over current sequential variants of PSO. (An illustrative Spark sketch of the synchronous variant appears after this record.)
dc.description.sponsorship: Sistemas Concurrentes (TIC-157)
dc.description.sponsorship: Ministerio de Ciencia e Innovación (PID2020-112495RB-C21)
dc.language.iso: eng
dc.publisher: MDPI
dc.rights: Attribution-NonCommercial 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc/4.0/
dc.subject: Apache Spark
dc.subject: classification recall
dc.subject: deep neural networks
dc.subject: GPU parallelism
dc.subject: optimization research
dc.subject: particle swarm optimization (PSO)
dc.subject: predictive accuracy
dc.title: Parallel PSO for Efficient Neural Network Training Using GPGPU and Apache Spark in Edge Computing Sets
dc.type: journal article
dc.rights.accessRights: open access
dc.identifier.doi: https://doi.org/10.3390/a17090378
dc.type.hasVersion: AM
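
The abstract describes distributing the swarm's fitness evaluation and position updates across Spark executor nodes, with the synchronous variant imposing a global barrier each iteration. Below is a minimal, hypothetical PySpark sketch of that synchronous scheme, written to illustrate the idea only: the sphere function stands in for the paper's DLNN training loss, and all names, constants, and the partition count are illustrative assumptions, not taken from the article.

# Hypothetical sketch: synchronous parallel PSO on Apache Spark (PySpark).
# Each task advances one slice of the swarm in parallel; the driver then
# reduces to the new global best and broadcasts it back each iteration.
import numpy as np
from pyspark.sql import SparkSession

DIM, SWARM, ITERS, PARTS = 10, 256, 50, 8
W, C1, C2 = 0.72, 1.49, 1.49          # inertia, cognitive and social weights

def fitness(x):
    # Stand-in objective (sphere function); the paper would evaluate the
    # DLNN training loss at weight vector x here.
    return float(np.sum(x * x))

def init_particle(rng):
    x = rng.uniform(-5.0, 5.0, DIM)
    v = rng.uniform(-1.0, 1.0, DIM)
    return (x, v, x.copy(), fitness(x))   # (position, velocity, pbest, pbest_f)

def step(p, gbest):
    # Standard PSO velocity/position update plus personal-best bookkeeping.
    x, v, pbest, pbest_f = p
    r1, r2 = np.random.rand(DIM), np.random.rand(DIM)
    v = W * v + C1 * r1 * (pbest - x) + C2 * r2 * (gbest - x)
    x = x + v
    f = fitness(x)
    if f < pbest_f:
        pbest, pbest_f = x.copy(), f
    return (x, v, pbest, pbest_f)

spark = SparkSession.builder.appName("sync-pso-sketch").getOrCreate()
sc = spark.sparkContext

rng = np.random.default_rng(0)
swarm = [init_particle(rng) for _ in range(SWARM)]
best = min(swarm, key=lambda p: p[3])
gbest, gbest_f = best[2].copy(), best[3]

for _ in range(ITERS):
    gb = sc.broadcast(gbest)              # ship current global best to executors
    # Parallel phase: one task per partition advances its slice of the swarm
    # (the coarse-grained parallelism the abstract describes).
    swarm = (sc.parallelize(swarm, PARTS)
               .map(lambda p: step(p, gb.value))
               .collect())
    # Global synchronisation barrier: the driver must see every particle
    # before the next iteration starts, which is where idle time can appear.
    best = min(swarm, key=lambda p: p[3])
    if best[3] < gbest_f:
        gbest, gbest_f = best[2].copy(), best[3]

print("best fitness found:", gbest_f)
spark.stop()

The asynchronous variant the abstract contrasts with this would relax the collect() barrier, for example by letting partitions report improved personal bests as they finish and updating the shared global best without waiting for the whole swarm, trading some consistency for better executor utilisation.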


Files in this item

[PDF]

This item appears in the following collection(s)

Attribution-NonCommercial 4.0 International
Except where otherwise noted, this item's license is described as Attribution-NonCommercial 4.0 International