A parallel approach to accelerate neural network hyperparameter selection for energy forecasting
Metadata
Publisher
Elsevier
Date
2025-06-15
Bibliographic reference
D. Criado-Ramón, L.G.B. Ruiz, M.C. Pegalajar, A parallel approach to accelerate neural network hyperparameter selection for energy forecasting, Expert Systems with Applications, Volume 279, 2025, 127386, ISSN 0957-4174, https://doi.org/10.1016/j.eswa.2025.127386
Sponsorship
MCIN/AEI/10.13039/501100011033 PID2020-112495RB-C21
Abstract
Finding the optimal hyperparameters of a neural network is a challenging task, usually approached through trial and error. Given the computational cost of training even a single neural network, particularly one with a complex architecture and a large input size, many implementations accelerated with GPUs (Graphics Processing Units) and distributed and parallel technologies have come to light over the past decade. However, when the neural network architecture is simple and the number of features per sample is small, these implementations become lackluster, providing almost no benefit over using the CPU (Central Processing Unit) alone. As such, in this paper, we propose a novel parallelized approach that leverages GPU resources to train multiple neural networks with different hyperparameters simultaneously, maximizing resource utilization for smaller networks. The proposed method is evaluated on energy demand datasets from Spain and Uruguay, demonstrating consistent speedups of up to 1164x over TensorFlow and 410x over PyTorch.
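As a rough illustration of the general idea (not the paper's implementation), the sketch below uses JAX's vmap to train several small forecasting MLPs with different learning rates as a single batched GPU computation, rather than launching one tiny kernel sequence per candidate. The network sizes, hyperparameter grid, and toy data are all hypothetical.

# Minimal sketch, assuming small MLPs and a learning-rate sweep;
# this is an illustration of batched hyperparameter training, not the
# authors' method.
import jax
import jax.numpy as jnp

def init_params(key, n_in, n_hidden):
    k1, k2 = jax.random.split(key)
    return {
        "w1": jax.random.normal(k1, (n_in, n_hidden)) * 0.1,
        "b1": jnp.zeros(n_hidden),
        "w2": jax.random.normal(k2, (n_hidden, 1)) * 0.1,
        "b2": jnp.zeros(1),
    }

def predict(params, x):
    h = jnp.tanh(x @ params["w1"] + params["b1"])
    return h @ params["w2"] + params["b2"]

def loss(params, x, y):
    return jnp.mean((predict(params, x) - y) ** 2)

def sgd_step(params, lr, x, y):
    grads = jax.grad(loss)(params, x, y)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

# One training step for *all* candidate networks at once: vmap over the
# leading axis of the parameter pytree and the learning-rate vector, so
# the GPU executes one large batched computation instead of many small ones.
batched_step = jax.jit(jax.vmap(sgd_step, in_axes=(0, 0, None, None)))

key = jax.random.PRNGKey(0)
lrs = jnp.array([1e-3, 3e-3, 1e-2, 3e-2])   # hypothetical hyperparameter grid
keys = jax.random.split(key, len(lrs))
params = jax.vmap(init_params, in_axes=(0, None, None))(keys, 24, 32)

x = jax.random.normal(key, (256, 24))        # toy stand-in for demand features
y = jax.random.normal(key, (256, 1))
for _ in range(100):
    params = batched_step(params, lrs, x, y)

Because the candidate networks share shapes, the batched step fuses their forward and backward passes into a few large GPU operations, which is what lets small models saturate the device; hyperparameters that change tensor shapes (e.g., hidden size) would need separate batches in this scheme.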