A parallel approach to accelerate neural network hyperparameter selection for energy forecasting

Authors: Criado Ramón, David; Baca Ruiz, Luis Gonzaga; Pegalajar Jiménez, María del Carmen

Funding: We acknowledge financial support from the Ministerio de Ciencia e Innovación (Spain) (Grant PID2020-112495RB-C21 funded by MCIN/AEI/10.13039/501100011033).

Abstract: Finding the optimal hyperparameters of a neural network is a challenging task, usually approached through trial and error. Given the cost of training even a single neural network, particularly one with a complex architecture and a large input size, many implementations accelerated with GPUs (Graphics Processing Units) and distributed and parallel technologies have come to light over the past decade. However, when the neural network architecture is simple and the number of features per sample is small, these implementations offer little to no benefit over simply using the CPU (Central Processing Unit). As such, in this paper, we propose a novel parallelized approach that leverages GPU resources to train multiple neural networks with different hyperparameters simultaneously, maximizing resource utilization for smaller networks. The proposed method is evaluated on energy demand datasets from Spain and Uruguay, demonstrating consistent speedups of up to 1164x over TensorFlow and 410x over PyTorch.

Deposited: 2025-04-23
Published: 2025-06-15
Type: journal article
Citation: D. Criado-Ramón, L.G.B. Ruiz, M.C. Pegalajar, A parallel approach to accelerate neural network hyperparameter selection for energy forecasting, Expert Systems with Applications, Volume 279, 2025, 127386, ISSN 0957-4174, https://doi.org/10.1016/j.eswa.2025.127386
Handle: https://hdl.handle.net/10481/103741
DOI: 10.1016/j.eswa.2025.127386
Language: English
License: Attribution-NonCommercial-NoDerivatives 4.0 International (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Access: open access
Publisher: Elsevier
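
Note: the paper's own implementation is not reproduced here. As a rough illustration of the underlying idea described in the abstract (batching the training of many small networks on a single GPU instead of training them one at a time), the following is a minimal sketch using PyTorch's torch.func utilities (vmap, grad, stack_module_state, functional_call). All sizes, names, and the choice of the learning rate as the varied hyperparameter are illustrative assumptions, not the authors' method.

import torch
from torch import nn
from torch.func import functional_call, grad, stack_module_state, vmap

N_MODELS, IN_DIM, HIDDEN, BATCH = 64, 24, 32, 256  # illustrative sizes
device = "cuda" if torch.cuda.is_available() else "cpu"

def make_mlp():
    # A deliberately small network, the regime the paper targets.
    return nn.Sequential(nn.Linear(IN_DIM, HIDDEN), nn.Tanh(),
                         nn.Linear(HIDDEN, 1)).to(device)

# Stack the parameters of all candidate models along a new leading dimension.
models = [make_mlp() for _ in range(N_MODELS)]
params, buffers = stack_module_state(models)
params = {k: v.detach() for k, v in params.items()}

# A stateless "template" module, evaluated with whichever parameters we pass in.
base = make_mlp().to("meta")

def loss_fn(p, b, x, y):
    pred = functional_call(base, (p, b), (x,))
    return torch.mean((pred.squeeze(-1) - y) ** 2)

# Per-model gradients in one batched GPU call; the data batch is shared (None).
grad_fn = vmap(grad(loss_fn), in_dims=(0, 0, None, None))

# Example hyperparameter that varies across models: the learning rate.
lrs = torch.logspace(-4, -1, N_MODELS, device=device)

x = torch.randn(BATCH, IN_DIM, device=device)  # placeholder data
y = torch.randn(BATCH, device=device)

for _ in range(100):  # plain SGD applied to all models in lockstep
    grads = grad_fn(params, buffers, x, y)
    for k in params:
        lr = lrs.view(-1, *([1] * (params[k].dim() - 1)))  # broadcast per model
        params[k] = params[k] - lr * grads[k]

This sketch only varies a shape-independent hyperparameter, since vmap requires every stacked parameter tensor to have identical dimensions; batching across architectural hyperparameters (e.g., hidden-layer sizes), as the paper's evaluation implies, would require a more general batching scheme than shown here.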