Optimizing dense feed-forward neural networks
Metadata
Date: 2023-12-11
Sponsor: TIN2016-81113-R; PID2020-118224RB-I00; P18-TP-5168

Abstract
Deep learning models have been widely used during the last decade due to their outstanding learning and abstraction capacities. However, one of the main challenges any scientist faces when using deep learning models is establishing the network's architecture. Because of this difficulty, data scientists usually build overly complex models; as a result, most of them are computationally intensive and impose a large memory footprint, generating huge costs, contributing to climate change, and hindering their use on computationally limited devices. In this paper, we propose a novel feed-forward neural network constructing method based on pruning and transfer learning. Its performance has been thoroughly assessed on classification and regression problems. Without any accuracy loss, our approach can reduce the number of parameters by more than 70%. Furthermore, when the pruning parameter is chosen carefully, most of the refined models outperform the original ones. We also evaluate the level of transfer learning by comparing the refined model against a neural network with the same hyperparameters as the optimized model but trained from scratch. The results obtained show that our constructing method helps in the design of models that are not only more efficient but also more effective.
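
As context for the prune-and-transfer idea summarized above, the sketch below shows one common way to prune a dense feed-forward network by weight magnitude in PyTorch. It is a minimal illustration, not the paper's exact procedure: the three-layer architecture, the choice of L1 unstructured pruning, and the use of the paper's reported 70% figure as the pruning ratio are all assumptions made for the example.

```python
# Minimal sketch: magnitude-based pruning of a dense feed-forward network.
# Assumptions (not from the paper): the architecture below, L1 unstructured
# pruning, and a 0.7 pruning ratio borrowed from the paper's reported
# compression figure for illustration only.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical dense feed-forward classifier.
model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

# Zero out the 70% of weights with the smallest magnitude in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.7)
        prune.remove(module, "weight")  # make the pruning permanent

# The surviving weights act as transferred knowledge: fine-tuning the pruned
# model starts from them rather than from a random initialization.
kept = sum((p != 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"Non-zero parameters: {kept}/{total} ({kept / total:.1%})")
```

After a pass like this, the pruned model would typically be fine-tuned on the same task and compared against a same-sized network trained from scratch, which mirrors the transfer-learning evaluation described in the abstract.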