Title: Improved Deep Neural Network Performance under Dynamic Programming Mode
Author: García Cabello, Julia
Keywords: Separable function; Principle of optimality; Composition of parametric functions; Universal approximators of continuous functions
Funding: Financial support from the Spanish Ministry of Universities, project "Disruptive group decision making systems in fuzzy context: Applications in smart energy and people analytics" (PID2019-103880RB-I00), Main Investigator: Enrique Herrera Viedma; from Junta de Andalucía "Excellence Groups", Spain (P12.SEJ.2463); and from Junta de Andalucía, Spain (TIC186), is gratefully acknowledged. Research partially supported by the "María de Maeztu" Excellence Unit IMAG, reference CEX2020-001105-M, funded by MCIN/AEI/10.13039/501100011033.
Abstract: For Deep Neural Networks (DNNs), standard gradient-based algorithms may be inefficient because of the increased computational cost that comes with a growing number of layers. This paper offers an alternative to the classic training solutions: an in-depth study of the conditions under which the underlying Artificial Neural Network (ANN) minimisation problem can be addressed from a Dynamic Programming (DP) perspective. Specifically, we prove that any ANN with monotonic activation is separable when regarded as a parametric function. In particular, when the ANN is viewed as a network representation of a dynamical system (as a coupled cell network), we also prove that the transmission-of-signal law is separable provided the activation function is monotone non-decreasing. This strategy may have a positive impact on the performance of ANNs by improving their learning accuracy, particularly for DNNs. For our purposes, ANNs are also viewed as universal approximators of continuous functions and as abstract compositions of an even number of functions.
This broader representation makes it easier to analyse them from many other perspectives (universal approximation issues, inverse problem solving), leading to a general improvement in knowledge of NNs and their performance.
Date available: 2023-12-14T09:22:30Z
Date issued: 2023-11
Type: journal article
Citation: Cabello, J. G. (2023). Improved deep neural network performance under dynamic programming mode. Neurocomputing, 559, 126785. https://doi.org/10.1016/j.neucom.2023.126785
URI: https://hdl.handle.net/10481/86197
DOI: 10.1016/j.neucom.2023.126785
Language: eng
Rights: open access; Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License (http://creativecommons.org/licenses/by-nc-nd/3.0/)
Publisher: Elsevier
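The abstract's central observation is that a feed-forward ANN is a composition of parametric stage functions, so its forward pass is a stage-by-stage recursion of the kind Dynamic Programming exploits. The following minimal Python sketch illustrates that structural view only; the class and function names are illustrative assumptions, not the paper's notation or method, and the scalar layers are a deliberate simplification.

```python
import math

def monotone_activation(x):
    # tanh is monotone non-decreasing, matching the paper's
    # hypothesis on the activation function
    return math.tanh(x)

class Stage:
    """One layer viewed as one DP stage: x_{k+1} = sigma(w_k * x_k + b_k)."""
    def __init__(self, w, b):
        self.w, self.b = w, b
    def __call__(self, x):
        return monotone_activation(self.w * x + self.b)

def forward(stages, x0):
    # The network output is the composition f_L(... f_1(x0) ...):
    # the state after stage k depends only on the state after stage
    # k-1, which is the separability that a principle-of-optimality
    # (DP) treatment requires.
    x = x0
    for stage in stages:
        x = stage(x)
    return x

# A toy three-layer network as a sequence of stages
stages = [Stage(0.5, 0.1), Stage(-1.2, 0.0), Stage(0.8, -0.3)]
y = forward(stages, 1.0)
```

The point of the sketch is only the shape of the computation: because each stage reads nothing but the previous stage's output, the composed forward pass agrees with applying the layers one at a time, which is the structural precondition for the DP formulation studied in the paper.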