TY - GEN
AU - García Cabello, Julia
PY - 2023
UR - https://hdl.handle.net/10481/86197
AB - For Deep Neural Networks (DNN), the standard gradient-based algorithms may not be efficient because of the raised computational expense resulting from the increase in the number of layers. This paper offers an alternative to the classic training...
LA - eng
PB - Elsevier
KW - Separable function
KW - Principle of optimality
KW - Composition of parametric functions
KW - Universal approximators of continuous functions
TI - Improved Deep Neural Network Performance under Dynamic Programming mode
DO - 10.1016/j.neucom.2023.126785
ER -