
dc.contributor.author: García Cabello, Julia
dc.date.accessioned: 2023-12-14T09:22:30Z
dc.date.available: 2023-12-14T09:22:30Z
dc.date.issued: 2023-11
dc.identifier.citation: Cabello, J. G. (2023). Improved deep neural network performance under dynamic programming mode. Neurocomputing, 559, 126785. https://doi.org/10.1016/j.neucom.2023.126785
dc.identifier.uri: https://hdl.handle.net/10481/86197
dc.description: Financial support from the Spanish Ministry of Universities; the project "Disruptive group decision making systems in fuzzy context: Applications in smart energy and people analytics" (PID2019-103880RB-I00, Main Investigator: Enrique Herrera Viedma); the Junta de Andalucía "Excellence Groups", Spain (P12.SEJ.2463); and the Junta de Andalucía, Spain (TIC186) is gratefully acknowledged. Research partially supported by the "Maria de Maeztu" Excellence Unit IMAG, reference CEX2020-001105-M, funded by MCIN/AEI/10.13039/501100011033.
dc.description.abstract: For Deep Neural Networks (DNNs), standard gradient-based algorithms may not be efficient because of the increased computational expense resulting from the growing number of layers. This paper offers an alternative to the classic training solutions: an in-depth study of the conditions under which the underlying Artificial Neural Network (ANN) minimisation problem can be addressed from a Dynamic Programming (DP) perspective. Specifically, we prove that any ANN with monotonic activation is separable when regarded as a parametric function. In particular, when the ANN is viewed as a network representation of a dynamical system (as a coupled cell network), we also prove that the transmission-of-signal law is separable provided the activation function is monotone non-decreasing. This strategy may have a positive impact on the performance of ANNs by improving their learning accuracy, particularly for DNNs. For our purposes, ANNs are also viewed as universal approximators of continuous functions and as abstract compositions of an even number of functions. This broader representation makes them easier to analyse from many other perspectives (universal approximation issues, inverse problem solving), leading to a general improvement in knowledge of NNs and their performance.
dc.description.sponsorship: Spanish Ministry of Universities
dc.description.sponsorship: PID2019-103880RB-I00
dc.description.sponsorship: "Excellence Groups", Spain (P12.SEJ.2463)
dc.description.sponsorship: Junta de Andalucía, Spain (TIC186)
dc.description.sponsorship: MCIN/AEI/10.13039/501100011033: CEX2020-001105-M
dc.language.iso: eng
dc.publisher: Elsevier
dc.rights: Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/
dc.subject: Separable function
dc.subject: Principle of optimality
dc.subject: Composition of parametric functions
dc.subject: Universal approximators of continuous functions
dc.title: Improved Deep Neural Network Performance under Dynamic Programming mode
dc.type: journal article
dc.rights.accessRights: open access
dc.identifier.doi: 10.1016/j.neucom.2023.126785
dc.type.hasVersion: VoR
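As a sketch of the idea described in the abstract above: a feedforward ANN can be written as a composition of parametric layer maps, and when the training objective separates stagewise it admits a Bellman-type recursion, which is what makes a Dynamic Programming treatment possible. The notation below (f_k, \theta_k, \ell, V_k, \sigma) is illustrative only and is not taken from the paper; the precise separability conditions are those the paper proves for monotone non-decreasing activations.

\[
  \hat{y}(x;\theta_1,\dots,\theta_L)
  = f_L\bigl(f_{L-1}(\cdots f_1(x;\theta_1)\cdots;\theta_{L-1});\theta_L\bigr),
  \qquad
  f_k(z;\theta_k) = \sigma(W_k z + b_k).
\]

If the objective decomposes layer by layer, the minimisation can proceed by a backward Bellman recursion instead of a joint gradient step over all parameters:

\[
  V_{L+1}(z) = \ell(z,y),
  \qquad
  V_k(z) = \min_{\theta_k}\, V_{k+1}\bigl(f_k(z;\theta_k)\bigr),
  \quad k = L,\dots,1,
\]

so that \(V_1(x)\) is the optimal value of the full training problem, built up one layer (stage) at a time.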


File(s) in this item

[PDF]


Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License
Except where otherwise noted, this item's license is described as Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License