
dc.contributor.author: Romero Zaliz, Rocio Celeste
dc.contributor.author: Cantudo, Antonio
dc.contributor.author: Jiménez Molinos, Francisco
dc.contributor.author: Roldán Aranda, Juan Bautista
dc.date.accessioned: 2022-01-07T08:39:35Z
dc.date.available: 2022-01-07T08:39:35Z
dc.date.issued: 2021-12-17
dc.identifier.citation: Romero-Zaliz, R... [et al.]. An Analysis on the Architecture and the Size of Quantized Hardware Neural Networks Based on Memristors. Electronics 2021, 10, 3141. [https://doi.org/10.3390/electronics10243141]
dc.identifier.uri: http://hdl.handle.net/10481/72221
dc.description: The authors acknowledge financial support by the German Research Foundation (DFG) under Project 434434223-SFB1461, by the Federal Ministry of Education and Research of Germany under Grant 16ME0092, by the Consejería de Conocimiento, Investigación y Universidad, Junta de Andalucía (Spain) and the European Regional Development Fund (ERDF) under projects A-TIC-117-UGR18, B-TIC-624-UGR20 and IE2017-5414, as well as by the Spanish Ministry of Science, Innovation and Universities and the ERDF fund under projects RTI2018-098983-B-I00 and TEC2017-84321-C4-3-R.
dc.description.abstract: We have performed different simulation experiments on hardware neural networks (NNs) to analyze the role of the number of synapses in network accuracy for different NN architectures, considering different datasets. A technology that stands upon 4-kbit 1T1R ReRAM arrays, where resistive switching devices based on HfO2 dielectrics are employed, is taken as a reference. In our study, fully dense (FdNN) and convolutional neural networks (CNNs) were considered, and the NN size, in terms of the number of synapses and of hidden-layer neurons, was varied. CNNs work better when the number of synapses to be used is limited. If quantized synaptic weights are included, we observed that NN accuracy decreases significantly as the number of synapses is reduced; in this respect, a trade-off between the number of synapses and the NN accuracy has to be achieved. Consequently, the CNN architecture must be carefully designed; in particular, it was noticed that different datasets need specific architectures according to their complexity to achieve good results. It was shown that, due to the number of variables that can be changed in the optimization of a NN hardware implementation, a specific solution has to be worked out in each case in terms of synaptic weight levels, NN architecture, etc.
dc.description.sponsorship: German Research Foundation (DFG) under Project 434434223-SFB1461
dc.description.sponsorship: Federal Ministry of Education and Research of Germany under Grant 16ME0092
dc.description.sponsorship: Consejería de Conocimiento, Investigación y Universidad, Junta de Andalucía (Spain) and European Regional Development Fund (ERDF) under projects A-TIC-117-UGR18, B-TIC-624-UGR20 and IE2017-5414
dc.description.sponsorship: Spanish Ministry of Science, Innovation and Universities and ERDF fund under projects RTI2018-098983-B-I00 and TEC2017-84321-C4-3-R
dc.language.iso: eng
dc.publisher: MDPI
dc.rights: Atribución 3.0 España (Attribution 3.0 Spain)
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/es/
dc.subject: Memristor
dc.subject: Multilevel operation
dc.subject: Hardware neural network
dc.subject: Deep neural network (DNN)
dc.subject: Convolutional neural network (CNN)
dc.subject: Network architecture
dc.subject: Synaptic weight
dc.title: An Analysis on the Architecture and the Size of Quantized Hardware Neural Networks Based on Memristors
dc.type: info:eu-repo/semantics/article
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.identifier.doi: 10.3390/electronics10243141
dc.type.hasVersion: info:eu-repo/semantics/publishedVersion



Except where otherwise noted, this item's license is described as Atribución 3.0 España (Attribution 3.0 Spain).