An Analysis on the Architecture and the Size of Quantized Hardware Neural Networks Based on Memristors
Author: Romero Zaliz, Rocio Celeste; Cantudo, Antonio; Jiménez Molinos, Francisco; Roldán Aranda, Juan Bautista
Keywords: Memristor; Multilevel operation; Hardware neural network; Deep neural network (DNN); Convolutional neural network (CNN); Network architecture; Synaptic weight
Citation: Romero-Zaliz, R... [et al.]. An Analysis on the Architecture and the Size of Quantized Hardware Neural Networks Based on Memristors. Electronics 2021, 10, 3141. [https://doi.org/10.3390/electronics10243141]
Sponsorship: German Research Foundation (DFG) under Project 434434223-SFB1461; Federal Ministry of Education and Research of Germany under Grant 16ME0092; Consejería de Conocimiento, Investigación y Universidad, Junta de Andalucía (Spain) and European Regional Development Fund (ERDF) under projects A-TIC-117-UGR18, B-TIC-624-UGR20 and IE2017-5414; Spanish Ministry of Science, Innovation and Universities and ERDF fund under projects RTI2018-098983-B-I00 and TEC2017-84321-C4-3-R
Abstract: We have performed several simulation experiments on hardware neural networks (NNs) to analyze how the number of synapses affects network accuracy for different NN architectures, considering several datasets. A technology based on 4-kbit 1T1R ReRAM arrays, in which resistive switching devices with HfO2 dielectrics are employed, is taken as the reference. In our study, fully dense neural networks (FdNNs) and convolutional neural networks (CNNs) were considered, and the NN size was varied in terms of the number of synapses and of hidden-layer neurons. CNNs work better when the number of synapses available is limited. If quantized synaptic weights are included, we observed that NN accuracy decreases significantly as the number of synapses is reduced; in this respect, a trade-off between the number of synapses and the NN accuracy has to be achieved. Consequently, the CNN architecture must be carefully designed; in particular, it was noticed that different datasets need specific architectures, according to their complexity, to achieve good results. Given the number of variables that can be changed when optimizing a hardware NN implementation, a specific solution has to be worked out in each case in terms of synaptic weight levels, NN architecture, etc.
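The quantized synaptic weights discussed in the abstract can be illustrated with a minimal sketch: software-trained weights are mapped to a small set of discrete values, mimicking the limited conductance states of a multilevel memristor. The uniform level spacing and the function `quantize_weights` below are illustrative assumptions, not the paper's exact quantization scheme.

```python
import numpy as np

def quantize_weights(weights, n_levels):
    """Uniformly quantize synaptic weights to n_levels discrete values,
    analogous to the finite conductance states of a multilevel memristor.
    (Illustrative uniform scheme; the paper's method may differ.)"""
    w_min, w_max = weights.min(), weights.max()
    # n_levels evenly spaced target values spanning the weight range
    levels = np.linspace(w_min, w_max, n_levels)
    # Snap each weight to its nearest level
    idx = np.abs(weights[..., None] - levels).argmin(axis=-1)
    return levels[idx]

# Example: quantize a small weight matrix to 4 levels (2-bit synapses)
w = np.array([[0.12, -0.87, 0.45],
              [0.98, -0.33, 0.01]])
wq = quantize_weights(w, 4)
```

Reducing `n_levels` coarsens the synaptic weights, which is the source of the accuracy loss the study reports when the synapse count is also reduced.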