Study of Quantized Hardware Deep Neural Networks Based on Resistive Switching Devices, Conventional versus Convolutional Approaches
Keywords: Memristor; Multilevel operation; Hardware neural network; Deep neural networks; Convolutional neural networks; Image recognition
Romero-Zaliz, R.; Pérez, E.; Jimenez-Molinos, F.; Wenger, C.; Roldán, J.B. Study of Quantized Hardware Deep Neural Networks Based on Resistive Switching Devices, Conventional versus Convolutional Approaches. Electronics 2021, 10, 346. https://doi.org/10.3390/electronics10030346
Sponsorship: German Research Foundation (DFG) in the frame of research group FOR2093; Spanish Ministry of Science and the FEDER program through project TEC2017-84321-C4-3-R; Consejería de Conocimiento, Investigación y Universidad, Junta de Andalucía and European Regional Development Fund (ERDF) under project A-TIC-117-UGR18; Spanish Ministry of Science, Innovation and Universities under project RTI2018-098983-B-I00
A comprehensive analysis of two types of artificial neural networks (ANNs) is performed to assess the influence of quantization on the synaptic weights. Conventional multilayer perceptrons (MLPs) and convolutional neural networks (CNNs) are considered, varying their features in both the training and inference contexts: the number of levels in the quantization process, the number of hidden layers in the network topology, the number of neurons per hidden layer, the image databases, the number of convolutional layers, etc. A reference technology based on 1T1R structures with bipolar memristors employing HfO2 dielectrics was used, accounting for different multilevel schemes and the corresponding conductance quantization algorithms. The accuracy of the image recognition processes was studied in depth. Studies of this type are essential prior to the hardware implementation of neural networks. The results obtained support the use of CNNs for image domains, which is linked to the role played by convolutional layers in extracting image features and reducing data complexity. In this case, the number of synaptic weights can be reduced in comparison to MLPs.
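The mapping of synaptic weights onto a limited set of memristor conductance states can be illustrated with a minimal sketch. The snippet below assumes a simple uniform quantization scheme over the weight range; the function name and the uniform spacing are illustrative assumptions, not the exact multilevel algorithm studied in the paper.

```python
# Illustrative sketch: uniform quantization of synaptic weights to a fixed
# number of levels, mimicking the discrete conductance states of a
# multilevel memristive device. The uniform scheme is an assumption here.
import numpy as np

def quantize_weights(w, n_levels):
    """Map each weight to the nearest of n_levels evenly spaced values
    spanning [w.min(), w.max()]."""
    levels = np.linspace(w.min(), w.max(), n_levels)     # discrete states
    idx = np.abs(w[..., None] - levels).argmin(axis=-1)  # nearest-level index
    return levels[idx]

w = np.array([-0.9, -0.2, 0.1, 0.75])
wq = quantize_weights(w, 4)  # 4-level (2-bit) quantization
# wq -> [-0.9, -0.35, 0.2, 0.75]
```

In a study like this one, the accuracy of the MLP or CNN would then be evaluated at inference time with the quantized weights, sweeping `n_levels` to find the coarsest scheme the network tolerates.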