Optimization of Multi-Level Operation in RRAM Arrays for In-Memory Computing
Metadata
Author
Pérez Ávila, Antonio Javier; Romero Zaliz, Rocio Celeste; Roldán Aranda, Juan Bautista; Jiménez Molinos, Francisco
Publisher
MDPI
Subject
RRAM arrays; Programming algorithm; Multi-level; Inter-levels switching; In-memory computing; Vector-matrix multiplication
Date
2021
Bibliographic reference
Pérez, E.; Pérez-Ávila, A.J.; Romero-Zaliz, R.; Mahadevaiah, M.K.; Pérez-Bosch Quesada, E.; Roldán, J.B.; Jiménez-Molinos, F.; Wenger, C. Optimization of Multi-Level Operation in RRAM Arrays for In-Memory Computing. Electronics 2021, 10, 1084. https://doi.org/10.3390/electronics10091084
Sponsor
German Research Foundation (DFG) - FOR2093; Government of Andalusia (Spain) and the FEDER program in the frame of the project A.TIC.117.UGR18; Open Access Fund of the Leibniz Association
Abstract
Accomplishing multi-level programming in resistive random access memory (RRAM) arrays with truly discrete and linearly spaced conductive levels is crucial for implementing synaptic weights in hardware-based neuromorphic systems. In this paper, we implemented this feature on 4-kbit 1T1R RRAM arrays by tuning the programming parameters of the multi-level incremental step pulse with verify algorithm (M-ISPVA). The optimized set of parameters was assessed by comparing its results with those of a non-optimized one. The optimized parameters proved to be an effective way to define non-overlapped conductive levels thanks to a strong reduction of both device-to-device and cycle-to-cycle variability, assessed by inter-level switching tests and during 1k reset-set cycles. To evaluate this improvement in realistic scenarios, the experimental characteristics of the RRAM devices were captured by means of a behavioral model, which was used to simulate two different neuromorphic systems: an 8×8 vector-matrix multiplication (VMM) accelerator and a 4-layer feedforward neural network for MNIST database recognition. The results clearly showed that optimizing the programming parameters improved both the precision of the VMM results and the recognition accuracy of the neural network by about 6% compared with the use of non-optimized parameters.
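The M-ISPVA scheme mentioned in the abstract follows the general incremental-step-pulse-with-verify pattern: apply a programming pulse, read back the conductance, and raise the pulse amplitude until the cell lands inside the target window. The exact pulse shapes, voltage ranges, and verify thresholds used in the paper are not given here, so the following is only a minimal sketch with hypothetical parameter values and hypothetical `apply_pulse`/`read_conductance` device helpers:

```python
def ispva_program(apply_pulse, read_conductance, target_g, tol=2e-6,
                  v_start=0.5, v_step=0.1, v_max=3.0):
    """Incremental step pulse with verify loop (illustrative sketch).

    Applies programming pulses of increasing amplitude and verifies the
    cell conductance after each pulse, stopping when the conductance is
    within `tol` of `target_g` or when `v_max` is reached.
    """
    v = v_start
    while v <= v_max:
        apply_pulse(v)                    # programming pulse at amplitude v
        g = read_conductance()            # verify step: read the cell back
        if abs(g - target_g) <= tol:      # cell reached the target level
            return True, v, g
        v += v_step                       # increment the pulse amplitude
    return False, v_max, read_conductance()


# Toy device model (purely illustrative): each pulse adds conductance
# proportional to the applied voltage.
state = {"g": 0.0}
ok, v_final, g_final = ispva_program(
    apply_pulse=lambda v: state.__setitem__("g", state["g"] + v * 1e-6),
    read_conductance=lambda: state["g"],
    target_g=10e-6,
)
```

The verify step after every pulse is what lets such algorithms place cells into non-overlapping conductance windows despite device-to-device variability.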
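In an RRAM-based VMM accelerator like the 8×8 one simulated in the paper, Ohm's law and Kirchhoff's current law carry out the multiplication in the analog domain: each column current is the sum of the row voltages weighted by the programmed cell conductances. A minimal numerical sketch, with assumed (not the paper's) discrete conductance levels and read voltages:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed: four linearly spaced conductance levels, in siemens.
levels = np.array([10e-6, 20e-6, 30e-6, 40e-6])

# 8x8 crossbar with each cell programmed to one of the discrete levels.
G = rng.choice(levels, size=(8, 8))

# Read voltages applied to the 8 rows (assumed small to avoid disturb).
v_in = rng.uniform(0.0, 0.2, size=8)

# Column current I_j = sum_i V_i * G[i, j]  (Ohm's law + current summation),
# which is exactly a vector-matrix product.
i_out = v_in @ G
```

With truly discrete, non-overlapping levels the analog currents map cleanly back to digital weight values; overlapping levels (the non-optimized case) corrupt this mapping and degrade VMM precision.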