Digital implementation of Radial Basis Function Neural Networks based on Stochastic Computing
Metadata
Author
Morán, Alejandro; Parrilla Roure, Luis; Roca, Miquel; Font-Rossello, Joan; Isern, Eugeni; Canals, Vicent
Publisher
IEEE
Subject
Field-programmable gate array (FPGA); K-means; Pattern recognition
Date
2022-12-22
Bibliographic reference
Published version: A. Morán, L. Parrilla, M. Roca, J. Font-Rossello, E. Isern and V. Canals. Digital Implementation of Radial Basis Function Neural Networks Based on Stochastic Computing, in IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 13, no. 1, pp. 257-269, March 2023, doi: 10.1109/JETCAS.2022.3231708
Sponsorship
Spanish Ministry of Science and Innovation, PID2020-120075RBI00; Consejería de Economía, Conocimiento, Empresas y Universidad, Junta de Andalucía (Spain), PDC2021-121847-I00; European Regional Development Fund (ERDF), B-TIC-588-UGR20
Abstract
Internet of Things (IoT) and mobile systems increasingly rely on Machine Learning based solutions, which demand high computational performance at low energy consumption. This is causing a revival of interest in unconventional hardware computing methods capable of implementing both linear and nonlinear functions with less hardware overhead than conventional fixed-point and floating-point alternatives. In particular, this work proposes a novel Radial Basis Function Neural Network (RBF-NN) hardware implementation based on Stochastic Computing (SC), which applies probabilistic laws over conventional digital gates. Several designs of the complex functions required to implement an RBF-NN are presented and theoretically analyzed, such as the squared Euclidean distance and the stochastic Gaussian kernel similarity function between input samples and prototypes. The efficiency and performance of the methodology are tested on well-known pattern recognition tasks, including the MNIST dataset. The results show a low-cost methodology in terms of logic resources and power, along with an inherent capability to implement complex functions in a simple way. This methodology enables the implementation of massively parallel, large-scale RBF-NNs with relatively low hardware requirements while maintaining 96.20% accuracy, which is nearly the same as that of the floating-point and fixed-point models (96.4% and 96.25%, respectively).
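The core SC principle the abstract refers to can be illustrated with a minimal software sketch (not from the paper; the names and bitstream length are illustrative assumptions): in unipolar SC a value p in [0, 1] is encoded as a random bitstream whose probability of a 1 is p, and a single AND gate then multiplies two independent streams, since P(a AND b) = P(a)·P(b).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # bitstream length (illustrative); estimation accuracy grows with N

def to_bitstream(p, n=N):
    """Unipolar SC encoding: value p in [0, 1] -> Bernoulli(p) bitstream."""
    return rng.random(n) < p

a, b = 0.8, 0.5
# A single AND gate acts as a multiplier on independent unipolar bitstreams.
prod = to_bitstream(a) & to_bitstream(b)
# Decoding = counting ones: the mean of the output stream estimates a * b.
estimate = prod.mean()
print(estimate)  # close to 0.4, with statistical error shrinking as N grows
```

The same trade-off drives the paper's results: each arithmetic operation collapses to one or a few gates, at the cost of long bitstreams and statistical estimation error.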





