Title: Power-efficient implementation of ternary neural networks in edge devices

Authors: Molina, Manuel; Méndez, Javier; Morales Santos, Diego Pedro; Castillo Morales, María Encarnación; López Vallejo, Marisa; Pegalajar Cuéllar, Manuel

Abstract: There is a growing interest in pushing computation to the edge, especially the problem-solving abilities of artificial neural networks (ANNs). This article presents a simplified method to obtain a ternary neural network based on the multilayer perceptron. The method targets resource-constrained devices, where memory, computing power, and battery life are among the most relevant constraints. A dynamic threshold is estimated to perform ternarization, and a new pruning technique is proposed that drastically reduces the ANN's size, with a corresponding decrease in resource utilization and power consumption of the resulting hardware. In addition, a support framework has been developed to automate hardware design exploration and generation from the network trained in software. Experimental results show that the proposed method and architecture, when implemented on a field-programmable gate array (FPGA), provide excellent figures in power (0.11–0.13 W) and efficiency (1225–1448 kfps/W) compared with the state of the art, with an efficiency roughly double the highest previously reported.

Date: 2022-05-05
Type: journal article
Citation: M. Molina, J. Mendez, D. P. Morales, E. Castillo, M. L. Vallejo and M. Pegalajar, "Power-Efficient Implementation of Ternary Neural Networks in Edge Devices," in IEEE Internet of Things Journal, vol. 9, no. 20, pp. 20111-20121, 15 Oct. 2022, doi: 10.1109/JIOT.2022.3172843.
URI: https://hdl.handle.net/10481/101050
DOI: 10.1109/JIOT.2022.3172843
Language: eng
License: Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License (http://creativecommons.org/licenses/by-nc-nd/3.0/)
Access: open access
Publisher: IEEE
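The abstract refers to ternarization of a multilayer perceptron using a dynamically estimated threshold. The paper's exact threshold-estimation and pruning procedures are not reproduced here; the following is only a minimal Python sketch of threshold-based ternarization, assuming a per-layer heuristic threshold (delta = 0.7 * mean(|W|), a rule commonly used for ternary weight networks) as a stand-in for the dynamic threshold described in the article.

```python
import numpy as np

def ternarize(weights, delta=None):
    """Quantize a weight matrix to {-1, 0, +1}.

    If no threshold is given, a heuristic per-layer threshold
    (delta = 0.7 * mean(|W|)) is used; this is an assumption for
    illustration, not the paper's dynamic threshold estimator.
    """
    w = np.asarray(weights, dtype=np.float32)
    if delta is None:
        delta = 0.7 * float(np.abs(w).mean())
    t = np.zeros_like(w, dtype=np.int8)
    t[w > delta] = 1     # strong positive weights map to +1
    t[w < -delta] = -1   # strong negative weights map to -1
    return t, delta      # values within [-delta, delta] stay at 0

# Example: ternarize one fully connected layer of a multilayer perceptron
rng = np.random.default_rng(0)
layer = rng.normal(0.0, 0.1, size=(128, 64)).astype(np.float32)
t_layer, delta = ternarize(layer)
zero_fraction = float((t_layer == 0).mean())
print(f"threshold={delta:.4f}, zero fraction={zero_fraction:.2%}")
```

The fraction of weights quantized to zero indicates how much of the network could subsequently be pruned, which is what drives the resource and power savings reported for the FPGA implementation.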