Title: Optimizing Convolutional Neural Network Architectures
Authors: Balderas, Luis; Lastra Leidinger, Miguel; Benítez, José María
Keywords: convolutional neural network simplification; neural network pruning; efficient machine learning

Abstract: Convolutional neural networks (CNNs) are commonly employed for demanding applications such as speech recognition, natural language processing, and computer vision. As CNN architectures become more complex, their computational demands grow, leading to substantial energy consumption and complicating their deployment on resource-limited devices (e.g., edge devices). Furthermore, a new line of research seeking more sustainable approaches to Artificial Intelligence development is increasingly drawing attention: Green AI. Motivated by the goal of optimizing Machine Learning models, in this paper we propose Optimizing Convolutional Neural Network Architectures (OCNNA), a novel CNN optimization and construction method based on pruning, designed to establish the importance of convolutional layers. The proposal was evaluated through a thorough empirical study on well-known datasets (CIFAR-10, CIFAR-100, and ImageNet) and CNN architectures (VGG-16, ResNet-50, DenseNet-40, and MobileNet), using accuracy drop and the remaining-parameters ratio as objective metrics to compare the performance of OCNNA with other state-of-the-art approaches. Our method was compared with more than 20 convolutional neural network simplification algorithms, obtaining outstanding results. OCNNA is thus a competitive CNN construction method that could ease the deployment of neural networks on IoT or resource-limited devices.

Deposited: 2024-11-03T21:07:50Z
Date issued: 2024-09-28
Type: journal article
Citation: Balderas, L.; Lastra, M.; Benítez, J.M. Mathematics 2024, 12, 3032. https://doi.org/10.3390/math12193032
URI: https://hdl.handle.net/10481/96553
DOI: 10.3390/math12193032
Language: eng
License: Attribution 4.0 International (http://creativecommons.org/licenses/by/4.0/), open access
Publisher: MDPI
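The abstract describes pruning driven by an importance score for convolutional units. As a rough illustration only (the actual OCNNA importance measure is defined in the paper, not here), a generic L1-norm filter-pruning step over a convolutional weight tensor might look like this sketch; the function names and the `keep_ratio` parameter are illustrative assumptions:

```python
import numpy as np

def filter_importance(weights: np.ndarray) -> np.ndarray:
    """L1-norm importance score per output filter.

    weights: conv kernel of shape (out_channels, in_channels, kh, kw).
    Returns one score per output filter.
    """
    return np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)

def prune_filters(weights: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Keep only the `keep_ratio` fraction of filters with highest importance."""
    scores = filter_importance(weights)
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    # Indices of the top-scoring filters, kept in their original order.
    keep_idx = np.sort(np.argsort(scores)[-n_keep:])
    return weights[keep_idx]

# Toy example: a layer with 8 filters of shape 3x3x3, keeping half of them.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
pruned = prune_filters(w, keep_ratio=0.5)
print(pruned.shape)  # (4, 3, 3, 3)
```

This mirrors the general idea behind the accuracy-drop vs. remaining-parameters trade-off the abstract mentions: removing low-importance filters shrinks the parameter count while aiming to preserve accuracy.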