Title: A Green AI Methodology Based on Persistent Homology for Compressing BERT

Authors: Balderas Ruíz, Luis; Lastra Leidinger, Miguel; Benítez Sánchez, José Manuel

Keywords: BERT compression; Green AI; persistent homology

Abstract: Large Language Models (LLMs) such as BERT have gained significant prominence due to their remarkable performance in a variety of natural language processing tasks. However, they come with substantial computational and memory costs. They are also essentially black-box models, making them challenging to explain and interpret. In this article, Persistent BERT Compression and Explainability (PBCE) is proposed: a Green AI methodology for pruning BERT models using persistent homology, which measures the importance of each neuron by studying the topological characteristics of its outputs. As a result, PBCE can compress BERT significantly by reducing the number of parameters (47% of the original parameters for BERT Base, 42% for BERT Large). The proposed methodology has been evaluated on the standard GLUE benchmark, comparing the results with state-of-the-art techniques and achieving outstanding results. Consequently, PBCE can simplify the BERT model by providing explainability for its neurons and reducing the model’s size, making it more suitable for deployment on resource-constrained devices.

Date deposited: 2025-01-07T11:03:14Z
Date issued: 2025-01-03
Type: journal article
Citation: Balderas Ruíz, L.; Lastra Leidinger, M.; Benítez Sánchez, J.M. Appl. Sci. 2025, 15, 390. https://doi.org/10.3390/app15010390
URI: https://hdl.handle.net/10481/98483
DOI: 10.3390/app15010390
Language: English
License: Attribution 4.0 International (CC BY 4.0), http://creativecommons.org/licenses/by/4.0/
Rights: open access
Publisher: MDPI
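Note: the abstract describes scoring neuron importance from the topology of neuron outputs. As a rough illustration of that idea only (not the paper's exact procedure), the following Python sketch scores each neuron of a layer by how much deleting its output dimension perturbs the 0-dimensional persistence diagram of the layer's activation point cloud. The function name neuron_importance, the bottleneck-distance criterion, and the toy data are assumptions for illustration; PBCE's actual criterion is defined in the cited article.

    import numpy as np
    from ripser import ripser      # Vietoris-Rips persistent homology
    from persim import bottleneck  # distance between persistence diagrams

    def finite_bars(dgm):
        # Drop the single infinite H0 bar so the bottleneck distance is finite.
        return dgm[np.isfinite(dgm[:, 1])]

    def neuron_importance(activations):
        # activations: (n_samples, n_neurons) outputs of one layer over a
        # sample of inputs; each sample is a point in neuron-space.
        # Illustrative score: bottleneck distance between the H0 diagram of
        # the full cloud and the diagram with neuron j's dimension removed.
        full = finite_bars(ripser(activations, maxdim=0)['dgms'][0])
        scores = np.zeros(activations.shape[1])
        for j in range(activations.shape[1]):
            reduced = np.delete(activations, j, axis=1)
            dgm = finite_bars(ripser(reduced, maxdim=0)['dgms'][0])
            scores[j] = bottleneck(full, dgm)
        return scores

    # Toy usage: 128 inputs through a hypothetical 16-neuron layer.
    acts = np.random.default_rng(0).normal(size=(128, 16))
    print(neuron_importance(acts))

Neurons whose removal barely moves the diagram contribute little to the topological structure of the layer's outputs and would be natural pruning candidates under a scheme of this kind.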