Title: Evaluation of Explainable, Interpretable and Non-Interpretable Algorithms for Cyber Threat Detection
Authors: Trillo Vílchez, José Ramón; González-López, Felipe; Morente-Molinera, Juan Antonio; Magán-Carrión, Roberto; García Sánchez, Pablo
Keywords: Cybersecurity; Explainability; Interpretability

Abstract: As anonymity-enabling technologies such as VPNs and proxies become increasingly exploited for malicious purposes, detecting traffic associated with such services emerges as a critical first step in anticipating potential cyber threats. This study analyses a network traffic dataset focused on anonymised IP addresses, rather than direct attacks, to evaluate and compare explainable, interpretable, and opaque machine learning models. Through advanced preprocessing and feature engineering, we examine the trade-off between model performance and transparency in the early detection of suspicious connections. We evaluate explainable ML-based models such as k-nearest neighbours, fuzzy algorithms, decision trees, and random forests; interpretable models such as naïve Bayes and support vector machines; and non-interpretable algorithms such as neural networks. Results show that neural networks achieve the highest performance, with a macro F1-score of 0.8786, while explainable models like HFER offer solid performance (macro F1-score = 0.6106) with greater interpretability. The choice of algorithm depends on project-specific needs: neural networks excel in accuracy, while explainable algorithms are preferred for resource efficiency and transparency. This work underscores the importance of aligning cybersecurity strategies with operational requirements, providing insights into balancing performance with interpretability.

Date deposited: 2025-09-11T11:22:39Z
Date issued: 2025-07-31
Type: journal article
Citation: Trillo, J.R.; González-López, F.; Morente-Molinera, J.A.; Magán-Carrión, R.; García-Sánchez, P. Evaluation of Explainable, Interpretable and Non-Interpretable Algorithms for Cyber Threat Detection. Electronics 2025, 14, 3073. https://doi.org/10.3390/electronics14153073
Handle: https://hdl.handle.net/10481/106262
DOI: 10.3390/electronics14153073
Language: eng
License: Creative Commons Attribution 4.0 International (http://creativecommons.org/licenses/by/4.0/), open access
Publisher: MDPI
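The abstract compares models by macro F1-score (the unweighted mean of per-class F1 scores, which treats a rare "anonymised traffic" class on equal footing with the majority class). As a minimal illustration of the metric itself, a pure-Python sketch follows; the toy labels are invented for demonstration and are not drawn from the paper's dataset:

```python
def macro_f1(y_true, y_pred):
    """Macro F1: unweighted mean of per-class F1 scores.

    Each class contributes equally regardless of its frequency,
    which is why macro F1 is common for imbalanced detection tasks.
    """
    labels = sorted(set(y_true) | set(y_pred))
    f1_scores = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        if precision + recall:
            f1_scores.append(2 * precision * recall / (precision + recall))
        else:
            f1_scores.append(0.0)
    return sum(f1_scores) / len(f1_scores)

# Hypothetical labels: "vpn" marks anonymised connections, "normal" the rest.
y_true = ["vpn", "vpn", "normal", "normal", "vpn", "normal"]
y_pred = ["vpn", "normal", "normal", "normal", "vpn", "vpn"]
print(round(macro_f1(y_true, y_pred), 4))  # → 0.6667
```

Equivalent to scikit-learn's `f1_score(y_true, y_pred, average="macro")`; the hand-rolled version just makes the per-class averaging explicit.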