Evaluation of Explainable, Interpretable and Non-Interpretable Algorithms for Cyber Threat Detection
Metadata
Author
Trillo Vílchez, José Ramón; González-López, Felipe; Morente-Molinera, Juan Antonio; Magán-Carrión, Roberto; García Sánchez, Pablo
Publisher
MDPI
Subject
Cybersecurity; Explainability; Interpretability
Date
2025-07-31
Citation
Trillo, J.R.; González-López, F.; Morente-Molinera, J.A.; Magán-Carrión, R.; García-Sánchez, P. Evaluation of Explainable, Interpretable and Non-Interpretable Algorithms for Cyber Threat Detection. Electronics 2025, 14, 3073. https://doi.org/10.3390/electronics14153073
Funding
MICIU/AEI/10.13039/501100011033 and ERDF/EU (projects PID2022-139297OB-I00 and PID2023-147409NBC21); Regional Ministry of University, Research and Innovation and the European Union under the Andalusia ERDF Program 2021-2027 (projects C-ING-165-UGR23, C-ING-027-UGR23 and C-ING-300-UGR23)
Abstract
As anonymity-enabling technologies such as VPNs and proxies are increasingly exploited for malicious purposes, detecting traffic associated with such services is a critical first step in anticipating potential cyber threats. This study analyses a network traffic dataset focused on anonymised IP addresses, not direct attacks, to evaluate and compare explainable, interpretable, and opaque machine learning models. Through advanced preprocessing and feature engineering, we examine the trade-off between model performance and transparency in the early detection of suspicious connections. We evaluate explainable ML-based models such as k-nearest neighbours, fuzzy algorithms, decision trees, and random forests, alongside interpretable models like naïve Bayes and support vector machines, and non-interpretable algorithms such as neural networks. Results show that neural networks achieve the highest performance, with a macro F1-score of 0.8786, while explainable models such as HFER offer solid performance (macro F1-score = 0.6106) with greater interpretability. The choice of algorithm depends on project-specific needs: neural networks excel in accuracy, whereas explainable algorithms are preferred for resource efficiency and transparency. This work underscores the importance of aligning cybersecurity strategies with operational requirements, providing insights into balancing performance with interpretability.
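The comparison described in the abstract, ranking explainable and non-interpretable models by macro F1-score, can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the dataset here is synthetic stand-in data, and the two scikit-learn models are assumptions chosen to represent the explainable (decision tree) and non-interpretable (neural network) families.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a labelled network-traffic dataset (hypothetical features).
X, y = make_classification(n_samples=1000, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "decision_tree": DecisionTreeClassifier(random_state=0),    # explainable
    "neural_net": MLPClassifier(max_iter=500, random_state=0),  # non-interpretable
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # Macro F1 averages per-class F1 scores, weighting every class equally,
    # which matters when suspicious traffic classes are imbalanced.
    macro_f1 = f1_score(y_te, model.predict(X_te), average="macro")
    print(f"{name}: macro F1 = {macro_f1:.4f}")
```

Macro averaging is the sensible choice here because minority classes (the suspicious connections) would otherwise be drowned out by a micro or accuracy-style average.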