Show simple item record

dc.contributor.author: Trillo Vílchez, José Ramón
dc.contributor.author: González-López, Felipe
dc.contributor.author: Morente-Molinera, Juan Antonio
dc.contributor.author: Magán-Carrión, Roberto
dc.contributor.author: García Sánchez, Pablo
dc.date.accessioned: 2025-09-11T11:22:39Z
dc.date.available: 2025-09-11T11:22:39Z
dc.date.issued: 2025-07-31
dc.identifier.citation: Trillo, J.R.; González-López, F.; Morente-Molinera, J.A.; Magán-Carrión, R.; García-Sánchez, P. Evaluation of Explainable, Interpretable and Non-Interpretable Algorithms for Cyber Threat Detection. Electronics 2025, 14, 3073. https://doi.org/10.3390/electronics14153073
dc.identifier.uri: https://hdl.handle.net/10481/106262
dc.description.abstract: As anonymity-enabling technologies such as VPNs and proxies become increasingly exploited for malicious purposes, detecting traffic associated with such services emerges as a critical first step in anticipating potential cyber threats. This study analyses a network traffic dataset focused on anonymised IP addresses—not direct attacks—to evaluate and compare explainable, interpretable, and opaque machine learning models. Through advanced preprocessing and feature engineering, we examine the trade-off between model performance and transparency in the early detection of suspicious connections. We evaluate explainable ML-based models such as k-nearest neighbours, fuzzy algorithms, decision trees, and random forests, alongside interpretable models like naïve Bayes, support vector machines, and non-interpretable algorithms such as neural networks. Results show that neural networks achieve the highest performance, with a macro F1-score of 0.8786, but explainable models like HFER offer strong performance (macro F1-score = 0.6106) with greater interpretability. The choice of algorithm depends on project-specific needs: neural networks excel in accuracy, while explainable algorithms are preferred for resource efficiency and transparency. This work underscores the importance of aligning cybersecurity strategies with operational requirements, providing insights into balancing performance with interpretability.
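The abstract compares models by macro F1-score (0.8786 for neural networks vs. 0.6106 for HFER). As a point of reference for that metric, the sketch below computes macro F1 — the unweighted mean of per-class F1 scores — from scratch in Python. The label vectors are toy, hypothetical data, not results from the paper, and the function is an illustration of the metric's definition rather than the authors' evaluation code.

```python
def macro_f1(y_true, y_pred):
    """Macro F1: per-class F1 computed independently, then averaged
    without class weighting (so minority classes count equally)."""
    classes = sorted(set(y_true) | set(y_pred))
    f1_scores = []
    for c in classes:
        # Confusion-matrix counts for the one-vs-rest view of class c.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)

# Toy binary labels (1 = anonymised traffic, 0 = regular) — hypothetical.
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1]
print(round(macro_f1(y_true, y_pred), 4))  # prints 0.6667
```

Because macro averaging weights every class equally, it is a common choice for imbalanced detection tasks like this one, where the anonymised-traffic class may be rare.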
dc.description.sponsorship: MICIU/AEI/10.13039/501100011033 and ERDF/EU (projects PID2022-139297OB-I00 and PID2023-147409NBC21)
dc.description.sponsorship: Regional Ministry of University, Research and Innovation and the European Union under the Andalusia ERDF Program 2021-2027 (projects C-ING-165-UGR23, C-ING-027-UGR23 and C-ING-300-UGR23)
dc.language.iso: eng
dc.publisher: MDPI
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Cybersecurity
dc.subject: Explainability
dc.subject: Interpretability
dc.title: Evaluation of Explainable, Interpretable and Non-Interpretable Algorithms for Cyber Threat Detection
dc.type: journal article
dc.rights.accessRights: open access
dc.identifier.doi: 10.3390/electronics14153073
dc.type.hasVersion: VoR


Files in this item

[PDF]
