Quality In / Quality Out: Data quality more relevant than model choice in anomaly detection with the UGR’16
Identifiers
URI: https://hdl.handle.net/10481/81203
Publisher
NOMS 2023-2023 IEEE/IFIP Network Operations and Management Symposium
Subject
NetFlow; UGR'16; anomaly detection; data quality
Date
2023
Sponsorship
This work was supported by the Agencia Estatal de Investigación in Spain, grant No PID2020-113462RB-I00, and the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 893146.
Abstract
Autonomous or self-driving networks are expected to provide a solution to the myriad of extremely demanding new applications in the Future Internet. The key to handling this complexity is to perform tasks like network optimization and failure recovery with minimal human supervision. For this purpose, the community relies on the development of new Machine Learning (ML) models and techniques. However, ML can only be as good as the data it is fitted with. Datasets provided to the community as benchmarks for research purposes, which have a relevant impact on research findings and directions, are often assumed to be of good quality by default. In this paper, we show that relatively minor modifications to the same benchmark dataset (UGR'16, a flow-based real-traffic dataset for anomaly detection) have significantly more impact on model performance than the specific ML technique considered. To understand this finding, we contribute a methodology to investigate the root causes of those differences and to assess the quality of the data labelling. Our findings illustrate the need to devote more attention to (automatic) data quality assessment and optimization techniques in the context of autonomous networks.
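The abstract's central claim, that a change in data quality (such as noisy labelling) can affect measured performance more than the choice of ML technique, can be illustrated with a toy experiment. The synthetic "flow-like" features, the two simple detectors, and the 30% label-flip rate below are illustrative assumptions for the sketch, not the paper's actual datasets, models, or methodology.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for flow features: normal traffic clustered
# near the origin, anomalous traffic shifted away from it.
n = 1000
X = np.vstack([rng.normal(0.0, 1.0, size=(n, 3)),      # normal flows
               rng.normal(4.0, 1.0, size=(n // 10, 3))])  # anomalies
y = np.concatenate([np.zeros(n), np.ones(n // 10)])

def zscore_detector(X, threshold=2.0):
    # Flag points whose mean absolute z-score exceeds a threshold.
    z = (X - X.mean(axis=0)) / X.std(axis=0)
    return (np.abs(z).mean(axis=1) > threshold).astype(float)

def distance_detector(X, threshold=3.0):
    # Flag points far from the global centroid (Euclidean distance).
    d = np.linalg.norm(X - X.mean(axis=0), axis=1)
    return (d > threshold).astype(float)

def accuracy(pred, labels):
    return float((pred == labels).mean())

# "Lower-quality" variant of the same data: flip 30% of the labels
# at random to emulate imperfect labelling of the benchmark.
y_noisy = y.copy()
flip = rng.choice(len(y), size=int(0.3 * len(y)), replace=False)
y_noisy[flip] = 1 - y_noisy[flip]

for name, det in [("z-score ", zscore_detector),
                  ("distance", distance_detector)]:
    pred = det(X)
    print(name, "clean labels:", round(accuracy(pred, y), 3),
          " noisy labels:", round(accuracy(pred, y_noisy), 3))
```

Running the sketch, the accuracy drop caused by the label-quality change (tens of percentage points, since roughly 30% of evaluation labels are now wrong) dwarfs the gap between the two detectors on the same data variant, mirroring the pattern the paper reports at benchmark scale.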