dc.contributor.author | Camacho Páez, José | |
dc.contributor.author | Wasielewska, Katarzyna | |
dc.contributor.author | Espinosa, Pablo | |
dc.contributor.author | Fuentes García, Raquel María | |
dc.date.accessioned | 2023-04-24T07:45:59Z | |
dc.date.available | 2023-04-24T07:45:59Z | |
dc.date.issued | 2023 | |
dc.identifier.uri | https://hdl.handle.net/10481/81203 | |
dc.description.abstract | Autonomous or self-driving networks are expected to provide a solution to the myriad of extremely demanding new applications in the Future Internet. The key to handling this complexity is to perform tasks such as network optimization and failure recovery with minimal human supervision. For this purpose, the community relies on the development of new Machine Learning (ML) models and techniques. However, ML can only be as good as the data it is fitted with. Datasets provided to the community as benchmarks for research purposes, which have a significant impact on research findings and directions, are often assumed to be of good quality by default. In this paper, we show that relatively minor modifications to the same benchmark dataset (UGR’16, a flow-based real-traffic dataset for anomaly detection) have a significantly greater impact on model performance than the specific ML technique considered. To understand this finding, we contribute a methodology for investigating the root causes of those differences and for assessing the quality of the data labelling. Our findings illustrate the need to devote more attention to (automatic) data quality assessment and optimization techniques in the context of autonomous networks. | es_ES |
dc.description.sponsorship | This work was supported by the Agencia Estatal de Investigación in Spain, grant No PID2020-113462RB-I00, and the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 893146. | es_ES |
dc.language.iso | eng | es_ES |
dc.publisher | NOMS 2023-2023 IEEE/IFIP Network Operations and Management Symposium | es_ES |
dc.rights | Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License | en_EN |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/ | en_EN |
dc.subject | NetFlow | es_ES |
dc.subject | UGR’16 | es_ES |
dc.subject | anomaly detection | es_ES |
dc.subject | data quality | es_ES |
dc.title | Quality In / Quality Out: Data quality more relevant than model choice in anomaly detection with the UGR’16 | es_ES |
dc.type | conference output | es_ES |
dc.rights.accessRights | open access | es_ES |
dc.type.hasVersion | SMUR | es_ES |