Show simple item record

dc.contributor.author: Pialla, Gautier
dc.contributor.author: Bergmeir, Christoph Norbert
dc.date.accessioned: 2023-11-29T08:40:11Z
dc.date.available: 2023-11-29T08:40:11Z
dc.date.issued: 2023-10-24
dc.identifier.citation: Pialla, G., Ismail Fawaz, H., Devanne, M. et al. Time series adversarial attacks: an investigation of smooth perturbations and defense approaches. Int J Data Sci Anal (2023). https://doi.org/10.1007/s41060-023-00438-0
dc.identifier.uri: https://hdl.handle.net/10481/85913
dc.description: Open Access funding enabled and organized by CAUL and its Member Institutions. This work was funded by the ArtIC project “Artificial Intelligence for Care” (Grant ANR-20-THIA-0006-01) and co-funded by Région Grand Est, Inria Nancy - Grand Est, the IHU of Strasbourg, the University of Strasbourg and the University of Haute-Alsace.
dc.description.abstract: Adversarial attacks are a threat to every deep neural network. They are particularly effective when they perturb a given model while remaining undetectable. They were initially introduced for image classifiers and are well studied for that task. For time series, few attacks have been proposed so far, and most of them are adaptations of attacks originally designed for image classifiers. Although these attacks are effective, they generate perturbations containing clearly discernible patterns such as sawtooth and spikes: whereas adversarial patterns are imperceptible on images, the attacks proposed to date are readily perceptible on time series. In order to generate stealthier adversarial attacks for time series, we propose a new attack that produces smoother perturbations. We introduce a function to measure the smoothness of time series and, using it, find that smooth perturbations are harder to detect, both visually by the naked eye and with deep learning models. We also show two ways of protecting against adversarial attacks: the first detects the attacks using a deep model; the second uses adversarial training to improve the robustness of a model against a specific attack, thus making it less vulnerable.
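The subject keywords list BIM (the Basic Iterative Method), and the abstract describes combining an iterative attack with a smoothness objective. The following is a minimal sketch of that idea, assuming PyTorch; the `smoothness` proxy (mean squared first difference), the `smooth_bim` function, and the penalty weight `lam` are illustrative assumptions, not the paper's own formulation. `model` stands for any differentiable time-series classifier (e.g., InceptionTime, also listed in the keywords).

```python
# Hedged sketch: a BIM-style attack with a smoothness penalty on the
# perturbation. Names and the smoothness measure are assumptions made
# for illustration; the paper's actual definitions are not reproduced.
import torch
import torch.nn.functional as F


def smoothness(x: torch.Tensor) -> torch.Tensor:
    # Rough smoothness proxy: mean squared first difference along the
    # time axis (last dimension). Lower values mean a smoother series.
    return (x[..., 1:] - x[..., :-1]).pow(2).mean()


def smooth_bim(model, x, y, eps=0.1, alpha=0.01, steps=10, lam=1.0):
    # Iteratively ascend the classification loss while penalizing
    # non-smooth perturbations, then project the perturbation back
    # into an L-inf ball of radius eps around the clean series x.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y) - lam * smoothness(x_adv - x)
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # BIM sign step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
    return x_adv
```

Penalizing the first difference of the perturbation discourages exactly the sawtooth and spike patterns the abstract mentions; examples generated this way on the training set could then feed the adversarial-training defense described at the end of the abstract.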
dc.description.sponsorship: Open Access funding enabled and organized by CAUL and its Member Institutions
dc.description.sponsorship: ArtIC project “Artificial Intelligence for Care” (Grant ANR-20-THIA-0006-01)
dc.description.sponsorship: Co-funded by Région Grand Est, Inria Nancy - Grand Est, IHU of Strasbourg, University of Strasbourg and the University of Haute-Alsace
dc.language.iso: eng
dc.publisher: Springer Nature
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.subject: Time series
dc.subject: Adversarial attack
dc.subject: Smooth perturbations
dc.subject: InceptionTime
dc.subject: BIM
dc.title: Time series adversarial attacks: an investigation of smooth perturbations and defense approaches
dc.type: info:eu-repo/semantics/article
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.identifier.doi: 10.1007/s41060-023-00438-0
dc.type.hasVersion: info:eu-repo/semantics/publishedVersion

