pith. machine review for the scientific record

arxiv: 2605.07690 · v1 · submitted 2026-05-08 · 💻 cs.LG

Recognition: 2 Lean theorem links

Fortifying Time Series: DTW-Certified Robust Anomaly Detection

Authors on Pith: no claims yet

Pith reviewed 2026-05-11 03:14 UTC · model grok-4.3

classification 💻 cs.LG
keywords time-series anomaly detection · Dynamic Time Warping · certified robustness · randomized smoothing · adversarial attacks · temporal data

The pith

Time-series anomaly detection gains its first certified robustness guarantee under Dynamic Time Warping by adapting randomized smoothing through a lower-bound transformation from ℓ_p norms to DTW distance.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper aims to create certifiably robust anomaly detectors for time series that hold up against adversarial attacks respecting temporal alignments, as captured by Dynamic Time Warping distance. Standard certified defenses rely on l_p norms that treat each time point independently and ignore shifts or stretches in the series, making them unsuitable for this domain. By transforming l_p perturbations into a lower bound on DTW distance, randomized smoothing can be applied to yield guarantees in the DTW metric. A sympathetic reader would care because time series data powers safety-critical systems where small timing manipulations could cause undetected anomalies or false alarms with serious consequences.
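The conversion the paper describes can be sketched in a few lines. This is a hedged illustration, not the paper's actual procedure: the radius formula is the standard binary-case result from randomized smoothing (Cohen et al., 2019), while the constant `c` in the assumed bound DTW(x, y) ≥ c · ||x − y||_2 is a placeholder the abstract does not specify.

```python
from statistics import NormalDist

def l2_certified_radius(p_a: float, sigma: float) -> float:
    # Standard binary-case randomized-smoothing radius (Cohen et al., 2019):
    # if the smoothed detector's top-class probability under N(0, sigma^2 I)
    # noise is p_a > 1/2, the decision is constant within l2 radius
    # sigma * Phi^{-1}(p_a).
    return sigma * NormalDist().inv_cdf(p_a)

def dtw_certified_radius(p_a: float, sigma: float, c: float) -> float:
    # Hypothetical bridge: if DTW(x, y) >= c * ||x - y||_2 for the
    # perturbations in scope, then DTW(x, y) <= c * R implies
    # ||x - y||_2 <= R, so the l2 certificate transfers to a DTW ball of
    # radius c * R. The value of c is the paper's lower-bound
    # transformation, which the abstract does not state explicitly.
    return c * l2_certified_radius(p_a, sigma)
```

With p_a = 0.9 and sigma = 0.5 the ℓ_2 radius is about 0.64; any DTW radius obtained this way shrinks by the factor c, which is why the tightness of the bound matters so much in the referee's first major comment.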

Core claim

We introduce the first DTW-certified robust defense in time-series anomaly detection by adapting the randomized smoothing paradigm. We develop this certificate by bridging the l_p-norm to DTW distance through a lower-bound transformation. Extensive experiments across various datasets and models validate the effectiveness and practicality of our theoretical approach.

What carries the argument

The lower-bound transformation from l_p-norm perturbations to Dynamic Time Warping (DTW) distances that enables the construction of certified radii for randomized smoothing in the DTW metric.

If this is right

  • Certified radii can now be provided for time-series anomaly models against DTW-based attacks rather than only l_p attacks.
  • Models achieve improved robustness, with up to 18.7% higher F1-scores under DTW adversarial attacks compared to traditional certified models.
  • The defense applies to various time-series datasets and underlying detection models.
  • Practical computation of certificates is possible without optimizing directly in the DTW space.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • This method could be adapted for other temporal machine learning tasks such as classification or forecasting that rely on DTW similarity.
  • Future work might focus on deriving tighter lower bounds to increase the size of usable certified radii.
  • Combining this DTW certificate with domain-specific constraints on time series could further strengthen guarantees against realistic attacks.

Load-bearing premise

The lower-bound transformation from l_p-norm to DTW distance yields a sufficiently tight certificate for practical models and attack strengths.

What would settle it

A demonstration that a smoothed model can be fooled by a DTW perturbation whose size is within the certified radius computed via the l_p-to-DTW lower bound would falsify the certificate.
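To make the premise concrete, one common realization of a "smoothed model" is a Monte Carlo majority vote over Gaussian-perturbed inputs. This is a sketch under assumed names (`score` and `threshold` stand in for whatever base detector the paper smooths; none of this is the paper's code):

```python
import random

def smoothed_is_anomaly(score, x, threshold, sigma, n_samples=1000, seed=0):
    # Majority vote of the base detector's thresholded anomaly score over
    # Gaussian-perturbed copies of the series; p_a estimates the smoothed
    # top-class probability that a certification step would lower-bound.
    rng = random.Random(seed)
    votes = sum(
        score([v + rng.gauss(0.0, sigma) for v in x]) > threshold
        for _ in range(n_samples)
    )
    p_a = max(votes, n_samples - votes) / n_samples
    return votes > n_samples / 2, p_a
```

A counterexample of the kind described above would be a perturbed series within the certified DTW radius that flips this majority vote.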

Figures

Figures reproduced from arXiv: 2605.07690 by Christopher Leckie, Sarah Erfani, Shijie Liu, Tansu Alpcan.

Figure 1
Figure 1: Comparison of standard, ℓp-norm-robust, and DTW-robust anomaly detectors under adversarial perturbations. DTW facilitates optimal temporal alignment, offering a more meaningful similarity measure for time-series data, thereby ensuring more comprehensive robustness guarantees against adversarial examples. [PITH_FULL_IMAGE:figures/full_fig_p002_1.png] view at source ↗
Figure 2
Figure 2: Construct any anomaly detector with anomaly score function [PITH_FULL_IMAGE:figures/full_fig_p006_2.png] view at source ↗
Figure 3
Figure 3: Certified accuracy and certified F1-score as functions of the DTW perturbation threshold [PITH_FULL_IMAGE:figures/full_fig_p008_3.png] view at source ↗
Original abstract

Time-series anomaly detection is critical for ensuring safety in high-stakes applications, where robustness is a fundamental requirement rather than a mere performance metric. Addressing the vulnerability of these systems to adversarial manipulation is therefore essential. Existing defenses are largely heuristic or provide certified robustness only under $\ell_p$-norm constraints, which are incompatible with time-series data. In particular, $\ell_p$-norm fails to capture the intrinsic temporal structure in time series, causing small temporal distortions to significantly alter the $\ell_p$-norm measures. Instead, the similarity metric \emph{Dynamic Time Warping} (DTW) is more suitable and widely adopted in the time-series domain, as DTW accounts for temporal alignment and remains robust to temporal variations. To date, however, there has been no certifiable robustness result in this metric that provides guarantees. In this work, we introduce the first \emph{DTW-certified robust defense} in time-series anomaly detection by adapting the randomized smoothing paradigm. We develop this certificate by bridging the $\ell_p$-norm to DTW distance through a lower-bound transformation. Extensive experiments across various datasets and models validate the effectiveness and practicality of our theoretical approach. Results demonstrate significantly improved performance, e.g., up to 18.7\% in F1-score under DTW-based adversarial attacks compared to traditional certified models.
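The abstract's contrast between ℓ_p and DTW is easy to see in code. Below is the textbook DTW dynamic program with absolute point-wise cost and no warping-window constraint (conventions vary across papers, and the paper's exact variant is not stated in this excerpt): a time-stretched copy of a series has DTW distance zero while any fixed point-wise comparison sees a large gap.

```python
def dtw(x, y):
    # O(n*m) dynamic program for Dynamic Time Warping with |.| cost.
    n, m = len(x), len(y)
    D = [[float("inf")] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = abs(x[i - 1] - y[j - 1])
            D[i][j] = step + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

x = [0.0, 1.0, 2.0, 1.0, 0.0]
stretched = [0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 1.0, 1.0, 0.0, 0.0]
# dtw(x, stretched) is 0: the warping path aligns each sample with both of
# its duplicates, whereas a point-wise (l_p-style) comparison of the two
# series sees substantial distortion.
```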

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper claims to introduce the first DTW-certified robust defense for time-series anomaly detection by adapting the randomized smoothing framework and bridging the ℓ_p-norm to DTW distance via a lower-bound transformation. It reports empirical gains of up to 18.7% F1-score under DTW-based adversarial attacks across datasets and models.

Significance. A sound, non-vacuous DTW certificate would address a genuine gap, since ℓ_p-norm certificates are known to be mismatched to temporal data and DTW is the de facto similarity measure in the domain. The randomized-smoothing adaptation is a standard and reusable technique; if the lower-bound transformation yields usable radii and the experiments include certified-radius tables, the contribution would be solid.

major comments (2)
  1. [Abstract and §3] Abstract and §3 (theoretical construction): the central claim rests on a lower-bound transformation DTW(x,y) ≥ c · ||x-y||_p (or equivalent) that converts an ℓ_p randomized-smoothing certificate into a DTW certificate. No explicit form of the constant c, no derivation, and no tightness analysis are supplied, so it is impossible to determine whether the resulting DTW radius is non-trivial for realistic attack strengths.
  2. [§4] §4 (experiments): the reported F1 improvements are given only under DTW-based attacks; the paper must also report the actual certified DTW radii (or the effective c values) achieved by the smoothed models and compare them to the distances of the evaluated attacks. Without these numbers the practicality claim cannot be assessed.
minor comments (2)
  1. [§3] Notation for the lower-bound transformation should be introduced with a numbered equation rather than inline text.
  2. [§2] Clarify whether the base anomaly detector is retrained or only smoothed; the current description leaves this ambiguous.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments, which help clarify the presentation of our DTW-certified robustness results. We address each major point below and will revise the manuscript accordingly to strengthen both the theoretical exposition and the experimental evaluation.

Point-by-point responses
  1. Referee: [Abstract and §3] Abstract and §3 (theoretical construction): the central claim rests on a lower-bound transformation DTW(x,y) ≥ c · ||x-y||_p (or equivalent) that converts an ℓ_p randomized-smoothing certificate into a DTW certificate. No explicit form of the constant c, no derivation, and no tightness analysis are supplied, so it is impossible to determine whether the resulting DTW radius is non-trivial for realistic attack strengths.

    Authors: We agree that the explicit form of the constant c, its derivation, and a tightness analysis are necessary for readers to evaluate the strength of the resulting DTW certificate. The lower-bound transformation is introduced in §3, but the full derivation and explicit c were omitted from the initial submission. In the revision we will insert the complete derivation of DTW(x,y) ≥ c · ||x-y||_p (including the precise definition of c), together with a tightness discussion that quantifies how close the bound is to equality on representative time-series data. This addition will make it possible to assess whether the certified DTW radii remain non-vacuous for realistic attack strengths. revision: yes

  2. Referee: [§4] §4 (experiments): the reported F1 improvements are given only under DTW-based attacks; the paper must also report the actual certified DTW radii (or the effective c values) achieved by the smoothed models and compare them to the distances of the evaluated attacks. Without these numbers the practicality claim cannot be assessed.

    Authors: We concur that reporting the certified DTW radii and effective c values is required to substantiate the practicality of the defense. The current experiments focus on F1-score gains under DTW attacks, but do not tabulate the corresponding certified radii. In the revised manuscript we will add tables (or supplementary figures) that list, for each dataset and model, the certified DTW radius obtained via the lower-bound transformation, the effective c realized by the smoothed classifier, and a direct comparison against the DTW distances of the adversarial examples used in the attack evaluation. These numbers will be placed alongside the existing F1 results so that readers can judge both empirical improvement and the magnitude of the certified guarantee. revision: yes
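The "effective c" the rebuttal promises to tabulate can be probed empirically. A toy sketch, not the paper's procedure: sample sequence pairs and take the smallest DTW/ℓ_2 ratio, which caps any constant c for which DTW ≥ c · ℓ_2 holds on those samples.

```python
def dtw(x, y):
    # Compact DTW dynamic program with absolute point-wise cost.
    n, m = len(x), len(y)
    D = [[float("inf")] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = abs(x[i - 1] - y[j - 1])
            D[i][j] = step + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def l2(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

def effective_c(pairs):
    # Minimum over sampled pairs of DTW/l2: any bound DTW >= c * l2 that
    # holds on these pairs must have c no larger than this value, so it
    # caps the DTW radius obtainable from an l2 certificate on this data.
    return min(dtw(x, y) / l2(x, y) for x, y in pairs)
```

On data where warping absorbs most of the ℓ_2 gap this ratio collapses toward zero and the transferred DTW radius becomes vacuous, which is exactly the tightness concern raised in major comment 1.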

Circularity Check

0 steps flagged

No significant circularity; derivation adapts randomized smoothing via independent lower-bound transformation

Full rationale

The paper's central derivation adapts the established randomized-smoothing paradigm (an external framework) to produce DTW certificates by introducing a lower-bound transformation that relates ℓ_p-norm to DTW distance. This bridging step is presented as a novel contribution developed within the work rather than a quantity fitted to the target result or defined in terms of itself. No load-bearing self-citations, uniqueness theorems imported from the authors' prior work, or ansatzes smuggled via citation are indicated in the abstract or described claims. The certificate is not shown to reduce by construction to its inputs; the transformation is an additional mathematical step whose tightness is a separate empirical question. The derivation chain therefore remains self-contained against external benchmarks and does not exhibit any of the enumerated circularity patterns.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The central claim depends on the validity of the lower-bound transformation between l_p and DTW distances and on the standard assumptions of randomized smoothing; these are not expanded in the abstract.

axioms (2)
  • standard math Randomized smoothing yields certified robustness radii under l_p norms for the base classifier
    Standard technique being adapted; invoked implicitly when the authors say they adapt the paradigm.
  • domain assumption A lower-bound transformation exists that relates l_p distance to DTW distance in a way that preserves useful certified radii
    This is the key bridging step stated in the abstract but not derived or justified here.

pith-pipeline@v0.9.0 · 5545 in / 1278 out tokens · 45590 ms · 2026-05-11T03:14:56.890346+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

82 extracted references · 82 canonical work pages · 1 internal anchor

  1. [1]

    Threat of adversarial attacks on deep learning in computer vision: A survey.IEEE Access, 6:14410–14430, 2018

    Naveed Akhtar and Ajmal Mian. Threat of adversarial attacks on deep learning in computer vision: A survey.IEEE Access, 6:14410–14430, 2018

  2. [2]

    Advances in adversarial attacks and defenses in computer vision: A survey.IEEE Access, 9:155161–155196, 2021

    Naveed Akhtar, Ajmal Mian, Navid Kardan, and Mubarak Shah. Advances in adversarial attacks and defenses in computer vision: A survey.IEEE Access, 9:155161–155196, 2021

  3. [3]

    Are labels required for improving adversarial robustness?Advances in Neural Information Processing Systems, 32, 2019

    Jean-Baptiste Alayrac, Jonathan Uesato, Po-Sen Huang, Alhussein Fawzi, Robert Stanforth, and Pushmeet Kohli. Are labels required for improving adversarial robustness?Advances in Neural Information Processing Systems, 32, 2019

  4. [4]

    Julien Audibert, Pietro Michiardi, Frédéric Guyard, Sébastien Marti, and Maria A. Zuluaga. Usad: Unsupervised anomaly detection on multivariate time series. InProceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’20, page 3395–3404, New York, NY , USA, 2020. Association for Computing Machinery

  5. [5]

    Adversarial Framework With Certified Robustness for Time-Series Domain via Statistical Features.Journal of Artificial Intelligence Research, 73:1435–1471, apr 2022

    Taha Belkhouja and Janardhan Rao Doppa. Adversarial Framework With Certified Robustness for Time-Series Domain via Statistical Features.Journal of Artificial Intelligence Research, 73:1435–1471, apr 2022. arXiv:2207.04307 [cs]

  6. [6]

    Dynamic time warping based adversarial framework for time-series domain.IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(6):7353–7366, 2022

    Taha Belkhouja, Yan Yan, and Janardhan Rao Doppa. Dynamic time warping based adversarial framework for time-series domain.IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(6):7353–7366, 2022

  7. [7]

    A review on outlier/anomaly detection in time series data. ACM Computing Surveys (CSUR), 54(3):1–33, 2021

    Ane Blázquez-García, Angel Conde, Usue Mori, and Jose A Lozano. A review on outlier/anomaly detection in time series data. ACM Computing Surveys (CSUR), 54(3):1–33, 2021

  8. [8]

    Adversarially robust industrial anomaly detection through diffusion model.arXiv preprint arXiv:2408.04839, 2024

    Yuanpu Cao, Lu Lin, and Jinghui Chen. Adversarially robust industrial anomaly detection through diffusion model.arXiv preprint arXiv:2408.04839, 2024

  9. [9]

    On Evaluating Adversarial Robustness

    Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, and Alexey Kurakin. On evaluating adversarial robustness.arXiv preprint arXiv:1902.06705, 2019

  10. [10]

    Towards Evaluating the Robustness of Neural Networks

    Nicholas Carlini and David Wagner. Towards Evaluating the Robustness of Neural Networks. In2017 IEEE Symposium on security and privacy (sp), pages 39–57. IEEE, 2017

  11. [11]

    Adversarial Attacks and Defences: A Survey

    Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay. Adversarial attacks and defences: A survey.arXiv preprint arXiv:1810.00069, 2018

  12. [12]

    Adversarial robustness for machine learning. Academic Press, 2022

    Pin-Yu Chen and Cho-Jui Hsieh.Adversarial robustness for machine learning. Academic Press, 2022

  13. [13]

    Deep variational graph convolutional recurrent network for multivariate time series anomaly detection

    Wenchao Chen, Long Tian, Bo Chen, Liang Dai, Zhibin Duan, and Mingyuan Zhou. Deep variational graph convolutional recurrent network for multivariate time series anomaly detection. InInternational conference on machine learning, pages 3621–3633. PMLR, 2022

  14. [14]

    Detection as regression: Certified object detection with median smoothing

    Ping-yeh Chiang, Michael Curry, Ahmed Abdelkader, Aounon Kumar, John Dickerson, and Tom Goldstein. Detection as regression: Certified object detection with median smoothing. Advances in Neural Information Processing Systems, 33:1275–1286, 2020

  15. [15]

    Detection as Regression: Certified Object Detection by Median Smoothing

    Ping-yeh Chiang, Michael J. Curry, Ahmed Abdelkader, Aounon Kumar, John Dickerson, and Tom Goldstein. Detection as Regression: Certified Object Detection by Median Smoothing, feb. arXiv:2007.03730 [cs]

  17. [17]

    Deep learning for anomaly detection in time-series data: Review, analysis, and guidelines.IEEE Access, 9:120043–120065, 2021

    Kukjin Choi, Jihun Yi, Changhwa Park, and Sungroh Yoon. Deep learning for anomaly detection in time-series data: Review, analysis, and guidelines.IEEE Access, 9:120043–120065, 2021

  18. [18]

    Certified adversarial robustness via randomized smoothing

    Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors,Proceedings of the 36th International Conference on Machine Learning, volume 97 ofProceedings of Machine Learning Research, pages 1310–1320. PMLR, 09–15 Jun 2019

  19. [19]

    Certified Adversarial Robustness via Randomized Smoothing

    Jeremy M. Cohen, Elan Rosenfeld, and J. Zico Kolter. Certified Adversarial Robustness via Randomized Smoothing.arXiv:1902.02918 [cs, stat], jun 2019. arXiv: 1902.02918

  20. [20]

    Querying and mining of time series data: experimental comparison of representations and distance measures.Proceedings of the VLDB Endowment, 1(2):1542–1552, 2008

    Hui Ding, Goce Trajcevski, Peter Scheuermann, Xiaoyue Wang, and Eamonn Keogh. Querying and mining of time series data: experimental comparison of representations and distance measures.Proceedings of the VLDB Endowment, 1(2):1542–1552, 2008

  21. [21]

    Adversarial distributional training for robust deep learning.Advances in Neural Information Processing Systems, 33:8270– 8283, 2020

    Yinpeng Dong, Zhijie Deng, Tianyu Pang, Jun Zhu, and Hang Su. Adversarial distributional training for robust deep learning.Advances in Neural Information Processing Systems, 33:8270– 8283, 2020

  22. [22]

    Predicting covid-19 mortality with electronic medical records

    Hossein Estiri, Zachary H Strasser, Jeffy G Klann, Pourandokht Naseri, Kavishwar B Wagholikar, and Shawn N Murphy. Predicting covid-19 mortality with electronic medical records. NPJ digital medicine, 4(1):15, 2021

  23. [23]

    Deep learning with long short-term memory networks for financial market predictions.European Journal of operational research, 270(2):654–669, 2018

    Thomas Fischer and Christopher Krauss. Deep learning with long short-term memory networks for financial market predictions.European Journal of operational research, 270(2):654–669, 2018

  24. [24]

    Diffusion denoised smoothing for certified and adversarial robust out-of-distribution detection.arXiv preprint arXiv:2303.14961, 2023

    Nicola Franco, Daniel Korth, Jeanette Miriam Lorenz, Karsten Roscher, and Stephan Guennemann. Diffusion denoised smoothing for certified and adversarial robust out-of-distribution detection. arXiv preprint arXiv:2303.14961, 2023

  25. [25]

    Non-isometric transforms in time series classification using dtw.Knowledge-based systems, 61:98–108, 2014

    Tomasz Górecki and Maciej Łuczak. Non-isometric transforms in time series classification using dtw.Knowledge-based systems, 61:98–108, 2014

  26. [26]

    Multitask learning and benchmarking with clinical time series data.Scientific data, 6(1):96, 2019

    Hrayr Harutyunyan, Hrant Khachatrian, David C Kale, Greg Ver Steeg, and Aram Galstyan. Multitask learning and benchmarking with clinical time series data.Scientific data, 6(1):96, 2019

  27. [27]

    Boosting randomized smoothing with variance reduced classifiers.arXiv preprint arXiv:2106.06946, 2021

    Miklós Z Horváth, Mark Niklas Müller, Marc Fischer, and Martin Vechev. Boosting randomized smoothing with variance reduced classifiers.arXiv preprint arXiv:2106.06946, 2021

  28. [28]

    Detecting spacecraft anomalies using lstms and nonparametric dynamic thresholding

    Kyle Hundman, Valentino Constantinou, Christopher Laporte, Ian Colwell, and Tom Soderstrom. Detecting spacecraft anomalies using lstms and nonparametric dynamic thresholding. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’18, page 387–395, New York, NY , USA, 2018. Association for Computing Machinery

  29. [29]

    Exact indexing of dynamic time warping

    Eamonn Keogh and Chotirat Ann Ratanamahatana. Exact indexing of dynamic time warping. Knowledge and information systems, 7:358–386, 2005

  30. [30]

    Adaptedge: Targeted universal adversarial attacks on time series data in smart grids.IEEE Transactions on Smart Grid, 2024

    Sultan Uddin Khan, Mohammed Mynuddin, and Mahmoud Nabil. Adaptedge: Targeted universal adversarial attacks on time series data in smart grids.IEEE Transactions on Smart Grid, 2024

  31. [31]

    Revisiting time series outlier detection: Definitions and benchmarks

    Kwei-Herng Lai, Daochen Zha, Junjie Xu, Yue Zhao, Guanchu Wang, and Xia Hu. Revisiting time series outlier detection: Definitions and benchmarks. InThirty-fifth conference on neural information processing systems datasets and benchmarks track (round 1), 2021

  32. [32]

    Perceptual adversarial robustness: Defense against unseen threat models

    Cassidy Laidlaw, Sahil Singla, and Soheil Feizi. Perceptual adversarial robustness: Defense against unseen threat models. arXiv preprint arXiv:2006.12655, 2020

  33. [33]

    Certified Robustness to Adversarial Examples With Differential Privacy

    Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. Certified Robustness to Adversarial Examples With Differential Privacy. arXiv:1802.03471 [cs, stat], may 2019

  34. [34]

    Certified Adversarial Robustness With Additive Noise

    Bai Li, Changyou Chen, Wenlin Wang, and Lawrence Carin. Certified Adversarial Robustness With Additive Noise. arXiv:1809.03113 [cs, stat], nov 2019

  35. [35]

    Multivariate Financial Time-Series Prediction With Certified Robustness. IEEE Access, 8:109133–109143

    Hui Li, Yunpeng Cui, Shuo Wang, Juan Liu, Jinyuan Qin, and Yilin Yang. Multivariate Financial Time-Series Prediction With Certified Robustness. IEEE Access, 8:109133–109143

  37. [37]

    Temporal dynamics clustering for analyzing cell behavior in mobile networks.Computer Networks, 223:109578, 2023

    Shuyang Li, Gianluca Francini, and Enrico Magli. Temporal dynamics clustering for analyzing cell behavior in mobile networks.Computer Networks, 223:109578, 2023

  38. [38]

    Multivariate time series anomaly detection and interpretation using hierarchical inter-metric and temporal embedding

    Zhihan Li, Youjian Zhao, Jiaqi Han, Ya Su, Rui Jiao, Xidao Wen, and Dan Pei. Multivariate time series anomaly detection and interpretation using hierarchical inter-metric and temporal embedding. InProceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining, pages 3220–3230, 2021

  39. [39]

    Time series classification with ensembles of elastic distance measures.Data Mining and Knowledge Discovery, 29:565–592, 2015

    Jason Lines and Anthony Bagnall. Time series classification with ensembles of elastic distance measures.Data Mining and Knowledge Discovery, 29:565–592, 2015

  40. [41]

    Towards Deep Learning Models Resistant to Adversarial Attacks

    Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards Deep Learning Models Resistant to Adversarial Attacks.arXiv:1706.06083 [cs, stat], sep 2019. arXiv: 1706.06083

  41. [42]

    Deepant: A deep learning approach for unsupervised anomaly detection in time series.IEEE Access, 7:1991–2005, 2018

    Mohsin Munir, Shoaib Ahmed Siddiqui, Andreas Dengel, and Sheraz Ahmed. Deepant: A deep learning approach for unsupervised anomaly detection in time series.IEEE Access, 7:1991–2005, 2018

  42. [43]

    Stock market’s price movement prediction with lstm neural networks

    David MQ Nelson, Adriano CM Pereira, and Renato A De Oliveira. Stock market’s price movement prediction with lstm neural networks. In2017 International joint conference on neural networks (IJCNN), pages 1419–1426. IEEE, 2017

  43. [44]

    Optimism in the face of adversity: Understanding and improving deep learning through adversarial robustness.Proceedings of the IEEE, 109(5):635–659, 2021

    Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Optimism in the face of adversity: Understanding and improving deep learning through adversarial robustness.Proceedings of the IEEE, 109(5):635–659, 2021

  44. [45]

    Understanding noise-augmented training for randomized smoothing.arXiv preprint arXiv:2305.04746, 2023

    Ambar Pal and Jeremias Sulam. Understanding noise-augmented training for randomized smoothing.arXiv preprint arXiv:2305.04746, 2023

  45. [46]

    Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks

    Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks. In2016 IEEE Symposium on Security and Privacy (SP), pages 582–597. IEEE, 2016

  46. [47]

    A survey of robust adversarial training in pattern recognition: Fundamental, theory, and methodologies. Pattern Recognition, 131:108889, 2022

    Zhuang Qian, Kaizhu Huang, Qiu-Feng Wang, and Xu-Yao Zhang. A survey of robust adversarial training in pattern recognition: Fundamental, theory, and methodologies. Pattern Recognition, 131:108889, 2022

  47. [48]

    Scalable and accurate deep learning with electronic health records.NPJ digital medicine, 1(1):18, 2018

    Alvin Rajkomar, Eyal Oren, Kai Chen, Andrew M Dai, Nissan Hajaj, Michaela Hardt, Peter J Liu, Xiaobing Liu, Jake Marcus, Mimi Sun, et al. Scalable and accurate deep learning with electronic health records.NPJ digital medicine, 1(1):18, 2018

  48. [49]

    Lower-bounding of dynamic time warping distances for multivariate time series.University of Massachusetts Amherst Technical Report MM, 40:1–4, 2002

    Toni M Rath and R Manmatha. Lower-bounding of dynamic time warping distances for multivariate time series.University of Massachusetts Amherst Technical Report MM, 40:1–4, 2002

  49. [50]

    Smap l4 global 3-hourly 9 km ease-grid surface and root zone soil moisture geophysical data, version 8, 2025

    Rolf Reichle, Gabrielle De Lannoy, Randal Koster, Wade Crow, John Kimball, Qing Liu, and Michel Bechtold. Smap l4 global 3-hourly 9 km ease-grid surface and root zone soil moisture geophysical data, version 8, 2025

  50. [51]

    Time-series anomaly detection service at microsoft

    Hansheng Ren, Bixiong Xu, Yujing Wang, Chao Yi, Congrui Huang, Xiaoyu Kou, Tony Xing, Mao Yang, Jie Tong, and Qi Zhang. Time-series anomaly detection service at microsoft. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, pages 3009–3017, 2019

  51. [52]

    Time-series anomaly detection service at microsoft

    Hansheng Ren, Bixiong Xu, Yujing Wang, Chao Yi, Congrui Huang, Xiaoyu Kou, Tony Xing, Mao Yang, Jie Tong, and Qi Zhang. Time-series anomaly detection service at microsoft. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’19, page 3009–3017, New York, NY , USA, 2019. Association for Computing Machinery

  52. [53]

    Deep one-class classification

    Lukas Ruff, Robert Vandermeulen, Nico Goernitz, Lucas Deecke, Shoaib Ahmed Siddiqui, Alexander Binder, Emmanuel Müller, and Marius Kloft. Deep one-class classification. In Jennifer Dy and Andreas Krause, editors,Proceedings of the 35th International Conference on Machine Learning, volume 80 ofProceedings of Machine Learning Research, pages 4393–4402. PMLR...

  53. [54]

    Dynamic programming algorithm optimization for spoken word recognition.IEEE Transactions on Acoustics, Speech, and Signal Processing, 26(1):43–49, 2003

    Hiroaki Sakoe and Seibi Chiba. Dynamic programming algorithm optimization for spoken word recognition.IEEE Transactions on Acoustics, Speech, and Signal Processing, 26(1):43–49, 2003

  54. [55]

    Provably robust deep learning via adversarially trained smoothed classifiers

    Hadi Salman, Jerry Li, Ilya Razenshteyn, Pengchuan Zhang, Huan Zhang, Sebastien Bubeck, and Greg Yang. Provably robust deep learning via adversarially trained smoothed classifiers. Advances in neural information processing systems, 32, 2019

  55. [56]

    Black-box smoothing: A provable defense for pretrained classifiers.arXiv preprint arXiv:2003.01908, 2(2), 2020

    Hadi Salman, Mingjie Sun, Greg Yang, Ashish Kapoor, and J Zico Kolter. Black-box smoothing: A provable defense for pretrained classifiers.arXiv preprint arXiv:2003.01908, 2(2), 2020

  56. [57]

    Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers.Advances in neural information processing systems, page 31, 2019

    Hadi Salman, Greg Yang, Jerry Li, Pengchuan Zhang, Huan Zhang, Ilya Razenshteyn, and Sébastien Bubeck. Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers.Advances in neural information processing systems, page 31, 2019

  57. [58]

    Time series forecasting model of future spectrum demands for mobile broadband networks in malaysia, turkey, and oman.Alexandria Engineering Journal, 61(10):8051–8067, 2022

    Ibraheem Shayea, Abdulraqeb Alhammadi, Ayman A El-Saleh, Wan Haslina Hassan, Hafizal Mohamad, and Mustafa Ergen. Time series forecasting model of future spectrum demands for mobile broadband networks in malaysia, turkey, and oman.Alexandria Engineering Journal, 61(10):8051–8067, 2022

  58. [59]

    Adversarial examples in constrained domains.arXiv preprint arXiv:2011.01183, 2020

    Ryan Sheatsley, Nicolas Papernot, Michael Weisman, Gunjan Verma, and Patrick McDaniel. Adversarial examples in constrained domains.arXiv preprint arXiv:2011.01183, 2020

  59. [60]

    Timeseries anomaly detection using temporal hierarchical one-class network

    Lifeng Shen, Zhuocong Li, and James Kwok. Timeseries anomaly detection using temporal hierarchical one-class network. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 13016–13026. Curran Associates, Inc., 2020

  60. [61]

    Robust anomaly detection for multivariate time series through stochastic recurrent neural network

    Ya Su, Youjian Zhao, Chenhao Niu, Rong Liu, Wei Sun, and Dan Pei. Robust anomaly detection for multivariate time series through stochastic recurrent neural network. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’19, pages 2828–2837, New York, NY, USA, 2019. Association for Computing Machinery

  61. [62]

    Robust anomaly detection for multivariate time series through stochastic recurrent neural network

    Ya Su, Youjian Zhao, Chenhao Niu, Rong Liu, Wei Sun, and Dan Pei. Robust anomaly detection for multivariate time series through stochastic recurrent neural network. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2828–2837. ACM, 2019

  62. [63]

    Towards an awareness of time series anomaly detection models’ adversarial vulnerability

    Shahroz Tariq, Binh M Le, and Simon S Woo. Towards an awareness of time series anomaly detection models’ adversarial vulnerability. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pages 3534–3544, 2022

  63. [64]

    Disentangled dynamic deviation transformer networks for multivariate time series anomaly detection

    Chunzhi Wang, Shaowen Xing, Rong Gao, Lingyu Yan, Naixue Xiong, and Ruoxi Wang. Disentangled dynamic deviation transformer networks for multivariate time series anomaly detection. Sensors, 23(3):1104, 2023

  64. [65]

    Towards a robust deep neural network in texts: A survey

    Wenqi Wang, Run Wang, Lina Wang, Zhibo Wang, and Aoshuang Ye. Towards a robust deep neural network in texts: A survey. arXiv preprint arXiv:1902.07285, 2019

  65. [66]

    Survey on the application of deep learning in algorithmic trading

    Yongfeng Wang and Guofeng Yan. Survey on the application of deep learning in algorithmic trading. Data Science in Finance and Economics, 1(4):345–361, 2021

  66. [67]

    Invisible adversarial attack against deep neural networks: An adaptive penalization approach

    Zhibo Wang, Mengkai Song, Siyan Zheng, Zhifei Zhang, Yang Song, and Qian Wang. Invisible adversarial attack against deep neural networks: An adaptive penalization approach. IEEE Transactions on Dependable and Secure Computing, 18(3):1474–1488, 2019

  67. [68]

    Defenses in adversarial machine learning: A survey

    Baoyuan Wu, Shaokui Wei, Mingli Zhu, Meixi Zheng, Zihao Zhu, Mingda Zhang, Hongrui Chen, Danni Yuan, Li Liu, and Qingshan Liu. Defenses in adversarial machine learning: A survey. arXiv preprint arXiv:2312.08890, 2023

  68. [69]

    TimesNet: Temporal 2D-variation modeling for general time series analysis

    Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, and Mingsheng Long. TimesNet: Temporal 2D-variation modeling for general time series analysis. In International Conference on Learning Representations, 2023

  69. [70]

    Current time series anomaly detection benchmarks are flawed and are creating the illusion of progress

    Renjie Wu and Eamonn J Keogh. Current time series anomaly detection benchmarks are flawed and are creating the illusion of progress. IEEE Transactions on Knowledge and Data Engineering, 35(3):2421–2429, 2021

  70. [71]

    Small perturbations are enough: Adversarial attacks on time series prediction

    Tao Wu, Xuechun Wang, Shaojie Qiao, Xingping Xian, Yanbing Liu, and Liang Zhang. Small perturbations are enough: Adversarial attacks on time series prediction. Information Sciences, 587:794–812, 2022

  71. [72]

    Big data driven mobile traffic understanding and forecasting: A time series approach

    Fengli Xu, Yuyun Lin, Jiaxin Huang, Di Wu, Hongzhi Shi, Jeungeun Song, and Yong Li. Big data driven mobile traffic understanding and forecasting: A time series approach. IEEE Transactions on Services Computing, 9(5):796–805, 2016

  72. [73]

    Calibrated one-class classification for unsupervised time series anomaly detection

    Hongzuo Xu, Yijie Wang, Songlei Jian, Qing Liao, Yongjun Wang, and Guansong Pang. Calibrated one-class classification for unsupervised time series anomaly detection. arXiv preprint arXiv:2207.12201, 2022

  73. [74]

    Randomized Smoothing of All Shapes and Sizes

    Greg Yang, Tony Duan, J. Edward Hu, Hadi Salman, Ilya Razenshteyn, and Jerry Li. Randomized Smoothing of All Shapes and Sizes. In Proceedings of the 37th International Conference on Machine Learning, pages 10693–10705. PMLR, November 2020. ISSN: 2640-3498

  74. [75]

    Randomized smoothing of all shapes and sizes

    Greg Yang, Tony Duan, J Edward Hu, Hadi Salman, Ilya Razenshteyn, and Jerry Li. Randomized smoothing of all shapes and sizes. InInternational Conference on Machine Learning, pages 10693–10705. PMLR, 2020

  75. [76]

    TSadv: Black-box adversarial attack on time series with local perturbations

    Wenbo Yang, Jidong Yuan, Xiaokang Wang, and Peixiang Zhao. TSadv: Black-box adversarial attack on time series with local perturbations. Engineering Applications of Artificial Intelligence, 114:105218, 2022

  76. [77]

    Automated discovery of adaptive attacks on adversarial defenses

    Chengyuan Yao, Pavol Bielik, Petar Tsankov, and Martin Vechev. Automated discovery of adaptive attacks on adversarial defenses. Advances in Neural Information Processing Systems, 34:26858–26870, 2021

  77. [78]

    Adversarial attacks on deep-learning models in natural language processing: A survey

    Wei Emma Zhang, Quan Z Sheng, Ahoud Alhazmi, and Chenliang Li. Adversarial attacks on deep-learning models in natural language processing: A survey. ACM Transactions on Intelligent Systems and Technology (TIST), 11(3):1–41, 2020

