Fortifying Time Series: DTW-Certified Robust Anomaly Detection
Recognition: 2 Lean theorem links
Pith review 2026-05-11 03:14 UTC · model grok-4.3
The pith
Time-series anomaly detection gains its first certified robustness guarantee under Dynamic Time Warping, obtained by adapting randomized smoothing through a lower-bound transformation from l_p norms.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
We introduce the first DTW-certified robust defense in time-series anomaly detection by adapting the randomized smoothing paradigm. We develop this certificate by bridging the l_p-norm to DTW distance through a lower-bound transformation. Extensive experiments across various datasets and models validate the effectiveness and practicality of our theoretical approach.
What carries the argument
The lower-bound transformation from l_p-norm perturbations to Dynamic Time Warping (DTW) distances that enables the construction of certified radii for randomized smoothing in the DTW metric.
If this is right
- Certified radii can now be provided for time-series anomaly models against DTW-based attacks rather than only l_p attacks.
- Models achieve improved robustness, with up to 18.7% higher F1-scores under DTW adversarial attacks compared to traditional certified models.
- The defense applies to various time-series datasets and underlying detection models.
- Practical computation of certificates is possible without optimizing directly in the DTW space.
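The certificate the paper adapts is the standard randomized-smoothing radius of Cohen et al. (reference [18] below). A minimal sketch of that ℓ2 radius, which the DTW certificate would then convert via the lower bound, might look as follows; the function name and the binary-classification setting are illustrative assumptions, not the paper's code:

```python
# Minimal sketch of the standard randomized-smoothing l2 certificate
# (Cohen et al., 2019) that the paper adapts; names are illustrative.
from statistics import NormalDist

def certified_l2_radius(p_a_lower: float, sigma: float) -> float:
    """Certified l2 radius of a smoothed classifier at one input.

    p_a_lower: lower confidence bound on the top class's probability
               under Gaussian noise N(0, sigma^2 I); must exceed 0.5.
    sigma:     standard deviation of the smoothing noise.
    """
    if p_a_lower <= 0.5:
        return 0.0  # abstain: no certificate at this input
    return sigma * NormalDist().inv_cdf(p_a_lower)

# e.g. sigma = 0.5 and p_a_lower = 0.9 give a radius of about 0.64
radius = certified_l2_radius(0.9, 0.5)
```

The radius grows with both the noise level and the confidence margin, which is why the tightness of the ℓ_p-to-DTW conversion matters: any slack in the lower bound shrinks the usable DTW radius.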
Where Pith is reading between the lines
- This method could be adapted for other temporal machine learning tasks such as classification or forecasting that rely on DTW similarity.
- Future work might focus on deriving tighter lower bounds to increase the size of usable certified radii.
- Combining this DTW certificate with domain-specific constraints on time series could further strengthen guarantees against realistic attacks.
Load-bearing premise
The lower-bound transformation from l_p-norm to DTW distance yields a sufficiently tight certificate for practical models and attack strengths.
What would settle it
A demonstration that a smoothed model can be fooled by a DTW perturbation whose size is within the certified radius computed via the l_p-to-DTW lower bound would falsify the certificate.
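That falsification test can be written down directly. A hypothetical harness, where the predictor, DTW routine, and radius are all stand-ins rather than artifacts from the paper:

```python
# Hypothetical falsification harness for the certificate: a DTW
# perturbation inside the certified radius that flips the smoothed
# prediction would break the guarantee.  All arguments are stand-ins
# for the paper's actual model and metric.
def certificate_falsified(smoothed_predict, dtw_dist, x, x_adv, radius):
    inside = dtw_dist(x, x_adv) <= radius
    flipped = smoothed_predict(x_adv) != smoothed_predict(x)
    return inside and flipped

# Toy check: a perturbation inside the radius that does NOT flip the
# prediction is consistent with the certificate, so this is False.
ok = certificate_falsified(lambda z: 0, lambda a, b: 0.1, None, None, 0.5)
```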
read the original abstract
Time-series anomaly detection is critical for ensuring safety in high-stakes applications, where robustness is a fundamental requirement rather than a mere performance metric. Addressing the vulnerability of these systems to adversarial manipulation is therefore essential. Existing defenses are largely heuristic or provide certified robustness only under $\ell_p$-norm constraints, which are incompatible with time-series data. In particular, $\ell_p$-norm fails to capture the intrinsic temporal structure in time series, causing small temporal distortions to significantly alter the $\ell_p$-norm measures. Instead, the similarity metric \emph{Dynamic Time Warping} (DTW) is more suitable and widely adopted in the time-series domain, as DTW accounts for temporal alignment and remains robust to temporal variations. To date, however, there has been no certifiable robustness result in this metric that provides guarantees. In this work, we introduce the first \emph{DTW-certified robust defense} in time-series anomaly detection by adapting the randomized smoothing paradigm. We develop this certificate by bridging the $\ell_p$-norm to DTW distance through a lower-bound transformation. Extensive experiments across various datasets and models validate the effectiveness and practicality of our theoretical approach. Results demonstrate significantly improved performance, e.g., up to 18.7\% in F1-score under DTW-based adversarial attacks compared to traditional certified models.
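The abstract's contrast between ℓ_p and DTW is easy to see concretely. A textbook dynamic-programming DTW with absolute-difference local cost (one common variant; the paper's exact cost and band constraints are not specified here) shows a one-step time shift that is large in ℓ1 but invisible to DTW:

```python
import math

# Textbook dynamic-programming DTW with absolute-difference local cost
# (a common variant; the paper's exact cost and constraints may differ).
def dtw(x, y):
    n, m = len(x), len(y)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # step in x only
                                 D[i][j - 1],      # step in y only
                                 D[i - 1][j - 1])  # diagonal match
    return D[n][m]

# A spike shifted by one step: the l1 distance is 2, but DTW aligns
# the spikes and reports 0, matching the abstract's argument.
x = [0, 0, 1, 0, 0]
y = [0, 0, 0, 1, 0]
l1 = sum(abs(a - b) for a, b in zip(x, y))  # 2
d = dtw(x, y)                               # 0.0
```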
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper claims to introduce the first DTW-certified robust defense for time-series anomaly detection by adapting the randomized smoothing framework and bridging the ℓ_p-norm to DTW distance via a lower-bound transformation. It reports empirical gains of up to 18.7% F1-score under DTW-based adversarial attacks across datasets and models.
Significance. A sound, non-vacuous DTW certificate would address a genuine gap, since ℓ_p-norm certificates are known to be mismatched to temporal data and DTW is the de facto similarity measure in the domain. The randomized-smoothing adaptation is a standard and reusable technique; if the lower-bound transformation yields usable radii and the experiments include certified-radius tables, the contribution would be solid.
major comments (2)
- [Abstract and §3] Abstract and §3 (theoretical construction): the central claim rests on a lower-bound transformation DTW(x,y) ≥ c · ||x-y||_p (or equivalent) that converts an ℓ_p randomized-smoothing certificate into a DTW certificate. No explicit form of the constant c, no derivation, and no tightness analysis are supplied, so it is impossible to determine whether the resulting DTW radius is non-trivial for realistic attack strengths.
- [§4] §4 (experiments): the reported F1 improvements are given only under DTW-based attacks; the paper must also report the actual certified DTW radii (or the effective c values) achieved by the smoothed models and compare them to the distances of the evaluated attacks. Without these numbers the practicality claim cannot be assessed.
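The missing step the first comment asks for can at least be stated precisely. Reconstructing from the quantity e = inf{LB(x, x′) | ∥x−x′∥ > r} quoted elsewhere on this page (Lemma 3.2, Theorem 3.3), a hedged sketch of the intended conversion, not the paper's proof, is:

```latex
% Let LB be any lower bound on DTW, i.e. LB(x,x') \le \mathrm{DTW}(x,x'),
% and let r be the certified \ell_p radius of the smoothed model at x.
\epsilon \;=\; \inf\bigl\{\, \mathrm{LB}(x,x') \;:\; \|x-x'\|_p > r \,\bigr\}
% Then \mathrm{DTW}(x,x') < \epsilon implies \mathrm{LB}(x,x') < \epsilon,
% hence \|x-x'\|_p \le r, so the \ell_p certificate applies and \epsilon
% is a certified DTW radius.  What remains unspecified is an explicit,
% computable form for \epsilon (or an equivalent constant c) and its
% tightness, which is exactly the referee's objection.
```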
minor comments (2)
- [§3] Notation for the lower-bound transformation should be introduced with a numbered equation rather than inline text.
- [§2] Clarify whether the base anomaly detector is retrained or only smoothed; the current description leaves this ambiguous.
Simulated Author's Rebuttal
We thank the referee for the constructive comments, which help clarify the presentation of our DTW-certified robustness results. We address each major point below and will revise the manuscript accordingly to strengthen both the theoretical exposition and the experimental evaluation.
read point-by-point responses
-
Referee: [Abstract and §3] Abstract and §3 (theoretical construction): the central claim rests on a lower-bound transformation DTW(x,y) ≥ c · ||x-y||_p (or equivalent) that converts an ℓ_p randomized-smoothing certificate into a DTW certificate. No explicit form of the constant c, no derivation, and no tightness analysis are supplied, so it is impossible to determine whether the resulting DTW radius is non-trivial for realistic attack strengths.
Authors: We agree that the explicit form of the constant c, its derivation, and a tightness analysis are necessary for readers to evaluate the strength of the resulting DTW certificate. The lower-bound transformation is introduced in §3, but the full derivation and explicit c were omitted from the initial submission. In the revision we will insert the complete derivation of DTW(x,y) ≥ c · ||x-y||_p (including the precise definition of c), together with a tightness discussion that quantifies how close the bound is to equality on representative time-series data. This addition will make it possible to assess whether the certified DTW radii remain non-vacuous for realistic attack strengths. revision: yes
-
Referee: [§4] §4 (experiments): the reported F1 improvements are given only under DTW-based attacks; the paper must also report the actual certified DTW radii (or the effective c values) achieved by the smoothed models and compare them to the distances of the evaluated attacks. Without these numbers the practicality claim cannot be assessed.
Authors: We concur that reporting the certified DTW radii and effective c values is required to substantiate the practicality of the defense. The current experiments focus on F1-score gains under DTW attacks, but do not tabulate the corresponding certified radii. In the revised manuscript we will add tables (or supplementary figures) that list, for each dataset and model, the certified DTW radius obtained via the lower-bound transformation, the effective c realized by the smoothed classifier, and a direct comparison against the DTW distances of the adversarial examples used in the attack evaluation. These numbers will be placed alongside the existing F1 results so that readers can judge both empirical improvement and the magnitude of the certified guarantee. revision: yes
Circularity Check
No significant circularity; derivation adapts randomized smoothing via independent lower-bound transformation
full rationale
The paper's central derivation adapts the established randomized-smoothing paradigm (an external framework) to produce DTW certificates by introducing a lower-bound transformation that relates ℓ_p-norm to DTW distance. This bridging step is presented as a novel contribution developed within the work rather than a quantity fitted to the target result or defined in terms of itself. No load-bearing self-citations, uniqueness theorems imported from the authors' prior work, or ansatzes smuggled via citation are indicated in the abstract or described claims. The certificate is not shown to reduce by construction to its inputs; the transformation is an additional mathematical step whose tightness is a separate empirical question. The derivation chain therefore remains self-contained against external benchmarks and does not exhibit any of the enumerated circularity patterns.
Axiom & Free-Parameter Ledger
axioms (2)
- standard math: Randomized smoothing yields certified robustness radii under l_p norms for the base classifier.
- domain assumption: A lower-bound transformation exists that relates l_p distance to DTW distance in a way that preserves useful certified radii.
Lean theorems connected to this paper
-
IndisputableMonolith/Cost/FunctionalEquation · washburn_uniqueness_aczel · unclear
Relation between the paper passage and the cited Recognition theorem is unclear.
"We develop this certificate by bridging the ℓ_p-norm to DTW distance through a lower-bound transformation... e = inf{LB(x, x′) | ∥x−x′∥ > r}" (Lemma 3.2, Theorem 3.3)
-
IndisputableMonolith/Foundation/RealityFromDistinction · reality_from_one_distinction · unclear
Relation between the paper passage and the cited Recognition theorem is unclear.
"We introduce the first DTW-certified robust defense... by adapting the randomized smoothing paradigm"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
-
[1]
Naveed Akhtar and Ajmal Mian. Threat of adversarial attacks on deep learning in computer vision: A survey.IEEE Access, 6:14410–14430, 2018
work page 2018
-
[2]
Naveed Akhtar, Ajmal Mian, Navid Kardan, and Mubarak Shah. Advances in adversarial attacks and defenses in computer vision: A survey.IEEE Access, 9:155161–155196, 2021
work page 2021
-
[3]
Jean-Baptiste Alayrac, Jonathan Uesato, Po-Sen Huang, Alhussein Fawzi, Robert Stanforth, and Pushmeet Kohli. Are labels required for improving adversarial robustness?Advances in Neural Information Processing Systems, 32, 2019
work page 2019
-
[4]
Julien Audibert, Pietro Michiardi, Frédéric Guyard, Sébastien Marti, and Maria A. Zuluaga. Usad: Unsupervised anomaly detection on multivariate time series. InProceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’20, page 3395–3404, New York, NY , USA, 2020. Association for Computing Machinery
work page 2020
-
[5]
Taha Belkhouja and Janardhan Rao Doppa. Adversarial Framework With Certified Robustness for Time-Series Domain via Statistical Features.Journal of Artificial Intelligence Research, 73:1435–1471, apr 2022. arXiv:2207.04307 [cs]
-
[6]
Taha Belkhouja, Yan Yan, and Janardhan Rao Doppa. Dynamic time warping based adversarial framework for time-series domain.IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(6):7353–7366, 2022
work page 2022
-
[7]
Ane Blázquez-García, Angel Conde, Usue Mori, and Jose A Lozano. A review on outlier/anomaly detection in time series data. ACM Computing Surveys (CSUR), 54(3):1–33, 2021
work page 2021
-
[8]
Yuanpu Cao, Lu Lin, and Jinghui Chen. Adversarially robust industrial anomaly detection through diffusion model.arXiv preprint arXiv:2408.04839, 2024
-
[9]
On Evaluating Adversarial Robustness
Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, and Alexey Kurakin. On evaluating adversarial robustness.arXiv preprint arXiv:1902.06705, 2019
-
[10]
Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini and David Wagner. Towards Evaluating the Robustness of Neural Networks. In2017 IEEE Symposium on security and privacy (sp), pages 39–57. IEEE, 2017
work page 2017
-
[11]
Adversarial Attacks and Defences: A Survey
Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay. Adversarial attacks and defences: A survey.arXiv preprint arXiv:1810.00069, 2018
work page Pith review arXiv 2018
-
[12]
Pin-Yu Chen and Cho-Jui Hsieh.Adversarial robustness for machine learning. Academic Press, 2022
work page 2022
-
[13]
Wenchao Chen, Long Tian, Bo Chen, Liang Dai, Zhibin Duan, and Mingyuan Zhou. Deep variational graph convolutional recurrent network for multivariate time series anomaly detection. InInternational conference on machine learning, pages 3621–3633. PMLR, 2022
work page 2022
-
[14]
Detection as regression: Certified object detection with median smoothing
Ping-yeh Chiang, Michael Curry, Ahmed Abdelkader, Aounon Kumar, John Dickerson, and Tom Goldstein. Detection as regression: Certified object detection with median smoothing. Advances in Neural Information Processing Systems, 33:1275–1286, 2020
work page 2020
-
[15]
Detection as Regression: Certified Object Detection by Median Smoothing
Ping-yeh Chiang, Michael J. Curry, Ahmed Abdelkader, Aounon Kumar, John Dickerson, and Tom Goldstein. Detection as Regression: Certified Object Detection by Median Smoothing, feb
- [16]
-
[17]
Kukjin Choi, Jihun Yi, Changhwa Park, and Sungroh Yoon. Deep learning for anomaly detection in time-series data: Review, analysis, and guidelines.IEEE Access, 9:120043–120065, 2021
work page 2021
-
[18]
Certified adversarial robustness via randomized smoothing
Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. Certified adversarial robustness via randomized smoothing. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors,Proceedings of the 36th International Conference on Machine Learning, volume 97 ofProceedings of Machine Learning Research, pages 1310–1320. PMLR, 09–15 Jun 2019
work page 2019
-
[19]
Certified Adversarial Robustness via Randomized Smoothing
Jeremy M. Cohen, Elan Rosenfeld, and J. Zico Kolter. Certified Adversarial Robustness via Randomized Smoothing.arXiv:1902.02918 [cs, stat], jun 2019. arXiv: 1902.02918
work page Pith review arXiv 1902
-
[20]
Hui Ding, Goce Trajcevski, Peter Scheuermann, Xiaoyue Wang, and Eamonn Keogh. Querying and mining of time series data: experimental comparison of representations and distance measures.Proceedings of the VLDB Endowment, 1(2):1542–1552, 2008
work page 2008
-
[21]
Yinpeng Dong, Zhijie Deng, Tianyu Pang, Jun Zhu, and Hang Su. Adversarial distributional training for robust deep learning.Advances in Neural Information Processing Systems, 33:8270– 8283, 2020
work page 2020
-
[22]
Predicting covid-19 mortality with electronic medical records
Hossein Estiri, Zachary H Strasser, Jeffy G Klann, Pourandokht Naseri, Kavishwar B Wagholikar, and Shawn N Murphy. Predicting covid-19 mortality with electronic medical records. NPJ digital medicine, 4(1):15, 2021
work page 2021
-
[23]
Thomas Fischer and Christopher Krauss. Deep learning with long short-term memory networks for financial market predictions.European Journal of operational research, 270(2):654–669, 2018
work page 2018
-
[24]
Nicola Franco, Daniel Korth, Jeanette Miriam Lorenz, Karsten Roscher, and Stephan Guenne- mann. Diffusion denoised smoothing for certified and adversarial robust out-of-distribution detection.arXiv preprint arXiv:2303.14961, 2023
-
[25]
Tomasz Górecki and Maciej Łuczak. Non-isometric transforms in time series classification using dtw.Knowledge-based systems, 61:98–108, 2014
work page 2014
-
[26]
Multitask learning and benchmarking with clinical time series data
Hrayr Harutyunyan, Hrant Khachatrian, David C Kale, Greg Ver Steeg, and Aram Galstyan. Multitask learning and benchmarking with clinical time series data.Scientific data, 6(1):96, 2019
work page 2019
-
[27]
Miklós Z Horváth, Mark Niklas Müller, Marc Fischer, and Martin Vechev. Boosting randomized smoothing with variance reduced classifiers.arXiv preprint arXiv:2106.06946, 2021
-
[28]
Detecting spacecraft anomalies using lstms and nonparametric dynamic thresholding
Kyle Hundman, Valentino Constantinou, Christopher Laporte, Ian Colwell, and Tom Soderstrom. Detecting spacecraft anomalies using lstms and nonparametric dynamic thresholding. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’18, page 387–395, New York, NY , USA, 2018. Association for Computing Machinery
work page 2018
-
[29]
Exact indexing of dynamic time warping
Eamonn Keogh and Chotirat Ann Ratanamahatana. Exact indexing of dynamic time warping. Knowledge and information systems, 7:358–386, 2005
work page 2005
-
[30]
Sultan Uddin Khan, Mohammed Mynuddin, and Mahmoud Nabil. Adaptedge: Targeted universal adversarial attacks on time series data in smart grids.IEEE Transactions on Smart Grid, 2024
work page 2024
-
[31]
Revisiting time series outlier detection: Definitions and benchmarks
Kwei-Herng Lai, Daochen Zha, Junjie Xu, Yue Zhao, Guanchu Wang, and Xia Hu. Revisiting time series outlier detection: Definitions and benchmarks. InThirty-fifth conference on neural information processing systems datasets and benchmarks track (round 1), 2021
work page 2021
-
[32]
Cassidy Laidlaw, Sahil Singla, and Soheil Feizi. Perceptual adversarial robustness: Defense against unseen threat models.arXiv preprint arXiv:2006.12655, 2020
-
[33]
Certified Robustness to Adversarial Examples With Differential Privacy
Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. Certified Robustness to Adversarial Examples With Differential Privacy. arXiv:1802.03471 [cs, stat], may 2019. arXiv: 1802.03471
-
[34]
Certified Adversarial Robustness With Additive Noise
Bai Li, Changyou Chen, Wenlin Wang, and Lawrence Carin. Certified Adversarial Robustness With Additive Noise.arXiv:1809.03113 [cs, stat], nov 2019. arXiv: 1809.03113
-
[35]
Hui Li, Yunpeng Cui, Shuo Wang, Juan Liu, Jinyuan Qin, and Yilin Yang. Multivariate Financial Time-Series Prediction With Certified Robustness. IEEE Access, 8:109133–109143
-
[37]
Shuyang Li, Gianluca Francini, and Enrico Magli. Temporal dynamics clustering for analyzing cell behavior in mobile networks.Computer Networks, 223:109578, 2023
work page 2023
-
[38]
Zhihan Li, Youjian Zhao, Jiaqi Han, Ya Su, Rui Jiao, Xidao Wen, and Dan Pei. Multivariate time series anomaly detection and interpretation using hierarchical inter-metric and temporal embedding. InProceedings of the 27th ACM SIGKDD conference on knowledge discovery & data mining, pages 3220–3230, 2021
work page 2021
-
[39]
Jason Lines and Anthony Bagnall. Time series classification with ensembles of elastic distance measures.Data Mining and Knowledge Discovery, 29:565–592, 2015
work page 2015
-
[41]
Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards Deep Learning Models Resistant to Adversarial Attacks.arXiv:1706.06083 [cs, stat], sep 2019. arXiv: 1706.06083
work page internal anchor Pith review arXiv 2019
-
[42]
Mohsin Munir, Shoaib Ahmed Siddiqui, Andreas Dengel, and Sheraz Ahmed. Deepant: A deep learning approach for unsupervised anomaly detection in time series.IEEE Access, 7:1991–2005, 2018
work page 2018
-
[43]
Stock market’s price movement prediction with lstm neural networks
David MQ Nelson, Adriano CM Pereira, and Renato A De Oliveira. Stock market’s price movement prediction with lstm neural networks. In2017 International joint conference on neural networks (IJCNN), pages 1419–1426. IEEE, 2017
work page 2017
-
[44]
Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Optimism in the face of adversity: Understanding and improving deep learning through adversarial robustness.Proceedings of the IEEE, 109(5):635–659, 2021
work page 2021
-
[45]
Ambar Pal and Jeremias Sulam. Understanding noise-augmented training for randomized smoothing.arXiv preprint arXiv:2305.04746, 2023
-
[46]
Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks
Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks. In2016 IEEE Symposium on Security and Privacy (SP), pages 582–597. IEEE, 2016
work page 2016
-
[47]
Zhuang Qian, Kaizhu Huang, Qiu-Feng Wang, and Xu-Yao Zhang. A survey of robust adversarial training in pattern recognition: Fundamental, theory, and methodologies. Pattern Recognition, 131:108889, 2022
work page 2022
-
[48]
Alvin Rajkomar, Eyal Oren, Kai Chen, Andrew M Dai, Nissan Hajaj, Michaela Hardt, Peter J Liu, Xiaobing Liu, Jake Marcus, Mimi Sun, et al. Scalable and accurate deep learning with electronic health records.NPJ digital medicine, 1(1):18, 2018
work page 2018
-
[49]
Toni M Rath and R Manmatha. Lower-bounding of dynamic time warping distances for multivariate time series.University of Massachusetts Amherst Technical Report MM, 40:1–4, 2002
work page 2002
-
[50]
Rolf Reichle, Gabrielle De Lannoy, Randal Koster, Wade Crow, John Kimball, Qing Liu, and Michel Bechtold. Smap l4 global 3-hourly 9 km ease-grid surface and root zone soil moisture geophysical data, version 8, 2025
work page 2025
-
[51]
Time-series anomaly detection service at microsoft
Hansheng Ren, Bixiong Xu, Yujing Wang, Chao Yi, Congrui Huang, Xiaoyu Kou, Tony Xing, Mao Yang, Jie Tong, and Qi Zhang. Time-series anomaly detection service at microsoft. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, pages 3009–3017, 2019
work page 2019
-
[52]
Time-series anomaly detection service at microsoft
Hansheng Ren, Bixiong Xu, Yujing Wang, Chao Yi, Congrui Huang, Xiaoyu Kou, Tony Xing, Mao Yang, Jie Tong, and Qi Zhang. Time-series anomaly detection service at microsoft. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’19, page 3009–3017, New York, NY , USA, 2019. Association for Computing Machinery
work page 2019
-
[53]
Lukas Ruff, Robert Vandermeulen, Nico Goernitz, Lucas Deecke, Shoaib Ahmed Siddiqui, Alexander Binder, Emmanuel Müller, and Marius Kloft. Deep one-class classification. In Jennifer Dy and Andreas Krause, editors,Proceedings of the 35th International Conference on Machine Learning, volume 80 ofProceedings of Machine Learning Research, pages 4393–4402. PMLR...
work page 2018
-
[54]
Hiroaki Sakoe and Seibi Chiba. Dynamic programming algorithm optimization for spoken word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing, 26(1):43–49, 1978
work page 1978
-
[55]
Provably robust deep learning via adversarially trained smoothed classifiers
Hadi Salman, Jerry Li, Ilya Razenshteyn, Pengchuan Zhang, Huan Zhang, Sebastien Bubeck, and Greg Yang. Provably robust deep learning via adversarially trained smoothed classifiers. Advances in neural information processing systems, 32, 2019
work page 2019
-
[56]
Hadi Salman, Mingjie Sun, Greg Yang, Ashish Kapoor, and J Zico Kolter. Black-box smoothing: A provable defense for pretrained classifiers.arXiv preprint arXiv:2003.01908, 2(2), 2020
-
[57]
Hadi Salman, Greg Yang, Jerry Li, Pengchuan Zhang, Huan Zhang, Ilya Razenshteyn, and Sébastien Bubeck. Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers.Advances in neural information processing systems, page 31, 2019
work page 2019
-
[58]
Ibraheem Shayea, Abdulraqeb Alhammadi, Ayman A El-Saleh, Wan Haslina Hassan, Hafizal Mohamad, and Mustafa Ergen. Time series forecasting model of future spectrum demands for mobile broadband networks in malaysia, turkey, and oman.Alexandria Engineering Journal, 61(10):8051–8067, 2022
work page 2022
-
[59]
Adversarial examples in constrained domains
Ryan Sheatsley, Nicolas Papernot, Michael Weisman, Gunjan Verma, and Patrick McDaniel. Adversarial examples in constrained domains.arXiv preprint arXiv:2011.01183, 2020
-
[60]
Timeseries anomaly detection using temporal hierarchical one-class network
Lifeng Shen, Zhuocong Li, and James Kwok. Timeseries anomaly detection using temporal hierarchical one-class network. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors,Advances in Neural Information Processing Systems, volume 33, pages 13016– 13026. Curran Associates, Inc., 2020
work page 2020
-
[61]
Robust anomaly detection for multivariate time series through stochastic recurrent neural network
Ya Su, Youjian Zhao, Chenhao Niu, Rong Liu, Wei Sun, and Dan Pei. Robust anomaly detection for multivariate time series through stochastic recurrent neural network. InProceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’19, page 2828–2837, New York, NY , USA, 2019. Association for Computing Machinery
work page 2019
-
[62]
Robust anomaly detection for multivariate time series through stochastic recurrent neural network
Yixin Su, Yongxin Zhao, Chao Niu, Rong Liu, Weijie Sun, and Jian Pei. Robust anomaly detection for multivariate time series through stochastic recurrent neural network. InProceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2828–2837. ACM, 2019
work page 2019
-
[63]
Towards an awareness of time series anomaly detection models’ adversarial vulnerability
Shahroz Tariq, Binh M Le, and Simon S Woo. Towards an awareness of time series anomaly detection models’ adversarial vulnerability. InProceedings of the 31st ACM International Conference on Information & Knowledge Management, pages 3534–3544, 2022
work page 2022
-
[64]
Chunzhi Wang, Shaowen Xing, Rong Gao, Lingyu Yan, Naixue Xiong, and Ruoxi Wang. Disentangled dynamic deviation transformer networks for multivariate time series anomaly detection.Sensors, 23(3):1104, 2023
work page 2023
-
[65]
Towards a robust deep neural network in texts: A survey
Wenqi Wang, Run Wang, Lina Wang, Zhibo Wang, and Aoshuang Ye. Towards a robust deep neural network in texts: A survey.arXiv preprint arXiv:1902.07285, 2019
-
[66]
Yongfeng Wang and Guofeng Yan. Survey on the application of deep learning in algorithmic trading. Data Science in Finance and Economics, 1(4):345–361, 2021
work page 2021
-
[67]
Zhibo Wang, Mengkai Song, Siyan Zheng, Zhifei Zhang, Yang Song, and Qian Wang. Invisible adversarial attack against deep neural networks: An adaptive penalization approach.IEEE Transactions on Dependable and Secure Computing, 18(3):1474–1488, 2019
work page 2019
-
[68]
Defenses in adversarial machine learning: A survey
Baoyuan Wu, Shaokui Wei, Mingli Zhu, Meixi Zheng, Zihao Zhu, Mingda Zhang, Hongrui Chen, Danni Yuan, Li Liu, and Qingshan Liu. Defenses in adversarial machine learning: A survey.arXiv preprint arXiv:2312.08890, 2023
-
[69]
Timesnet: Temporal 2d-variation modeling for general time series analysis
Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, and Mingsheng Long. Timesnet: Temporal 2d-variation modeling for general time series analysis. InInternational Conference on Learning Representations, 2023
work page 2023
-
[70]
Renjie Wu and Eamonn J Keogh. Current time series anomaly detection benchmarks are flawed and are creating the illusion of progress.IEEE Transactions on Knowledge and Data Engineering, 35(3):2421–2429, 2021
work page 2021
-
[71]
Tao Wu, Xuechun Wang, Shaojie Qiao, Xingping Xian, Yanbing Liu, and Liang Zhang. Small perturbations are enough: Adversarial attacks on time series prediction.Information Sciences, 587:794–812, 2022
work page 2022
-
[72]
Fengli Xu, Yuyun Lin, Jiaxin Huang, Di Wu, Hongzhi Shi, Jeungeun Song, and Yong Li. Big data driven mobile traffic understanding and forecasting: A time series approach.IEEE Transactions on Services Computing, 9(5):796–805, 2016
work page 2016
-
[73]
Hongzuo Xu, Yijie Wang, Songlei Jian, Qing Liao, Yongjun Wang, and Guansong Pang. Calibrated one-class classification for unsupervised time series anomaly detection.arXiv preprint arXiv:2207.12201, 2022
-
[74]
Randomized Smoothing of All Shapes and Sizes
Greg Yang, Tony Duan, J. Edward Hu, Hadi Salman, Ilya Razenshteyn, and Jerry Li. Randomized Smoothing of All Shapes and Sizes. In Proceedings of the 37th International Conference on Machine Learning, pages 10693–10705. PMLR, nov 2020. Issn: 2640-3498
work page 2020
-
[75]
Randomized smoothing of all shapes and sizes
Greg Yang, Tony Duan, J Edward Hu, Hadi Salman, Ilya Razenshteyn, and Jerry Li. Randomized smoothing of all shapes and sizes. InInternational Conference on Machine Learning, pages 10693–10705. PMLR, 2020
work page 2020
-
[76]
Wenbo Yang, Jidong Yuan, Xiaokang Wang, and Peixiang Zhao. Tsadv: Black-box adversarial attack on time series with local perturbations.Engineering Applications of Artificial Intelligence, 114:105218, 2022
work page 2022
-
[77]
Chengyuan Yao, Pavol Bielik, Petar Tsankov, and Martin Vechev. Automated discovery of adaptive attacks on adversarial defenses.Advances in Neural Information Processing Systems, 34:26858–26870, 2021
work page 2021
-
[78]
Wei Emma Zhang, Quan Z Sheng, Ahoud Alhazmi, and Chenliang Li. Adversarial attacks on deep-learning models in natural language processing: A survey. ACM Transactions on Intelligent Systems and Technology (TIST), 11(3):1–41, 2020
work page 2020