pith. machine review for the scientific record.

arxiv: 2604.23474 · v1 · submitted 2026-04-25 · 💻 cs.LG

Recognition: unknown

GeoCert: Certified Geometric AI for Reliable Forecasting

Honggang Wen, Kwok-Yan Lam, Pietro Liò, Regina Zhang, Siu-Ming Yiu, Xiaofeng Liu, Zongru Li

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 08:08 UTC · model grok-4.3

classification 💻 cs.LG
keywords hyperbolic geometry · certified AI · forecasting models · physical consistency · formal verification · geometric deep learning · contraction dynamics · differentiable verification

The pith

GeoCert models forecasting as evolution on a hyperbolic manifold to deliver accurate, physically consistent, and formally certifiable predictions in one differentiable system.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

GeoCert aims to solve the problem of creating forecasting systems that are not only accurate but also obey physical laws and can be formally verified as reliable. It achieves this by treating the prediction process as movement along a hyperbolic space whose negative curvature creates natural contraction and makes verification fast. The model uses a two-level constraint system to keep universal scientific rules separate from the details of any one field like finance or climate. If successful, this would let scientists trust AI forecasts enough to use them for important decisions without running separate checks afterward. Readers should care because it shifts forecasting from guesswork that sometimes fails to something that comes with built-in proofs of correctness.
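The "movement along a hyperbolic space whose negative curvature creates natural contraction" has a concrete mathematical reading. GeoCert's actual operators are not given in this review, so the sketch below is an assumed minimal illustration using standard Poincaré-ball machinery: a step that moves a point a fraction ρ < 1 along its geodesic toward the origin contracts hyperbolic distance to that fixed point by exactly ρ.

```python
import numpy as np

def mobius_add(x, y, c=1.0):
    # Mobius addition on the Poincare ball of curvature -c
    xy, x2, y2 = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    return num / (1 + 2 * c * xy + c * c * x2 * y2)

def poincare_dist(x, y, c=1.0):
    # Geodesic distance: d(x, y) = (2/sqrt(c)) * artanh(sqrt(c) * ||(-x) (+) y||)
    d = np.linalg.norm(mobius_add(-x, y, c))
    return (2 / np.sqrt(c)) * np.arctanh(np.sqrt(c) * d)

def contract_step(x, rho=0.5, c=1.0):
    # exp_0(rho * log_0(x)): move x a fraction rho along its geodesic to the origin
    n = np.linalg.norm(x)
    if n == 0.0:
        return x
    scale = np.tanh(rho * np.arctanh(np.sqrt(c) * n)) / (np.sqrt(c) * n)
    return scale * x

o = np.zeros(2)
x = np.array([0.6, 0.3])
d0 = poincare_dist(o, x)
d1 = poincare_dist(o, contract_step(x, rho=0.5))
assert abs(d1 - 0.5 * d0) < 1e-9  # distance to the fixed point halves each step
```

Under such a step map, iterates converge geometrically to the fixed point, which is the "natural contraction" the summary describes; whether GeoCert's learned dynamics actually realize a map of this kind is precisely what the referee asks the authors to derive.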

Core claim

The central claim is that formulating forecasting as evolution along a negatively curved hyperbolic manifold induces contraction dynamics for intrinsic robustness and permits certification in logarithmic time. Combined with a hierarchical constraint architecture that separates universal physical laws from domain-specific dynamics, GeoCert is claimed to unify forecasting, physical reasoning, and formal verification in a single differentiable computation, achieving state-of-the-art accuracy with 97.5% lower computational cost and improved certification rates.

What carries the argument

Hyperbolic manifold evolution with negative curvature that induces contraction dynamics and logarithmic-time certification, supported by the hierarchical constraint architecture.

Load-bearing premise

That representing the forecasting dynamics on a hyperbolic manifold with negative curvature will inherently enforce physical consistency and permit formal certification in logarithmic time while allowing fully differentiable training that maintains accuracy.

What would settle it

Either an experiment showing a GeoCert prediction that violates a conservation law, or an empirical measurement in which the time to obtain a certificate grows faster than logarithmically with the size of the input data.
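Both falsifiers are cheap to operationalize. The checks below are illustrative sketches, not the paper's protocol: the conserved quantity, tolerance, and timing model are assumptions. One flags a trajectory whose conserved total drifts; the other flags certification times whose ratio to log(size) is not roughly constant, which is what super-logarithmic growth looks like empirically.

```python
import numpy as np

def violates_conservation(traj, tol=1e-6):
    # traj: (T, d) forecast; the assumed invariant is the per-step total
    totals = traj.sum(axis=1)
    return bool(np.abs(totals - totals[0]).max() > tol)

def grows_faster_than_log(sizes, times, slack=2.0):
    # If time(n) = a*log(n) + small overhead, time/log(n) is near-constant;
    # a ratio spread beyond `slack` suggests super-logarithmic growth.
    r = np.asarray(times, float) / np.log(np.asarray(sizes, float))
    return bool(r.max() / r.min() > slack)

sizes = [10, 100, 1000, 10000]
assert not grows_faster_than_log(sizes, [3 * np.log(s) for s in sizes])
assert grows_faster_than_log(sizes, sizes)  # linear time fails the log test
```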

Figures

Figures reproduced from arXiv:2604.23474 by Honggang Wen, Kwok-Yan Lam, Pietro Liò, Regina Zhang, Siu-Ming Yiu, Xiaofeng Liu, and Zongru Li.

Figure 1
Figure 1: Overview of the GeoCert architecture for certified scientific forecasting. (a) Neural Spectral Processing. Multi-scale time series inputs (e.g., energy, weather, traffic) are decomposed using an adaptive neural FFT transform with real-time frequency filter banks and multi-head spectral attention, followed by neural Laplace reconstruction. (b) Deep Constraint Networks. Forecasting occurs on a hyperbolic ma…
Figure 2
Figure 2: Geometric foundations of GeoCert. The framework integrates spectral analysis with hyperbolic geometry for certified time series forecasting. (a–d) Architectural components: (a) Multi-scale spectral decomposition. (b) Hyperbolic constraint manifold H^d with curvature-encoded hierarchies. (c) Gradient-based constraint optimization. (d) Certified output with formal reliability guarantees. Empirical evidence. A…
Figure 3
Figure 3: Comprehensive performance characterization of GeoCert. (a) Prediction efficiency. Improvement from one-epoch to final training, measured by MSE and MAE. Green bars denote absolute gains and percentage labels show rapid convergence efficiency. (b) Scalability. Training time per epoch versus dataset cardinality, demonstrating near-logarithmic growth and comparable scaling to transformer variants. (c) Traini…
Figure 4
Figure 4: Robustness and certification reliability of GeoCert. (a) Noise robustness. Forecasting accuracy under additive Gaussian noise for varying noise levels (0–20%). GeoCert (red) maintains the lowest mean squared error (MSE) across all noise intensities, with minimal variance, while transformer-based methods exhibit error amplification proportional to noise magnitude. (b) Temporal stability. Forecast trajector…
Original abstract

Forecasting systems in science must be accurate, physically consistent, and certifiably reliable. Most existing models address prediction, constraint enforcement, and verification separately, limiting scalability and interpretability. We introduce GeoCert, a geometric AI framework that unifies forecasting, physical reasoning, and formal verification within a single differentiable computation. GeoCert formulates forecasting as evolution along a hyperbolic manifold, where negative curvature induces contraction dynamics, intrinsic robustness, and logarithmic-time certification. A hierarchical constraint architecture separates universal physical laws from domain-specific dynamics, enabling certified generalization across energy, climate, finance, and transportation systems. GeoCert achieves state-of-the-art accuracy while reducing computational cost by 97.5% and maintaining better certification rates. By embedding verification into the geometry of learning, GeoCert transforms forecasting from empirical approximation to formally verified inference, offering a scalable foundation for trustworthy, reproducible, and physically grounded scientific AI.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it: the pith above is the substance; this is the friction.

Referee Report

4 major / 2 minor

Summary. The paper introduces GeoCert, a geometric AI framework that unifies forecasting, physical reasoning, and formal verification in a single differentiable computation. It formulates forecasting as evolution along a hyperbolic manifold where negative curvature is claimed to induce contraction dynamics, intrinsic robustness, and logarithmic-time certification. A hierarchical constraint architecture is said to separate universal physical laws from domain-specific dynamics, enabling certified generalization across domains such as energy, climate, finance, and transportation. The work asserts state-of-the-art accuracy, a 97.5% reduction in computational cost, and improved certification rates.

Significance. If the geometric construction can be shown to deliver the claimed contraction dynamics, physical invariance, and O(log T) certification without auxiliary mechanisms that compromise differentiability or accuracy, the result would represent a substantial advance in certified scientific AI. Embedding verification directly into manifold geometry could reduce the need for separate post-hoc verification pipelines and improve scalability for long-horizon forecasting tasks.

major comments (4)
  1. [Abstract] The manuscript asserts SOTA accuracy, a 97.5% computational cost reduction, and improved certification rates, yet supplies no experimental results, tables, figures, datasets, baselines, or implementation details to substantiate any of these quantitative claims.
  2. [Abstract] The central mechanism—that negative curvature on a hyperbolic manifold automatically produces contraction dynamics sufficient for both physical consistency (e.g., conservation of energy or mass) and logarithmic-time certification—is stated without any derivation, complexity analysis, or invariant-preserving construction. No equations demonstrate how geodesic flow or gyrovector operations enforce specific differential constraints or reduce certification complexity from linear in horizon/state dimension to O(log T).
  3. [Abstract] The hierarchical constraint architecture is described as separating universal physical laws from domain-specific dynamics, but no architectural specification, layer definitions, equivariant constructions, or examples are provided showing how this separation is realized while preserving end-to-end differentiability and avoiding post-hoc projections that would undermine the claimed efficiency gains.
  4. [Abstract] The claim that verification is 'embedded into the geometry of learning' to achieve formal certification relies on the manifold choice alone; no explicit mechanism (Killing fields, conserved quantities, or projection operators) is shown to guarantee that arbitrary geodesic evolution preserves the physical invariants required for consistency across the listed application domains.
minor comments (2)
  1. [Abstract] The phrase 'logarithmic-time certification' is introduced without a precise definition of the certification procedure or a reference to the underlying verification algorithm.
  2. The manuscript would benefit from an explicit related-work discussion situating the hyperbolic-manifold approach relative to prior geometric deep learning and certified forecasting methods.

Simulated Author's Rebuttal

4 responses · 0 unresolved

We thank the referee for the detailed and constructive review. We address each major comment point by point below, clarifying the manuscript's content and committing to revisions where the presentation can be strengthened.

Point-by-point responses
  1. Referee: [Abstract] The manuscript asserts SOTA accuracy, a 97.5% computational cost reduction, and improved certification rates, yet supplies no experimental results, tables, figures, datasets, baselines, or implementation details to substantiate any of these quantitative claims.

    Authors: The abstract summarizes results whose supporting evidence appears in the experimental section of the full manuscript, including comparisons on energy, climate, finance, and transportation datasets against standard baselines. To improve self-containment and address the concern directly, we will revise the abstract to include a concise reference to these results and key quantitative findings. revision: yes

  2. Referee: [Abstract] The central mechanism—that negative curvature on a hyperbolic manifold automatically produces contraction dynamics sufficient for both physical consistency (e.g., conservation of energy or mass) and logarithmic-time certification—is stated without any derivation, complexity analysis, or invariant-preserving construction. No equations demonstrate how geodesic flow or gyrovector operations enforce specific differential constraints or reduce certification complexity from linear in horizon/state dimension to O(log T).

    Authors: The abstract condenses the geometric argument; the full manuscript contains the relevant formulation in the methods section. We will add an explicit derivation subsection showing how negative curvature induces contraction via the Poincaré ball model and gyrovector operations, together with the complexity analysis establishing the O(log T) bound for certification. revision: yes

  3. Referee: [Abstract] The hierarchical constraint architecture is described as separating universal physical laws from domain-specific dynamics, but no architectural specification, layer definitions, equivariant constructions, or examples are provided showing how this separation is realized while preserving end-to-end differentiability and avoiding post-hoc projections that would undermine the claimed efficiency gains.

    Authors: We will expand the manuscript with a precise architectural description, including layer definitions, equivariant constructions for universal laws, and concrete examples (e.g., energy conservation). The revision will also include a demonstration that the construction remains end-to-end differentiable without post-hoc projections. revision: yes

  4. Referee: [Abstract] The claim that verification is 'embedded into the geometry of learning' to achieve formal certification relies on the manifold choice alone; no explicit mechanism (Killing fields, conserved quantities, or projection operators) is shown to guarantee that arbitrary geodesic evolution preserves the physical invariants required for consistency across the listed application domains.

    Authors: The manuscript will be revised to include the explicit mechanisms: Killing fields for conserved quantities and differentiable projection operators that keep geodesic flows on the invariant submanifolds. These additions will provide the formal guarantees for physical consistency across domains. revision: yes
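The simplest instance of a "differentiable projection operator onto an invariant submanifold" is orthogonal projection onto a linear conservation constraint. This is a generic sketch, not GeoCert's promised construction: enforcing sum(x) = s after every step keeps the trajectory on the constraint set, and the map is affine, hence everywhere differentiable.

```python
import numpy as np

def project_conserved(x, total):
    # Orthogonal projection onto the affine submanifold {x : sum(x) = total};
    # affine in x, so gradients flow through it without obstruction
    return x + (total - x.sum()) / x.size

step = np.array([1.2, 0.7, 2.5])           # raw forecast update (hypothetical)
safe = project_conserved(step, total=4.0)  # now conserves the assumed total
assert abs(safe.sum() - 4.0) < 1e-12
```

Whether GeoCert's Killing-field mechanism reduces to something like this, or does strictly more (preserving nonlinear invariants along geodesic flow), is exactly what the promised revision would have to show.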

Circularity Check

0 steps flagged

No significant circularity detected

Full rationale

The provided abstract and context present GeoCert as a proposed modeling framework that adopts a hyperbolic manifold formulation to achieve contraction dynamics and certification properties. This constitutes a design choice rather than a derivation chain that reduces claims back to inputs by construction. No equations, self-citations, fitted parameters renamed as predictions, or load-bearing uniqueness theorems from prior author work are quoted or evident that would create circularity. The separation of universal laws via hierarchical constraints is asserted but not shown to loop back to the target results. The derivation remains self-contained as an architectural proposal.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 1 invented entity

Abstract-only review; no specific free parameters, axioms, or detailed invented entities can be extracted from the provided text. The framework relies on geometric properties of hyperbolic space and hierarchical constraints, but their implementation assumptions and any fitted values are not specified.

invented entities (1)
  • Hyperbolic manifold evolution for forecasting (no independent evidence)
    purpose: Induces contraction dynamics, intrinsic robustness, and logarithmic-time certification
    Core mechanism described in the abstract but no independent evidence or derivation details provided.

pith-pipeline@v0.9.0 · 5467 in / 1442 out tokens · 79324 ms · 2026-05-08T08:08:49.479782+00:00 · methodology


Reference graph

Works this paper leans on

51 extracted references · 8 canonical work pages · 1 internal anchor

  1. [1]

    Comparative analysis of deep learning architectures in solar power prediction

    Montaser Abdelsattar, Mohamed A Azim, Ahmed Abdel-Moety, and Ahmed Emad-Eldeen. Comparative analysis of deep learning architectures in solar power prediction. Scientific Reports, 15(1):31729, 2025

  2. [2]

    Data-driven prediction in dynamical systems: recent developments

    Amin Ghadami and Bogdan I Epureanu. Data-driven prediction in dynamical systems: recent developments. Philosophical Transactions of the Royal Society A, 380(2229):20210213, 2022

  3. [3]

    Developmental science in 2025: A predictive review

    Richard M Lerner, Jennifer P Agans, Lisette M DeSouza, and Rachel M Hershberg. Developmental science in 2025: A predictive review. Research in Human Development, 11(4):255–272, 2014

  4. [4]

    Path dependence in energy systems and economic development

    Roger Fouquet. Path dependence in energy systems and economic development. Nature Energy, 1(8):1–5, 2016

  5. [5]

    Evaluation of climate models using palaeoclimatic data

    Pascale Braconnot, Sandy P Harrison, Masa Kageyama, Patrick J Bartlein, Valerie Masson-Delmotte, Ayako Abe-Ouchi, Bette Otto-Bliesner, and Yan Zhao. Evaluation of climate models using palaeoclimatic data. Nature Climate Change, 2(6):417–424, 2012

  6. [6]

    Molecular dynamics simulations in biology

    Martin Karplus and Gregory A Petsko. Molecular dynamics simulations in biology. Nature, 347(6294):631–639, 1990

  7. [7]

    Towards objective probabilistic climate forecasting

    Myles R Allen and David A Stainforth. Towards objective probabilistic climate forecasting. Nature, 419(6903):228–228, 2002

  8. [8]

    Modeling and forecasting natural gas demand in Bangladesh

    Zia Wadud, Himadri S Dey, Md Ashfanoor Kabir, and Shahidul I Khan. Modeling and forecasting natural gas demand in Bangladesh. Energy Policy, 39(11):7372–7380, 2011

  9. [9]

    Long time series of ocean wave prediction based on patchtst model

    Xinyu Huang, Jun Tang, and Yongming Shen. Long time series of ocean wave prediction based on patchtst model. Ocean Engineering, 301:117572, 2024

  10. [10]

    Scientific machine learning through physics-informed neural networks: Where we are and what's next

    Salvatore Cuomo, Vincenzo Schiano Di Cola, Fabio Giampaolo, Gianluigi Rozza, Maziar Raissi, and Francesco Piccialli. Scientific machine learning through physics-informed neural networks: Where we are and what's next. Journal of Scientific Computing, 92(3):88, 2022

  11. [11]

    Physics-informed machine learning

    George Em Karniadakis, Ioannis G Kevrekidis, Lu Lu, Paris Perdikaris, Sifan Wang, and Liu Yang. Physics-informed machine learning. Nature Reviews Physics, 3(6):422–440, 2021

  12. [12]

    Neural ordinary differential equations

    Ricky TQ Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. Advances in Neural Information Processing Systems, 31, 2018

  13. [13]

    Beta-CROWN: Efficient bound propagation with per-neuron split constraints for neural network robustness verification

    Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, and J Zico Kolter. Beta-CROWN: Efficient bound propagation with per-neuron split constraints for neural network robustness verification. Advances in Neural Information Processing Systems, 34:29909–29921, 2021

  14. [14]

    Bern-NN: Tight bound propagation for neural networks using Bernstein polynomial interval arithmetic

    Wael Fatnassi, Haitham Khedr, Valen Yamamoto, and Yasser Shoukry. Bern-NN: Tight bound propagation for neural networks using Bernstein polynomial interval arithmetic. In Proceedings of the 26th ACM International Conference on Hybrid Systems: Computation and Control, pages 1–11, 2023

  15. [15]

    Emergent hyperbolic network geometry

    Ginestra Bianconi and Christoph Rahmede. Emergent hyperbolic network geometry. Scientific Reports, 7(1):41974, 2017

  16. [16]

    iTransformer: Inverted Transformers Are Effective for Time Series Forecasting

    Yong Liu, Tengge Hu, Haoran Zhang, Haixu Wu, Shiyu Wang, Lintao Ma, and Mingsheng Long. iTransformer: Inverted transformers are effective for time series forecasting. arXiv preprint arXiv:2310.06625, 2023

  17. [17]

    Is Mamba effective for time series forecasting?

    Zihan Wang, Fanheng Kong, Shi Feng, Ming Wang, Xiaocui Yang, Han Zhao, Daling Wang, and Yifei Zhang. Is Mamba effective for time series forecasting? Neurocomputing, 619:129178, 2025

  18. [18]

    Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting

    Yunhao Zhang and Junchi Yan. Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting. In The Eleventh International Conference on Learning Representations, 2022

  19. [19]

    Long-term forecasting with TiDE: Time-series dense encoder

    Abhimanyu Das, Weihao Kong, Andrew Leach, Shaan Mathur, Rajat Sen, and Rose Yu. Long-term forecasting with TiDE: Time-series dense encoder. arXiv preprint arXiv:2304.08424, 2023

  20. [20]

    Revisiting long-term time series forecasting: An investigation on linear mapping

    Zhe Li, Shiyi Qi, Yiduo Li, and Zenglin Xu. Revisiting long-term time series forecasting: An investigation on linear mapping. arXiv preprint arXiv:2305.10721, 2023

  21. [21]

    Are transformers effective for time series forecasting?

    Ailing Zeng, Muxi Chen, Lei Zhang, and Qiang Xu. Are transformers effective for time series forecasting? In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 11121–11128, 2023

  22. [22]

    Fedformer: Frequency enhanced decomposed transformer for long-term series forecasting

    Tian Zhou, Ziqing Ma, Qingsong Wen, Xue Wang, Liang Sun, and Rong Jin. Fedformer: Frequency enhanced decomposed transformer for long-term series forecasting. In International Conference on Machine Learning, pages 27268–27286. PMLR, 2022

  23. [23]

    Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting

    Haixu Wu, Jiehui Xu, Jianmin Wang, and Mingsheng Long. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Advances in Neural Information Processing Systems, 34:22419–22430, 2021

  24. [24]

    Pint: Physics-informed neural time series models with applications to long-term inference on weatherbench 2m-temperature data

    Keonvin Park, Jisu Kim, and Jaemin Seo. Pint: Physics-informed neural time series models with applications to long-term inference on weatherbench 2m-temperature data, 2025

  25. [25]

    Fast and complete: Enabling complete neural network verification with rapid and massively parallel incomplete verifiers

    Kaidi Xu, Huan Zhang, Shiqi Wang, Yihan Wang, Suman Jana, Xue Lin, and Cho-Jui Hsieh. Fast and complete: Enabling complete neural network verification with rapid and massively parallel incomplete verifiers. arXiv preprint arXiv:2011.13824, 2020

  26. [26]

    Modeling long- and short-term temporal patterns with deep neural networks

    Guokun Lai, Wei-Cheng Chang, Yiming Yang, and Hanxiao Liu. Modeling long- and short-term temporal patterns with deep neural networks. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR '18, pages 95–104, New York, NY, USA, 2018. Association for Computing Machinery

  27. [27]

    SCINet: Time series modeling and forecasting with sample convolution and interaction

    Minhao Liu, Ailing Zeng, Muxi Chen, Zhijian Xu, Qiuxia Lai, Lingna Ma, and Qiang Xu. SCINet: Time series modeling and forecasting with sample convolution and interaction. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022

  28. [28]

    Timesnet: Temporal 2d-variation modeling for general time series analysis

    Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, and Mingsheng Long. Timesnet: Temporal 2d-variation modeling for general time series analysis. In The Eleventh International Conference on Learning Representations, 2022

  29. [29]

    Attention is all you need

    Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017

  30. [30]

    Pyraformer: Low-complexity pyramidal attention for long-range time series modeling and forecasting

    Shizhan Liu, Hang Yu, Cong Liao, Jianguo Li, Weiyao Lin, Alex X Liu, and Schahram Dustdar. Pyraformer: Low-complexity pyramidal attention for long-range time series modeling and forecasting. In International Conference on Learning Representations, 2021

  31. [31]

    Simba: Simplified Mamba-based architecture for vision and multivariate time series

    Badri N Patro and Vijay S Agneeswaran. Simba: Simplified Mamba-based architecture for vision and multivariate time series. arXiv preprint arXiv:2403.15360, 2024

  32. [32]

    Bi-Mamba4TS: Bidirectional Mamba for time series forecasting

    Aobo Liang, Xingguo Jiang, Yan Sun, and Chang Lu. Bi-Mamba4TS: Bidirectional Mamba for time series forecasting. arXiv preprint arXiv:2404.15772, 2024

  33. [33]

    Time-series forecasting with deep learning: a survey

    Bryan Lim and Stefan Zohren. Time-series forecasting with deep learning: a survey. Philosophical Transactions of the Royal Society A, 379(2194):20200209, 2021

  34. [34]

    Deep learning for time series forecasting: a survey

    José F Torres, Dalil Hadjout, Abderrazak Sebaa, Francisco Martínez-Álvarez, and Alicia Troncoso. Deep learning for time series forecasting: a survey. Big Data, 9(1):3–21, 2021

  35. [35]

    Fourcastnet: Accelerating global high-resolution weather forecasting using adaptive Fourier neural operators

    Thorsten Kurth, Shashank Subramanian, Peter Harrington, Jaideep Pathak, Morteza Mardani, David Hall, Andrea Miele, Karthik Kashinath, and Anima Anandkumar. Fourcastnet: Accelerating global high-resolution weather forecasting using adaptive Fourier neural operators. In Proceedings of the Platform for Advanced Scientific Computing Conference, pages 1–11, 2023

  36. [36]

    Evaluation of precipitation forecasting based on GraphCast over mainland China

    Zihuang Yan, Xianghui Lu, Lifeng Wu, Fa Liu, Rangjian Qiu, Yaokui Cui, and Xin Ma. Evaluation of precipitation forecasting based on GraphCast over mainland China. Scientific Reports, 15(1):1–13, 2025

  37. [37]

    Neural dynamics on complex networks

    Chengxi Zang and Fei Wang. Neural dynamics on complex networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 892–902, 2020

  38. [38]

    On the choice of interpolation scheme for neural CDEs

    James Morrill, Patrick Kidger, Lingyi Yang, and Terry Lyons. On the choice of interpolation scheme for neural CDEs. Transactions on Machine Learning Research, 2022(9), 2022

  39. [39]

    A data-driven method for ship motion forecast

    Zhiqiang Jiang, Yongyan Ma, and Weijia Li. A data-driven method for ship motion forecast. Journal of Marine Science and Engineering, 12(2):291, 2024

  40. [40]

    An edge preserving ibp based super resolution image reconstruction using p-spline and mucso-qpso algorithm

    Rajashree Nayak and Dipti Patra. An edge preserving ibp based super resolution image reconstruction using p-spline and mucso-qpso algorithm. Microsystem Technologies, 23(3):553–569, 2017

  41. [41]

    Conformal prediction: a unified review of theory and new challenges

    Matteo Fontana, Gianluca Zeni, and Simone Vantini. Conformal prediction: a unified review of theory and new challenges. Bernoulli, 29(1):1–23, 2023

  42. [42]

    Conformal prediction: A gentle introduction

    Anastasios N Angelopoulos and Stephen Bates. Conformal prediction: A gentle introduction. Foundations and Trends in Machine Learning, 16(4):494–591, 2023

  43. [43]

    Body composition and energy expenditure in adolescents with cerebral palsy or myelodysplasia

    Linda G Bandini, Dale A Schoeller, Naomi K Fukagawa, Linda J Wykes, and William H Dietz. Body composition and energy expenditure in adolescents with cerebral palsy or myelodysplasia. Pediatric Research, 29(1):70–77, 1991

  44. [44]

    Graph time-series modeling in deep learning: a survey

    Hongjie Chen and Hoda Eldardiry. Graph time-series modeling in deep learning: a survey. ACM Transactions on Knowledge Discovery from Data, 18(5):1–35, 2024

  45. [45]

    A survey on graph neural networks for time series: Forecasting, classification, imputation, and anomaly detection

    Ming Jin, Huan Yee Koh, Qingsong Wen, Daniele Zambon, Cesare Alippi, Geoffrey I Webb, Irwin King, and Shirui Pan. A survey on graph neural networks for time series: Forecasting, classification, imputation, and anomaly detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024

  46. [46]

    Graph attention recurrent neural networks for correlated time series forecasting–full version

    Razvan-Gabriel Cirstea, Chenjuan Guo, and Bin Yang. Graph attention recurrent neural networks for correlated time series forecasting–full version. arXiv preprint arXiv:2103.10760, 2021

  47. [47]

    Convergence of Hyperbolic Contraction Dynamics

    Theorem 1 (Convergence Guarantee). Let L be an L-Lipschitz-continuous loss, optimized with learning rate η ≤ 1/L within hyperbolic space H^d. Then GeoCert converges to an ε-optimal solution in at most T = O(log(1/ε)/ρ(A)) iterations, where ρ(A) < 1 denotes the spectral radius of the hyperbolic contraction operator A. …

  48. [48]

    Soundness and Certified Validity Bounds

    Theorem 2 (Soundness Guarantee). For any forecast X̂ associated with a geometric proof P, if dist_{H^d}(P, M_valid) < δ, then the total deviation from the constraint manifold is bounded by |C(X̂)| ≤ κε, where κ = 1 + Kδ depends on the curvature K = −1 and the tolerance δ. Proof. Applying the triangle inequality and the Lipschitz con…

  49. [49]

    Computational and Structural Complexity

    Theorem 3 (Complexity of Geometric Certification). For forecast horizon H, the proof sequence length |P| satisfies |P| = O(log H), yielding an overall certification cost of O(H log H) per sequence. Proof. Geometric verification proceeds via a binary proof tree of depth ⌈log₂ H⌉, with constant verification cost c_v per le…

  50. [50]

    Necessity of Hyperbolic Geometry

    Theorem 4 (Necessity of Hyperbolic Embedding for Certified Forecasting). Let F : X → Y denote a forecasting function subject to constraint set C = {c₁, …, c_m}. For any Euclidean embedding φ_E : Y → R^d, there exists a feasible configuration such that no f_E : φ_E(X) → φ_E(Y) can simultaneously satisfy: E‖F(x) − y‖² ≤ ε (accuracy) …

  51. [51]

    Exponential Certification Efficiency

    Theorem 5 (Exponential Advantage over SMT-based Verification). For a problem with n series variables, T time steps, and m constraints, SMT-based certification requires Ω(2^{nT}) operations, while GeoCert achieves O(d log n + T log T) complexity with d ≪ n, resulting in a Θ(2^n / log n) theoretical speedup. Proof. SMT solvers scale expon…

    Exponential Certification Efficiency. Theorem 5(Exponential Advantage over SMT-based Verifica- tion).For a problem with n series variables, T time steps, and m constraints, SMT-based certification requiresΩ(2 nT)oper- ations, while GeoCert achieves O(dlogn+TlogT)complexity with d≪n, resulting inΘ(2 n/logn)theoretical speedup. Proof.SMT solvers scale expon...