pith. machine review for the scientific record.

arxiv: 2602.07633 · v3 · submitted 2026-02-07 · 📊 stat.ML · cs.LG · stat.ME

Recognition: no theorem link

Flow-Based Conformal Predictive Distributions

Authors on Pith: no claims yet

Pith reviewed 2026-05-16 06:07 UTC · model grok-4.3

classification 📊 stat.ML · cs.LG · stat.ME
keywords conformal prediction · nonconformity score · gradient flow · predictive distributions · uncertainty quantification · high-dimensional sampling · conformal sets

The pith

Any sufficiently regular differentiable nonconformity score induces a deterministic flow on the output space whose trajectories converge to the boundary of the conformal prediction set.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper shows that conformal prediction sets, which guarantee exact finite-sample coverage, can be represented and sampled through flows generated directly by the nonconformity score. These flows move points in the output space along trajectories that end precisely at the boundary of the desired prediction set. Mixing flows across different confidence levels then produces conformal predictive distributions whose quantile regions match the empirical sets. The construction is training-free and works in arbitrary dimensions, making the sets usable for downstream tasks such as sampling and probabilistic forecasting. An error bound decomposes the approximation error into contributions from the score, the base measure, and the flow discretization.

Core claim

The central claim is that any sufficiently regular differentiable nonconformity score defines a vector field on the output space, and the integral curves of this field converge to the level set that forms the boundary of the conformal prediction region. Integrating the flow therefore provides an exact, deterministic way to reach and sample the boundary without exhaustive search. Varying the confidence level across multiple such flows yields a full predictive distribution whose regions coincide with the original conformal sets.
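The mechanics can be sketched in a few lines. This is an editorial illustration, not the paper's algorithm: the Euclidean score, the normalized vector field, and the Euler step size are all placeholder choices. The field v(y) = −(s(y) − q) ∇s/‖∇s‖² gives ds/dt = −(s − q), so the score value decays exponentially onto the target level set {s = q}.

```python
import numpy as np

def score(y, y_hat):
    # placeholder nonconformity score: Euclidean distance to the prediction
    return np.linalg.norm(y - y_hat)

def grad_score(y, y_hat, eps=1e-12):
    d = y - y_hat
    return d / (np.linalg.norm(d) + eps)

def flow_to_boundary(y0, y_hat, q, n_steps=60, dt=0.2):
    """Euler-integrate dy/dt = -(s(y) - q) * grad_s / ||grad_s||^2.
    Along this field ds/dt = -(s - q), so s(y(t)) -> q exponentially
    and the trajectory stops on the level set {s = q}."""
    y = np.asarray(y0, dtype=float).copy()
    y_hat = np.asarray(y_hat, dtype=float)
    for _ in range(n_steps):
        g = grad_score(y, y_hat)
        y = y - dt * (score(y, y_hat) - q) * g / (g @ g + 1e-12)
    return y

# drive a point with score 3 onto the q = 1 boundary
y_end = flow_to_boundary([3.0, 0.0], [0.0, 0.0], q=1.0)
```

Here q would be the split-conformal quantile of the calibration scores; any differentiable score with a nonvanishing gradient near the boundary could stand in for the distance score.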

What carries the argument

The deterministic flow on the output space generated by the nonconformity score, whose trajectories converge to the conformal boundary.

If this is right

  • High-dimensional or structured conformal sets become directly samplable by integrating the flow rather than by grid search or rejection sampling.
  • Conformal predictive distributions can be obtained by combining flows at multiple confidence levels, with quantile regions matching the empirical sets.
  • The approximation error of the resulting distributions decomposes into score-induced distortion, base-measure quality, and flow discretization error.
  • The same flow construction applies to tasks such as PDE inverse problems, precipitation downscaling, climate debiasing, and trajectory forecasting.
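The second bullet can be illustrated with a one-dimensional sketch. This is an editorial construction, not the paper's estimator: it assumes the standard absolute-residual score, for which the α-level boundary is simply ŷ ± q(α), so mixing across levels reduces to drawing α uniformly and looking up the split-conformal quantile.

```python
import numpy as np

rng = np.random.default_rng(0)

def cpd_sample(y_hat, cal_scores, n_samples, rng):
    """1D sketch of a conformal predictive distribution: draw a level
    alpha ~ U(0,1), look up the split-conformal quantile q(alpha), and
    place a point on the boundary y_hat +/- q(alpha) of that level's
    interval (absolute-residual score)."""
    n = len(cal_scores)
    s = np.sort(cal_scores)
    alphas = rng.uniform(size=n_samples)
    ks = np.clip(np.ceil((n + 1) * (1 - alphas)).astype(int), 1, n)
    qs = s[ks - 1]                                   # q(alpha) per draw
    signs = rng.choice([-1.0, 1.0], size=n_samples)  # which side of y_hat
    return y_hat + signs * qs

cal = np.abs(rng.normal(size=2000))   # stand-in calibration scores
draws = cpd_sample(0.0, cal, 20000, rng)
```

By construction, the fraction of draws falling inside the α-level interval ŷ ± q(α) is close to 1 − α, which is the quantile-region matching property claimed for CPDs.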

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The flow view may allow conformal methods to be composed with existing dynamical-system or ODE-based samplers in downstream modeling pipelines.
  • Approximations for nondifferentiable scores could be developed by smoothing or by learning surrogate vector fields that mimic the target flow.
  • The convergence property suggests a way to certify coverage for flow-based generative models by checking whether their samples lie inside the induced conformal regions.

Load-bearing premise

The nonconformity score must be sufficiently regular and differentiable so that the induced flow exists and its trajectories converge to the conformal boundary.

What would settle it

A simulation in which trajectories generated by the flow from the nonconformity score fail to reach the boundary of the conformal set computed directly from the same data and score.
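Such a check is cheap to run. A minimal version (illustrative choices throughout: a Mahalanobis-style score, standard-normal stand-ins for calibration scores, plain Euler integration of a level-targeted field) just compares the endpoint's score with the calibration quantile:

```python
import numpy as np

rng = np.random.default_rng(1)

A = np.array([[2.0, 0.3], [0.3, 0.5]])  # anisotropic score geometry

def s(y):
    # Mahalanobis-style nonconformity score (illustrative)
    return np.sqrt(y @ A @ y)

def grad_s(y):
    return (A @ y) / (s(y) + 1e-12)

cal = np.abs(rng.normal(size=500))              # stand-in calibration scores
q = np.sort(cal)[int(np.ceil(501 * 0.9)) - 1]   # alpha = 0.1 quantile

y = np.array([1.5, 0.5])                        # start off the target level set
for _ in range(200):                            # Euler steps of the flow
    g = grad_s(y)
    y = y - 0.1 * (s(y) - q) * g / (g @ g + 1e-12)

gap = abs(s(y) - q)  # a large gap here would falsify boundary convergence
```

If trajectories systematically failed this test for some admissible score, the paper's central claim would be falsified; here the gap shrinks geometrically.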

Figures

Figures reproduced from arXiv: 2602.07633 by Trevor Harris.

Figure 1. What do prediction sets look like for CFD simulations? Precipitation patterns? Tropical …
Figure 2. The nonconformity flow (arrows) globally attracts towards the target level set …
Figure 3. Repulsive boundary exploration under a kNN score (Section E.3). Because the flow Φα(t, y0) is everywhere co-linear with ∇S, the induced boundary measures {νx,α}α∈(0,1) are, in effect, projections of the base measure µx onto the level sets {∂Cα(x)}α∈(0,1) along integral curves of ∇S. Consequently, not all boundary points are reachable from a given base measure: trajectories are attracted to nearby regions …
Figure 4. Top: log absolute convergence error averaged across all α levels. Bottom: spectral entropy of the generated samples, normalized by the spectral entropy of the test data. Figure 4a shows the log score error after 20 integration steps broken out by score function (Appendix E.3–E.4), with the black line indicating the target tolerance. Across all settings and tasks the conformal flow sampler converges to wi…
Figure 5. CPDs match spectral shape …
Figure 6. Sample precipitation intensity from CPD-L, MC Dropout, and the Conditional Flow. CPDs …
Figure 7. CPDs can selectively emphasize any level range of the distribution, and their utility depends …
Figure 8. Sample realizations from a fixed α-level showing considerable heterogeneity …
Figure 9. Sampled prediction bands (Sample), reconformalized sampled bands (Re-conf), and RCPS …
Figure 10. Samples from the 2D GP regression experiment. Column one shows realizations from the …
Figure 11. Samples from the elliptic PDE inversion experiment. Column one shows realizations …
Figure 12. Samples from the Navier–Stokes approximation experiment. Column one shows realizations …
Figure 13. Samples from the precipitation downscaling experiment. Column one shows realizations …
Figure 14. Samples from the climate model debiasing experiment. Column one shows realizations …
Original abstract

Conformal prediction provides a distribution-free framework for uncertainty quantification via prediction sets with exact finite-sample coverage. In low dimensions these sets are easy to interpret, but in high-dimensional or structured output spaces they are difficult to represent and use, which can limit their ability to integrate with downstream tasks such as sampling and probabilistic forecasting. We show that any sufficiently regular differentiable nonconformity score induces a deterministic flow on the output space whose trajectories converge to the boundary of the corresponding conformal prediction set. This leads to a computationally efficient, training-free method for sampling conformal boundaries in arbitrary dimensions. Mixing across confidence levels yields conformal predictive distributions whose quantile regions coincide with the empirical conformal prediction sets. We provide an approximation bound decomposing CPD predictive error into score-induced distortion, base-measure quality, and gradient flow-induced distortion. We evaluate the approach on PDE inverse problems, precipitation downscaling, climate model debiasing, and hurricane trajectory forecasting.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper claims that any sufficiently regular differentiable nonconformity score induces a deterministic flow on the output space whose trajectories converge to the boundary of the corresponding conformal prediction set. This yields a training-free method for sampling conformal boundaries in arbitrary dimensions; mixing across levels produces conformal predictive distributions (CPDs) whose quantile regions coincide with empirical conformal sets. An approximation bound is given that decomposes CPD error into score-induced distortion, base-measure quality, and flow-induced distortion. The method is evaluated on PDE inverse problems, precipitation downscaling, climate model debiasing, and hurricane trajectory forecasting.

Significance. If the flow-convergence claim holds, the work supplies a practical, training-free route to representing and sampling high-dimensional conformal sets and to constructing CPDs that integrate directly with downstream sampling and forecasting tasks. The decomposed approximation bound is a constructive feature, and the breadth of the empirical evaluations (PDE, climate, forecasting) indicates potential applicability once the theoretical foundation is secured.

major comments (2)
  1. [§3] §3 (flow construction): the central claim that trajectories of the induced flow converge to the level set {s = q} is load-bearing for both the sampling procedure and the CPD construction. If the flow is the negative gradient flow dy/dt = −∇s(y), any interior critical point where ∇s = 0 and s < q is an equilibrium that traps trajectories inside the conformal set. Differentiability alone does not preclude such points; the manuscript must either prove that the stated regularity conditions exclude interior attractors, specify a modified flow that guarantees boundary convergence, or add an explicit assumption (e.g., strict quasiconvexity of s) that rules them out.
  2. [§4] §4 (approximation bound): the bound is stated at a high level as decomposing error into score distortion, base-measure quality, and gradient-flow distortion, yet the derivation is not supplied in sufficient detail to verify the triangle-inequality steps or the control of the flow-induced term. Without the explicit constants or the precise statement of the regularity assumptions used to bound the flow error, it is impossible to assess whether the bound is non-vacuous or whether it correctly accounts for possible trapping.
minor comments (2)
  1. [§2] Notation for the nonconformity score s and the quantile level q should be introduced once with a clear reference to the standard conformal-prediction definition to avoid ambiguity when the flow ODE is written.
  2. [Figures 2–4] Figure captions for the flow-trajectory plots should explicitly state the step-size schedule and the integration method used, so that readers can reproduce the visual evidence of boundary convergence.
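Major comment 1 is easy to demonstrate on a toy example (an editorial construction, not from the paper): a double-well score has interior critical points, and the plain negative gradient flow parks trajectories at an interior minimum rather than on the target level set.

```python
import numpy as np

def s(y):
    # double-well score: interior local minima at y = -1 and y = +1
    return y**4 - 2.0 * y**2

def grad_s(y):
    return 4.0 * y**3 - 4.0 * y

def neg_gradient_flow(y0, n_steps=2000, dt=1e-2):
    # plain flow dy/dt = -grad s(y), Euler-discretized
    y = y0
    for _ in range(n_steps):
        y = y - dt * grad_s(y)
    return y

# Target the level set {s = 0}. Starting at y0 = 0.5, the trajectory
# is trapped at the interior equilibrium y = 1, where s = -1, and
# never reaches the boundary.
y_end = neg_gradient_flow(0.5)
```

Strict quasiconvexity of the score, as the rebuttal proposes, is exactly the kind of assumption that rules out such interior equilibria.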

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the careful reading and constructive comments. We address each major comment below and will revise the manuscript accordingly to strengthen the theoretical foundations.

read point-by-point responses
  1. Referee: [§3] §3 (flow construction): the central claim that trajectories of the induced flow converge to the level set {s = q} is load-bearing for both the sampling procedure and the CPD construction. If the flow is the negative gradient flow dy/dt = −∇s(y), any interior critical point where ∇s = 0 and s < q is an equilibrium that traps trajectories inside the conformal set. Differentiability alone does not preclude such points; the manuscript must either prove that the stated regularity conditions exclude interior attractors, specify a modified flow that guarantees boundary convergence, or add an explicit assumption (e.g., strict quasiconvexity of s) that rules them out.

    Authors: We agree that the convergence of trajectories under the negative gradient flow requires explicit justification, since differentiability alone permits interior critical points. The manuscript's reference to 'sufficiently regular' conditions was intended to ensure the nonconformity score has no interior local minima below the target quantile, but this was not stated with sufficient precision. In the revision we will add an explicit assumption that the nonconformity score s is strictly quasiconvex. Under this assumption we will prove that the only critical point is the global minimizer and that all trajectories originating inside the level set {s < q} converge to the boundary without becoming trapped at interior equilibria. This keeps the original flow unchanged while rendering the claim rigorous. revision: yes

  2. Referee: [§4] §4 (approximation bound): the bound is stated at a high level as decomposing error into score distortion, base-measure quality, and gradient-flow distortion, yet the derivation is not supplied in sufficient detail to verify the triangle-inequality steps or the control of the flow-induced term. Without the explicit constants or the precise statement of the regularity assumptions used to bound the flow error, it is impossible to assess whether the bound is non-vacuous or whether it correctly accounts for possible trapping.

    Authors: We acknowledge that the derivation of the approximation bound was presented at too high a level. The bound is obtained by applying the triangle inequality to the distance (in total variation) between the conformal predictive distribution and the target distribution, separating the three error sources. In the revised manuscript we will supply the complete derivation, including the explicit constants (which depend on the Lipschitz constant of ∇s and the flow integration horizon) and the precise regularity conditions. With the strict quasiconvexity assumption introduced in response to the previous comment, the flow-induced distortion term will be controlled by the proven convergence rate to the boundary, ensuring the bound is non-vacuous and properly accounts for the absence of trapping. revision: yes
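For readers, the shape of the promised argument can be sketched schematically; the symbols below are editorial placeholders, not the paper's notation:

```latex
d\!\left(\hat\nu_{x,\alpha}, \nu^{\star}_{x,\alpha}\right)
\;\le\;
\underbrace{d\!\left(\nu^{\star}_{x,\alpha}, \nu^{S}_{x,\alpha}\right)}_{\text{score-induced distortion}}
+
\underbrace{d\!\left(\nu^{S}_{x,\alpha}, \nu^{\mu}_{x,\alpha}\right)}_{\text{base-measure quality}}
+
\underbrace{d\!\left(\nu^{\mu}_{x,\alpha}, \hat\nu_{x,\alpha}\right)}_{\text{flow discretization}}
```

with the last term plausibly controlled, under a gradient lower bound ‖∇S(y)‖₂ ≥ m near the boundary, by a rate of the form (1/m)|S(y₀) − τα| e^{−λt}, matching the exponential convergence the flow provides.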

Circularity Check

0 steps flagged

No circularity: flow construction follows from differentiability assumptions without self-referential reduction

full rationale

The paper derives the existence of a deterministic flow from any sufficiently regular differentiable nonconformity score, with trajectories claimed to converge to the conformal boundary, and then constructs CPDs by mixing across levels. No equations reduce the flow or boundary convergence to a quantity defined in terms of itself, nor is any fitted parameter renamed as a prediction. The approximation bound explicitly decomposes error into score-induced distortion, base-measure quality, and gradient-flow distortion as separate terms. No load-bearing self-citations or uniqueness theorems imported from prior author work are invoked to force the central claim. The derivation is presented as a direct consequence of the regularity assumption on the score, making the construction self-contained against external benchmarks rather than tautological.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The central construction rests on the existence and convergence properties of the flow induced by a differentiable nonconformity score; no numerical free parameters or new physical entities are introduced in the abstract.

axioms (1)
  • domain assumption The nonconformity score is sufficiently regular and differentiable to induce a deterministic flow whose trajectories converge to the conformal boundary
    This is the explicit enabling premise stated in the abstract for the entire flow construction and subsequent CPD definition.
invented entities (1)
  • Conformal predictive distribution (CPD) no independent evidence
    purpose: A distribution obtained by mixing flows across confidence levels whose quantile regions coincide with empirical conformal prediction sets
    Defined directly from the flow construction; no independent evidence outside the method is provided in the abstract.

pith-pipeline@v0.9.0 · 5440 in / 1375 out tokens · 32359 ms · 2026-05-16T06:07:43.531165+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

47 extracted references · 47 canonical work pages · 4 internal anchors

  1. [1]

Image super-resolution with guarantees via conformal generative models. arXiv e-prints, pages arXiv–2502, 2025

Eduardo Adame, Daniel Csillag, and Guilherme Tegoni Goedert. Image super-resolution with guarantees via conformal generative models. arXiv e-prints, pages arXiv–2502, 2025

  2. [2]

Conformal prediction bands for two-dimensional functional time series. Computational Statistics & Data Analysis, 187:107821, 2023

Niccolò Ajroldi, Jacopo Diquigiovanni, Matteo Fontana, and Simone Vantini. Conformal prediction bands for two-dimensional functional time series. Computational Statistics & Data Analysis, 187:107821, 2023

  3. [3]

    Understanding and simplifying perceptual distances

Dan Amir and Yair Weiss. Understanding and simplifying perceptual distances. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12226–12235, 2021

  4. [4]

    Conformal prediction: A gentle introduction

Anastasios N Angelopoulos, Stephen Bates, et al. Conformal prediction: A gentle introduction. Foundations and Trends® in Machine Learning, 16(4):494–591, 2023

  5. [5]

Distribution-free, risk-controlling prediction sets. Journal of the ACM (JACM), 68(6):1–34, 2021

Stephen Bates, Anastasios Angelopoulos, Lihua Lei, Jitendra Malik, and Michael Jordan. Distribution-free, risk-controlling prediction sets. Journal of the ACM (JACM), 68(6):1–34, 2021

  6. [6]

    Demystifying MMD GANs

Mikołaj Bińkowski, Danica J Sutherland, Michael Arbel, and Arthur Gretton. Demystifying MMD GANs. arXiv preprint arXiv:1801.01401, 2018

  7. [7]

    Mixture density networks

    Christopher M Bishop. Mixture density networks. 1994

  8. [8]

Conformal prediction for natural language processing: A survey. Transactions of the Association for Computational Linguistics, 12:1497–1516, 2024

Margarida Campos, António Farinhas, Chrysoula Zerva, Mário AT Figueiredo, and André FT Martins. Conformal prediction for natural language processing: A survey. Transactions of the Association for Computational Linguistics, 12:1497–1516, 2024

  9. [9]

Neural ordinary differential equations. Advances in Neural Information Processing Systems, 31, 2018

Ricky TQ Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. Advances in Neural Information Processing Systems, 31, 2018

  10. [10]

Large language model validity via enhanced conformal prediction methods. Advances in Neural Information Processing Systems, 37:114812–114842, 2024

John Cherian, Isaac Gibbs, and Emmanuel Candes. Large language model validity via enhanced conformal prediction methods. Advances in Neural Information Processing Systems, 37:114812–114842, 2024

  11. [11]

    Distributional conformal prediction

    Victor Chernozhukov, Kaspar Wüthrich, and Yinchu Zhu. Distributional conformal prediction. Proceedings of the National Academy of Sciences, 118(48):e2107794118, 2021

  12. [12]

Sinkhorn distances: Lightspeed computation of optimal transport. Advances in Neural Information Processing Systems, 26, 2013

Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in Neural Information Processing Systems, 26, 2013

  13. [13]

    Implicit quantile networks for distributional reinforcement learning

Will Dabney, Georg Ostrovski, David Silver, and Rémi Munos. Implicit quantile networks for distributional reinforcement learning. In International Conference on Machine Learning, pages 1096–1105. PMLR, 2018

  14. [14]

    Density estimation using Real NVP

Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. arXiv preprint arXiv:1605.08803, 2016

  15. [15]

Conformal prediction bands for multivariate functional data. Journal of Multivariate Analysis, 189:104879, 2022

Jacopo Diquigiovanni, Matteo Fontana, and Simone Vantini. Conformal prediction bands for multivariate functional data. Journal of Multivariate Analysis, 189:104879, 2022

  16. [16]

The Vendi score: A diversity evaluation metric for machine learning. arXiv preprint arXiv:2210.02410, 2022

Dan Friedman and Adji Bousso Dieng. The Vendi score: A diversity evaluation metric for machine learning. arXiv preprint arXiv:2210.02410, 2022

  17. [17]

    Dropout as a bayesian approximation: Representing model uncertainty in deep learning

Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pages 1050–1059. PMLR, 2016

  18. [18]

    Strictly proper scoring rules, prediction, and estimation

Tilmann Gneiting and Adrian E Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359–378, 2007

  19. [19]

Locally adaptive conformal inference for operator models. arXiv preprint arXiv:2507.20975, 2025

Trevor Harris and Yan Liu. Locally adaptive conformal inference for operator models. arXiv preprint arXiv:2507.20975, 2025

  20. [20]

Quantifying uncertainty in climate projections with conformal ensembles. arXiv preprint arXiv:2408.06642, 2024

Trevor Harris and Ryan Sriver. Quantifying uncertainty in climate projections with conformal ensembles. arXiv preprint arXiv:2408.06642, 2024

  21. [21]

Elastic depths for detecting shape anomalies in functional data. Technometrics, 63(4):466–476, 2021

Trevor Harris, J Derek Tucker, Bo Li, and Lyndsay Shand. Elastic depths for detecting shape anomalies in functional data. Technometrics, 63(4):466–476, 2021

  22. [22]

Differential equations, dynamical systems, and an introduction to chaos. Academic Press, 2013

Morris W Hirsch, Stephen Smale, and Robert L Devaney. Differential equations, dynamical systems, and an introduction to chaos. Academic Press, 2013

  23. [23]

    Fundamentals of digital image processing

    Anil K Jain. Fundamentals of digital image processing. 1989

  24. [24]

Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in Neural Information Processing Systems, 30, 2017

Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in Neural Information Processing Systems, 30, 2017

  25. [25]

Distribution-free predictive inference for regression. Journal of the American Statistical Association, 113(523):1094–1111, 2018

Jing Lei, Max G'Sell, Alessandro Rinaldo, Ryan J Tibshirani, and Larry Wasserman. Distribution-free predictive inference for regression. Journal of the American Statistical Association, 113(523):1094–1111, 2018

  26. [26]

    Flow Matching for Generative Modeling

Yaron Lipman, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. Flow matching for generative modeling. arXiv preprint arXiv:2210.02747, 2022

  27. [27]

Flow Matching for Generative Modeling

Yaron Lipman, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. Flow matching for generative modeling. In 11th International Conference on Learning Representations, ICLR 2023, 2023

  28. [28]

    Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow

    Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow.arXiv preprint arXiv:2209.03003, 2022

  29. [29]

Calibrated uncertainty quantification for operator learning via conformal prediction. arXiv preprint arXiv:2402.01960, 2024

Ziqi Ma, Kamyar Azizzadenesheli, and Anima Anandkumar. Calibrated uncertainty quantification for operator learning via conformal prediction. arXiv preprint arXiv:2402.01960, 2024

  30. [30]

Valid model-free spatial prediction. Journal of the American Statistical Association, 119(546):904–914, 2024

Huiying Mao, Ryan Martin, and Brian J Reich. Valid model-free spatial prediction. Journal of the American Statistical Association, 119(546):904–914, 2024

  31. [31]

Conformalized prediction of post-fault voltage trajectories using pre-trained and finetuned attention-driven neural operators. arXiv preprint arXiv:2410.24162, 2024

Amirhossein Mollaali, Gabriel Zufferey, Gonzalo Constante-Flores, Christian Moya, Can Li, Guang Lin, and Meng Yue. Conformalized prediction of post-fault voltage trajectories using pre-trained and finetuned attention-driven neural operators. arXiv preprint arXiv:2410.24162, 2024

  32. [32]

Beyond uncertainty sets: Leveraging optimal transport to extend conformal predictive distribution to multivariate settings. arXiv preprint arXiv:2511.15146, 2025

Eugene Ndiaye. Beyond uncertainty sets: Leveraging optimal transport to extend conformal predictive distribution to multivariate settings. arXiv preprint arXiv:2511.15146, 2025

  33. [33]

Estimating the mean and variance of the target probability distribution

David A Nix and Andreas S Weigend. Estimating the mean and variance of the target probability distribution. In Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94), volume 1, pages 55–60. IEEE, 1994

  34. [34]

A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision, 40(1):49–70, 2000

Javier Portilla and Eero P Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision, 40(1):49–70, 2000

  35. [35]

Conformal language modeling. arXiv preprint arXiv:2306.10193, 2023

Victor Quach, Adam Fisch, Tal Schuster, Adam Yala, Jae Ho Sohn, Tommi S Jaakkola, and Regina Barzilay. Conformal language modeling. arXiv preprint arXiv:2306.10193, 2023

  36. [36]

Dynamical systems: stability, symbolic dynamics, and chaos

Clark Robinson. Dynamical systems: stability, symbolic dynamics, and chaos. CRC Press, 1998

  37. [37]

An introduction to dynamical systems: continuous and discrete

Rex Clark Robinson. An introduction to dynamical systems: continuous and discrete, volume 19. American Mathematical Soc., 2012

  38. [38]

A tutorial on conformal prediction. Journal of Machine Learning Research, 9(3), 2008

Glenn Shafer and Vladimir Vovk. A tutorial on conformal prediction. Journal of Machine Learning Research, 9(3), 2008

  39. [39]

    How to trust your diffusion model: A convex optimization approach to conformal risk control

Jacopo Teneggi, Matthew Tivnan, Web Stayman, and Jeremias Sulam. How to trust your diffusion model: A convex optimization approach to conformal risk control. In International Conference on Machine Learning, pages 33940–33960. PMLR, 2023

  40. [40]

    Universally consistent conformal predictive distributions

Vladimir Vovk. Universally consistent conformal predictive distributions. In Conformal and Probabilistic Prediction and Applications, pages 105–122. PMLR, 2019

  41. [41]

Algorithmic learning in a random world

Vladimir Vovk, Alexander Gammerman, and Glenn Shafer. Algorithmic learning in a random world. Springer, 2005

  42. [42]

Cross-conformal predictive distributions

Vladimir Vovk, Ilia Nouretdinov, Valery Manokhin, and Alexander Gammerman. Cross-conformal predictive distributions. In Conformal and Probabilistic Prediction and Applications, pages 37–51. PMLR, 2018

  43. [43]

    Nonparametric predictive distributions based on conformal prediction

Vladimir Vovk, Jieli Shen, Valery Manokhin, and Min-ge Xie. Nonparametric predictive distributions based on conformal prediction. In Conformal and Probabilistic Prediction and Applications, pages 82–102. PMLR, 2017

  44. [44]

Probabilistic conformal prediction using conditional random samples. arXiv preprint arXiv:2206.06584, 2022

Zhendong Wang, Ruijiang Gao, Mingzhang Yin, Mingyuan Zhou, and David M Blei. Probabilistic conformal prediction using conditional random samples. arXiv preprint arXiv:2206.06584, 2022

  45. [45]

Conformal bounds on full-reference image quality for imaging inverse problems. arXiv preprint arXiv:2505.09528, 2025

Jeffrey Wen, Rizwan Ahmad, and Philip Schniter. Conformal bounds on full-reference image quality for imaging inverse problems. arXiv preprint arXiv:2505.09528, 2025

  46. [46]

Generative conformal prediction with vectorized nonconformity scores. arXiv preprint arXiv:2410.13735, 2024

Minxing Zheng and Shixiang Zhu. Generative conformal prediction with vectorized nonconformity scores. arXiv preprint arXiv:2410.13735, 2024

  47. [47]

    distance-to-ˆy

    Pointwise convergence:Assume there exist m >0 and a neighborhood U of ∂Cα(x) such that ∥∇S(y)∥ 2 ≥m for all y∈U . Then y(t) converges to a unique limit point y∞ ∈∂C α(x)and ∥y(t)−y ∞∥2 ≤ 1 m |S(y0)−τ α|e −λt for sufficiently larget. Proof.Step 1 (score convergence).By the chain rule, ε′(t) =S ′(y(t)) =∇S(y(t)) ⊤y′(t) =∇S(y(t)) ⊤bvα(y(t)). Substituting the...