pith. machine review for the scientific record.

arxiv: 2605.05435 · v1 · submitted 2026-05-06 · 💻 cs.LG · cs.NA · math.NA

Recognition: unknown

Active Learning for Conditional Generative Compressed Sensing

Authors on Pith · no claims yet

Pith reviewed 2026-05-08 17:21 UTC · model grok-4.3

classification 💻 cs.LG · cs.NA · math.NA
keywords active learning · conditional generative models · compressed sensing · Christoffel sampling · stable recovery · prompt conditioning · Fourier measurements · image recovery

The pith

Matching the sampling-design prompt to the recovery prompt preserves near-optimal recovery bounds in conditional generative compressed sensing for ReLU and Lipschitz generators.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that prompts play two distinct roles in conditional generative compressed sensing for recovering images from subsampled Fourier measurements: one shapes the sampling distribution via Christoffel sampling, and the other defines the recovery model. For generators satisfying ReLU or Lipschitz conditions, aligning these prompts leaves the Christoffel complexity constant unchanged from standard generative compressed sensing, yielding stable recovery without extra penalty terms. When the prompts differ, the bounds acquire an explicit compatibility penalty. Experiments with Stable Diffusion confirm that prompts reshape the sampling distributions and directly affect reconstruction quality. This positions prompts as independent design choices that separately control sensing, approximation, and recovery performance.
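The sampling-design role can be illustrated with a toy Christoffel-style scheme. Everything below — the synthetic spectral-envelope generator, the `prompt_decay` parameter, and the peak-energy density — is a hypothetical stand-in for illustration, not the paper's actual construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a conditional generator's range: Fourier-domain
# signals whose spectral decay is controlled by a "prompt" parameter.
# (Hypothetical construction for illustration only.)
def generator_range_samples(prompt_decay, n_samples=200, n=128):
    envelope = 1.0 / (1.0 + np.arange(n)) ** prompt_decay
    return rng.standard_normal((n_samples, n)) * envelope

def christoffel_density(range_fourier):
    # Leverage-score-style density: normalized per-frequency peak
    # energy over the sampled range elements.
    peak = np.max(np.abs(range_fourier) ** 2, axis=0)
    return peak / peak.sum()

mu = christoffel_density(generator_range_samples(prompt_decay=1.5))
m = 32  # measurement budget
measured = rng.choice(len(mu), size=m, replace=False, p=mu)
```

The point of the sketch is only that the prompt changes the range, the range changes the density `mu`, and `mu` decides which Fourier frequencies get measured.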

Core claim

For ReLU and Lipschitz conditional generators, prompt-matched Christoffel sampling retains the same Christoffel complexity constant as existing near-optimal generative compressed sensing theory, while prompt mismatch incurs an explicit compatibility penalty in the stable recovery bounds.

What carries the argument

The prompt-conditioned Christoffel sampling distribution, which selects measurements adapted to the generator's range and separates the sampling prompt from the recovery prompt.
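Separating the two prompts makes the mismatch penalty something one could measure. A minimal sketch, assuming a worst-case importance-ratio proxy for the compatibility factor (the densities and the `compatibility_factor` definition here are illustrative, not the paper's Λ):

```python
import numpy as np

def compatibility_factor(mu_sample, mu_recover, eps=1e-12):
    # Worst-case importance-weighting ratio between the recovery-prompt
    # density and the sampling-prompt density; equals 1 when they match.
    # (A proxy for the paper's penalty, not its exact definition.)
    return float(np.max(mu_recover / (mu_sample + eps)))

freqs = np.arange(64)

def prompt_density(decay):
    w = 1.0 / (1.0 + freqs) ** decay
    return w / w.sum()

mu_s = prompt_density(1.0)   # sampling-prompt density
mu_r = prompt_density(2.0)   # recovery-prompt density (mismatched)
matched = compatibility_factor(mu_s, mu_s)
mismatched = compatibility_factor(mu_s, mu_r)
```

Under this proxy, matched prompts give a factor of 1 and any mismatch strictly inflates it, mirroring the diagonal-minimal structure reported in Figure 1(b).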

If this is right

  • Recovery guarantees hold without degradation when the sampling prompt matches the recovery prompt.
  • Prompt mismatch adds a quantifiable penalty term that scales with the degree of incompatibility.
  • Prompts can be optimized separately for sampling design and model definition while maintaining computable distributions.
  • In practice, prompt choice influences both measurement distribution and image recovery accuracy, as seen in Stable Diffusion tests.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • This separation of prompt roles could support active learning methods that tune the sampling prompt to reduce the compatibility penalty without retraining the generator.
  • The framework may extend to conditional models in other domains like audio or video recovery under limited measurements.
  • Treating prompts as tunable variables suggests hybrid systems where sampling strategies adapt based on prompt compatibility metrics.

Load-bearing premise

The analysis assumes conditional generators satisfy ReLU or Lipschitz conditions and that the sampling prompt can be chosen independently while keeping the Christoffel distribution well-defined and computable.
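The Lipschitz premise is easy to sanity-check numerically. The toy ReLU "generator" below is a sketch under stated assumptions, not the paper's model: since ReLU is 1-Lipschitz, the composed map is globally Lipschitz with constant at most the spectral norm of the weight matrix.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ReLU "generator": G(z) = ReLU(W z). ReLU is 1-Lipschitz, so G is
# globally Lipschitz with constant at most the spectral norm of W.
W = rng.standard_normal((64, 16)) / np.sqrt(16)
G = lambda z: np.maximum(W @ z, 0.0)

# Empirical lower bound on the Lipschitz constant from random pairs.
z1 = rng.standard_normal((1000, 16))
z2 = rng.standard_normal((1000, 16))
ratios = [np.linalg.norm(G(a) - G(b)) / np.linalg.norm(a - b)
          for a, b in zip(z1, z2)]
L_emp = max(ratios)
L_upper = np.linalg.norm(W, 2)  # spectral norm upper bound
```

For deep conditional generators no such closed-form bound is available, which is precisely why the assumption is load-bearing.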

What would settle it

An experiment showing that recovery error bounds remain unchanged under prompt mismatch for a ReLU generator, or that the Christoffel complexity constant increases even with matched prompts.

Figures

Figures reproduced from arXiv: 2605.05435 by Alexander DeLise, Nick Dexter.

Figure 1
Figure 1. (a) Empirical sampling distributions µ̃c induced by the Christoffel function for prompts c ∈ {cuc, csb, cdb, cca}. Color indicates sampling probability over Fourier frequencies. (b) Empirical prompt compatibility factor Λ̃(cr, cs) (rows: sampling prompt, columns: recovery prompt). Diagonal entries, which represent κ̃, are minimal, while off-diagonal entries indicate the sampling–recovery mismatch penalty. … view at source ↗
Figure 2
Figure 2. Reconstruction quality across sampling percentages by sampling distribution for the in… view at source ↗
Figure 3
Figure 3. Reconstruction quality across sampling percentages by sampling distribution for the out-of… view at source ↗
Figure 4
Figure 4. Reconstruction quality across sampling percentages by sampling distribution for the in… view at source ↗
Figure 5
Figure 5. Empirical sampling distributions µ̃c induced by the Christoffel function for prompts c ∈ {csb, cdb, cca} across varying CFG scales. Color indicates sampling probability over Fourier frequencies. [Plot residue removed: PSNR (dB) and SSIM axes versus sampling ratio for µ̃uc, µ̃db, µ̃sb, µ̃ca.] view at source ↗
Figure 6
Figure 6. Reconstruction performance versus sampling percentage for in-range prompt-mismatched… view at source ↗
Figure 7
Figure 7. Reconstruction performance versus sampling percentage for out-of-range recovery under… view at source ↗
Figure 8
Figure 8. Reconstruction performance versus sampling percentage for in-range prompt-matched… view at source ↗
Figure 9
Figure 9. Best recovered images in the in-range prompt-mismatched experiment (Section 4.2.1). view at source ↗
Figure 10
Figure 10. Best recovered images in the out-of-range experiment (Section 4.2.2). view at source ↗
Figure 11
Figure 11. Best recovered images in the in-range prompt-matched experiment (Appendix C). Ground… view at source ↗
read the original abstract

Generative compressed sensing uses the range of a pretrained generator as a nonlinear model for recovering structured signals from limited measurements. We study a conditional version of this problem for image recovery from subsampled Fourier measurements using prompt-conditioned generative models. Our framework separates two roles of conditioning: the prompt used to design the sampling distribution and the prompt used to define the recovery model. For ReLU and Lipschitz conditional generators, we prove stable recovery bounds showing that prompt-matched Christoffel sampling retains the same Christoffel complexity constant as existing near-optimal generative compressed sensing theory, while prompt mismatch incurs an explicit compatibility penalty. Experiments with Stable Diffusion show that prompts meaningfully reshape Christoffel sampling distributions and influence image recovery. Overall, our results suggest that prompts should be treated as design variables with distinct effects on sensing, approximation, and recovery.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript develops a conditional generative compressed sensing framework for recovering images from subsampled Fourier measurements using prompt-conditioned generators. It explicitly separates the prompt used to design the Christoffel sampling distribution from the prompt defining the recovery model. For ReLU and Lipschitz conditional generators, stable recovery bounds are proved showing that prompt-matched Christoffel sampling preserves the Christoffel complexity constant of prior near-optimal generative CS theory, while prompt mismatch incurs an explicit compatibility penalty. Experiments with Stable Diffusion illustrate that prompts reshape the sampling distributions and influence recovery quality.

Significance. If the stated bounds hold without hidden assumptions, the work meaningfully extends generative compressed sensing to conditional models by positioning prompts as independent design variables for sensing versus recovery. This could enable more principled active sensing strategies when side information is available, with potential impact on applications requiring structured signal recovery under limited measurements.

major comments (2)
  1. [Theoretical results] Abstract and theoretical results: the claim that prompt-matched Christoffel sampling 'retains the same Christoffel complexity constant' is load-bearing for the central contribution, yet the abstract provides no explicit statement of the bound or the key steps showing the constant is identical rather than asymptotically equivalent; the full derivation (including any error terms for the ReLU/Lipschitz cases) must be supplied to confirm the extension is non-circular.
  2. [Experiments] Experiments: the abstract states that prompts 'meaningfully reshape Christoffel sampling distributions and influence image recovery,' but without reported quantitative metrics (e.g., recovery error, PSNR/SSIM tables, or ablation on matched vs. mismatched prompts) it is impossible to assess whether the compatibility penalty is observed in practice or remains purely theoretical.
minor comments (2)
  1. [Abstract] The title emphasizes 'Active Learning' while the abstract and claimed contributions focus on the separation of prompts and the recovery bounds; a brief sentence clarifying the active-learning interpretation of prompt selection would improve clarity.
  2. Notation for the compatibility penalty and the conditional Christoffel distribution should be introduced with a dedicated definition before the main theorem statements.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their careful reading and constructive feedback. We address each major comment below with clarifications from the manuscript and indicate planned revisions to enhance clarity and completeness.

read point-by-point responses
  1. Referee: [Theoretical results] Abstract and theoretical results: the claim that prompt-matched Christoffel sampling 'retains the same Christoffel complexity constant' is load-bearing for the central contribution, yet the abstract provides no explicit statement of the bound or the key steps showing the constant is identical rather than asymptotically equivalent; the full derivation (including any error terms for the ReLU/Lipschitz cases) must be supplied to confirm the extension is non-circular.

    Authors: We appreciate the referee drawing attention to the need for explicitness. The full derivations appear in Section 3. Theorem 3.1 (ReLU case) and Theorem 3.2 (Lipschitz case) prove that, when the prompt used for sampling matches the prompt used for recovery, the conditional Christoffel function reduces exactly to its unconditional counterpart; the compatibility penalty term vanishes identically, yielding the same complexity constant as in prior unconditional generative CS results with no additional error terms. The proofs proceed by direct substitution into the definition of the Christoffel measure and application of the same covering-number arguments used in the unconditional setting. To make this load-bearing claim transparent at the abstract level, we will revise the abstract to include a concise statement of the bound. revision: yes

  2. Referee: [Experiments] Experiments: the abstract states that prompts 'meaningfully reshape Christoffel sampling distributions and influence image recovery,' but without reported quantitative metrics (e.g., recovery error, PSNR/SSIM tables, or ablation on matched vs. mismatched prompts) it is impossible to assess whether the compatibility penalty is observed in practice or remains purely theoretical.

    Authors: We agree that quantitative metrics are necessary to substantiate the practical relevance of the compatibility penalty. The current experiments section provides visual evidence of how prompts alter the Christoffel sampling distributions and affect recovered images with Stable Diffusion, but does not include tabulated numerical results or explicit matched-versus-mismatched ablations. In the revision we will add a table reporting PSNR, SSIM, and relative recovery error across sampling rates for matched and mismatched prompt pairs, together with a short ablation subsection. This will allow readers to observe the penalty empirically. revision: yes
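The metrics promised in the rebuttal are standard. A minimal numpy sketch, using a simplified single-window SSIM (global statistics rather than the usual 11×11 Gaussian-windowed variant) for illustration:

```python
import numpy as np

def psnr(x, y, data_range=1.0):
    # Peak signal-to-noise ratio in dB.
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=1.0):
    # Simplified SSIM from global image statistics, not the usual
    # Gaussian-windowed local version — enough for a quick ablation.
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx**2 + my**2 + c1) * (x.var() + y.var() + c2)
    return num / den

rng = np.random.default_rng(1)
img = rng.random((64, 64))
noisy = np.clip(img + 0.05 * rng.standard_normal(img.shape), 0.0, 1.0)
print(f"PSNR: {psnr(img, noisy):.1f} dB, SSIM: {ssim_global(img, noisy):.3f}")
```

Tabulating these two numbers for matched versus mismatched prompt pairs across sampling rates would be enough to check whether the compatibility penalty shows up empirically.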

Circularity Check

0 steps flagged

No significant circularity; derivation extends external GCS theory

full rationale

The paper's central result is a set of stable recovery bounds that explicitly preserve the Christoffel complexity constant from prior near-optimal generative compressed sensing theory while adding an explicit penalty term for prompt mismatch. These bounds are stated under declared assumptions (ReLU/Lipschitz generators and independent sampling vs. recovery prompts) rather than being fitted to data or defined in terms of the target quantities. No equation or step reduces by construction to a self-defined parameter, a fitted input renamed as prediction, or a load-bearing self-citation chain; the analysis treats the existing GCS complexity constant as an external benchmark. The separation of prompt roles is presented as an explicit design choice, not smuggled in via ansatz or prior self-work.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The central claim rests on the generators being ReLU or Lipschitz continuous and on the existence of a well-defined Christoffel sampling distribution induced by the prompt-conditioned generator.

axioms (2)
  • domain assumption Conditional generators are ReLU networks or Lipschitz continuous functions
    Invoked to prove the stable recovery bounds for matched and mismatched prompts.
  • domain assumption Christoffel sampling distribution can be defined from the prompt-conditioned generator range
    Required for the sampling design part of the framework.

pith-pipeline@v0.9.0 · 5430 in / 1312 out tokens · 51063 ms · 2026-05-08T17:21:12.986232+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

53 extracted references · 3 canonical work pages · 1 internal anchor

  1. [1] David L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006.
  2. [2] Guang-Hong Chen, Jie Tang, and Shuai Leng. Prior image constrained compressed sensing (PICCS). Medical Physics, 35(2):660–663, 2008.
  3. [3] Michael Lustig, David L. Donoho, and John M. Pauly. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine, 58(6):1182–1195, 2007.
  4. [4] Marco F. Duarte, Mark A. Davenport, Dharmpal Takhar, Jason N. Laska, Ting Sun, Kevin F. Kelly, and Richard G. Baraniuk. Single-pixel imaging via compressive sampling. IEEE Signal Processing Magazine, 25(2):83–91, 2008.
  5. [5] Xin Yuan, David J. Brady, and Aggelos K. Katsaggelos. Snapshot compressive imaging: Theory, algorithms, and applications. IEEE Signal Processing Magazine, 38(2):65–88, 2021.
  6. [6] Felix J. Herrmann, Michael P. Friedlander, and Özgür Yılmaz. Fighting the curse of dimensionality: Compressive sensing in exploration seismology. IEEE Signal Processing Magazine, 29(3):88–100, 2012.
  7. [7] Emmanuel J. Candès, Justin K. Romberg, and Terence Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, 2006.
  8. [8] Emmanuel J. Candès, Justin K. Romberg, and Terence Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207–1223, 2006.
  9. [9] Robert Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.
  10. [10] Trevor Hastie, Robert Tibshirani, and Martin Wainwright. Statistical Learning with Sparsity: The Lasso and Generalizations. CRC Press, 2015.
  11. [11] Yagyensh Chandra Pati, Ramin Rezaiifar, and Perinkulam S. Krishnaprasad. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In Asilomar Conference on Signals, Systems, and Computers, 1993.
  12. [12] Joel A. Tropp and Anna C. Gilbert. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on Information Theory, 53(12):4655–4666, 2007.
  13. [13] Scott S. Chen, David L. Donoho, and Michael A. Saunders. Atomic decomposition by basis pursuit. SIAM Review, 43(1):129–159, 2001.
  14. [14] Ben Adcock and Anders C. Hansen. Compressive Imaging: Structure, Sampling, Learning. Cambridge University Press, 2021.
  15. [15] Ashish Bora, Ajil Jalal, Eric Price, and Alexandros G. Dimakis. Compressed sensing using generative models. In International Conference on Machine Learning, 2017.
  16. [16] Akshay Kamath, Sushrut Karmalkar, and Eric Price. Lower bounds for compressed sensing with generative models. In NeurIPS 2019 Workshop on Solving Inverse Problems with Deep Networks, 2019.
  17. [17] Akshay Kamath, Eric Price, and Sushrut Karmalkar. On the power of compressed sensing with generative models. In International Conference on Machine Learning, 2020.
  18. [18] Zhaoqiang Liu and Jonathan Scarlett. Information-theoretic lower bounds for compressive sensing with generative models. IEEE Journal on Selected Areas in Information Theory, 1(1):292–303, 2020.
  19. [19] Ajil Jalal, Liu Liu, Alexandros G. Dimakis, and Constantine Caramanis. Robust compressed sensing using generative models. In Advances in Neural Information Processing Systems, 2020.
  20. [20] Junren Chen, Jonathan Scarlett, Michael Ng, and Zhaoqiang Liu. A unified framework for uniform signal recovery in nonlinear generative compressed sensing. In Advances in Neural Information Processing Systems, 2023.
  21. [21] Michael Lustig, David L. Donoho, Juan M. Santos, and John M. Pauly. Compressed sensing MRI. IEEE Signal Processing Magazine, 25(2):72–82, 2008.
  22. [22] Ben Adcock, Anders C. Hansen, Clarice Poon, and Bogdan Roman. Breaking the coherence barrier: A new theory for compressed sensing. Forum of Mathematics, Sigma, 5:e4, 2017.
  23. [23] Felix Krahmer, Deanna Needell, and Rachel Ward. Compressive sensing with redundant dictionaries and structured measurements. SIAM Journal on Mathematical Analysis, 47(6):4606–4629, 2015.
  24. [24] Clarice Poon. Structure dependent sampling in compressed sensing: theoretical guarantees for tight frames. Applied and Computational Harmonic Analysis, 42(3):402–451, 2017.
  25. [25] Ben Adcock, Juan M. Cardenas, and Nick Dexter. CS4ML: A general framework for active learning with arbitrary data based on Christoffel functions. In Advances in Neural Information Processing Systems, 2023.
  26. [26] Aaron Berk, Simone Brugiapaglia, Yaniv Plan, Matthew S. Scott, Xia Sheng, and Özgür Yılmaz. Model-adapted Fourier sampling for generative compressed sensing. In NeurIPS 2023 Workshop on Deep Learning and Inverse Problems, 2023.
  27. [27] Aaron Berk, Simone Brugiapaglia, Babhru Joshi, Yaniv Plan, Matthew S. Scott, and Özgür Yılmaz. A coherence parameter characterizing generative compressed sensing with Fourier measurements. IEEE Journal on Selected Areas in Information Theory, 3(3):502–512, 2022.
  28. [28] Yaniv Plan, Matthew S. Scott, Xia Sheng, and Özgür Yılmaz. Denoising guarantees for optimized sampling schemes in compressed sensing. arXiv preprint arXiv:2504.01046, 2025.
  29. [29] Siheng Chen, Rohan Varma, Aarti Singh, and Jelena Kovačević. A statistical perspective of sampling scores for linear regression. In IEEE International Symposium on Information Theory, 2016.
  30. [30] Michal Derezinski, Manfred K. Warmuth, and Daniel J. Hsu. Leveraged volume sampling for linear regression. In Advances in Neural Information Processing Systems, 2018.
  31. [31] Bruno Ordozgoiti, Antonis Matakos, and Aristides Gionis. Generalized leverage scores: Geometric interpretation and applications. In International Conference on Machine Learning, 2022.
  32. [32] Tamás Erdélyi, Cameron Musco, and Christopher Musco. Fourier sparse leverage scores and approximate kernel learning. In Advances in Neural Information Processing Systems, 2020.
  33. [33] Haim Avron, Michael Kapralov, Cameron Musco, Christopher Musco, Ameya Velingker, and Amir Zandieh. A universal sampling method for reconstructing signals with simple Fourier transforms. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, pages 1051–1063, 2019.
  34. [34] Emmanuel J. Candès. Compressive sampling. In Proceedings of the International Congress of Mathematicians Madrid, pages 1433–1452, 2007.
  35. [35] Felix Krahmer and Rachel Ward. Stable and robust sampling strategies for compressive imaging. IEEE Transactions on Image Processing, 23(2):612–622, 2013.
  36. [36] Gilles Puy, Pierre Vandergheynst, and Yves Wiaux. On variable density compressive sampling. IEEE Signal Processing Letters, 18(10):595–598, 2011.
  37. [37] Hyungjin Chung, Dohun Lee, Zihui Wu, Byung-Hoon Kim, Katherine L. Bouman, and Jong Chul Ye. ContextMRI: Enhancing compressed sensing MRI through metadata conditioning. arXiv preprint arXiv:2501.04284, 2025.
  38. [38] Hyungjin Chung, Jong Chul Ye, Peyman Milanfar, and Mauricio Delbracio. Prompt-tuning latent diffusion models for inverse problems. In International Conference on Machine Learning, 2024.
  39. [39] Jeongsol Kim, Geon Yeong Park, Hyungjin Chung, and Jong Chul Ye. Regularization by texts for latent diffusion inverse solvers. In International Conference on Learning Representations, 2025.
  40. [40] Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021.
  41. [41] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, 2020.
  42. [42] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, 2015.
  43. [43] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, 2014.
  44. [44] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
  45. [45] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations, 2014.
  46. [46] Ben Adcock. Optimal sampling for least-squares approximation. Foundations of Computational Mathematics, 25:1975–2034, 2025.
  47. [47] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.
  48. [48] Patrick von Platen, Suraj Patil, Anton Lozhkov, Pedro Cuenca, Nathan Lambert, Kashif Rasul, Mishig Davaadorj, Dhruv Nair, Sayak Paul, William Berman, Yiyi Xu, Steven Liu, and Thomas Wolf. Diffusers: State-of-the-art diffusion models. https://github.com/huggingface/diffusers, 2022.
  49. [49] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations, 2021.
  50. [50] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. In NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications, 2021.
  51. [51] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
  52. [52] Magnific. Sunset Time Tropical Beach Sea with Coconut Palm Tree. https://www.magnific.com/free-photo/sunset-time-tropical-beach-sea-with-coconut-palm-tree_3531881.htm, n.d. Accessed: 2026-05-02.
  53. [53] Alireza Naderi and Yaniv Plan. Beyond independent measurements: General compressed sensing with GNN application. In NeurIPS 2021 Workshop on Deep Learning and Inverse Problems, 2021.