pith. machine review for the scientific record.

arxiv: 2605.01568 · v1 · submitted 2026-05-02 · 💻 cs.CV

Recognition: unknown

Unifying Deep Stochastic Processes for Image Enhancement

Kamil Adamczewski, Karol Szczypkowski, Maciej Zięba, Radosław Kuczbański, Wojciech Kozłowski

Pith reviewed 2026-05-09 14:05 UTC · model grok-4.3

classification 💻 cs.CV
keywords stochastic processes · image enhancement · diffusion models · Ornstein-Uhlenbeck · diffusion bridges · stochastic differential equations · unification

The pith

All stochastic image enhancement methods arise from one shared stochastic differential equation, differing mainly in drift and diffusion terms, terminal distributions, and boundary conditions.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper shows that recent conditional stochastic methods for tasks like denoising and restoration fall into three families—unconditional diffusion models, Ornstein-Uhlenbeck processes, and diffusion bridges—that all derive from the same underlying continuous-time SDE. This common formulation reveals that apparent differences between methods reduce to choices of drift and diffusion coefficients, terminal distributions, and boundary conditions, while schedulers and samplers remain independent. A reader would care because the unification supports controlled experiments that hold model architecture and training protocol fixed, isolating the effect of the stochastic process itself. The resulting study finds no single family dominates across enhancement tasks. It also supplies a modular implementation that lets researchers swap components quickly and compare fairly.
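Written out, the shared formulation is a single Itô SDE whose drift and terminal behavior pick out each family. A sketch in standard score-based-SDE notation (the symbols below are assumed conventions, not copied from the paper):

```latex
% Common forward SDE: x_t = image state, y = degraded input, w_t = Wiener process
\mathrm{d}x_t = f(x_t, y, t)\,\mathrm{d}t + g(t)\,\mathrm{d}w_t
% Unconditional diffusion (VP-type): drift ignores y, terminal is pure noise
f(x_t, y, t) = -\tfrac{1}{2}\beta(t)\,x_t, \qquad x_T \sim \mathcal{N}(0, I)
% Ornstein-Uhlenbeck: mean-reverting drift toward y, terminal centered at y
f(x_t, y, t) = \theta(t)\,(y - x_t), \qquad x_T \sim \mathcal{N}(y, \tau^2 I)
% Diffusion bridge: a Doob h-transform term pins the endpoint x_T = y exactly
f(x_t, y, t) = f_0(x_t, t) + g(t)^2\,\nabla_{x_t} \log p\big(x_T = y \mid x_t\big)
```

On this reading, the three families share the first line and differ only in which drift and boundary behavior is substituted, which is exactly the paper's claim.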

Core claim

Unconditional diffusion models, Ornstein-Uhlenbeck processes, and diffusion bridges used for image enhancement all emerge from a single SDE. They differ principally in their drift and diffusion terms, terminal distributions, and boundary conditions. Schedulers and samplers act as orthogonal design choices. When the same network architectures and training protocols are applied across these families, no process type consistently outperforms the others; instead, specific choices within the SDE control most performance variation.

What carries the argument

The common stochastic differential equation (SDE) that parametrizes the three process families through choices of drift function, diffusion function, terminal distribution, and boundary conditions.
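A toy numerical sketch of that parametrization (hypothetical code, not the paper's ItoVision library; coefficient names beta, theta, tau are assumed): one Euler-Maruyama integrator is reused, only the (drift, diffusion) pair changes, and that change alone moves the terminal distribution from N(0, 1) to N(y, τ²).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(drift, diffusion, x0, n_steps=1000, T=1.0):
    """Euler-Maruyama integration of dx = drift(x, t) dt + diffusion(t) dW."""
    dt = T / n_steps
    x = x0.copy()
    for i in range(n_steps):
        t = i * dt
        x += drift(x, t) * dt + diffusion(t) * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

x0 = np.full(20000, 2.0)   # "clean image" as a scalar toy signal
y = -1.0                   # "degraded input" the process may condition on

# Unconditional diffusion (VP-type): drift ignores y, terminal is near N(0, 1)
beta = 8.0
diff_end = simulate(lambda x, t: -0.5 * beta * x, lambda t: np.sqrt(beta), x0)

# Ornstein-Uhlenbeck: mean-reverting toward y, terminal is near N(y, tau^2)
theta, tau = 8.0, 0.5
ou_end = simulate(lambda x, t: theta * (y - x), lambda t: tau * np.sqrt(2 * theta), x0)

print(diff_end.mean(), diff_end.std())  # close to 0 and 1
print(ou_end.mean(), ou_end.std())      # close to y and tau
```

A diffusion bridge would fit the same template with a drift that steers the trajectory to hit y exactly at the terminal time, at the cost of a time-dependent guidance term.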

If this is right

  • Performance differences trace mainly to specific drift and diffusion terms plus boundary conditions rather than to the choice of process family.
  • Schedulers and samplers can be selected independently of the core SDE family.
  • No single family is superior across tasks once architectures are matched.
  • Key design choices become separable and can be optimized in isolation.
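The second and fourth points can be illustrated numerically (an illustrative sketch with assumed names, not the authors' code): the same drift family and the same Euler-Maruyama sampler are reused while only the noise schedule β(t) is swapped, and both schedules drive the marginal toward N(0, 1), differing only in how noise is allocated over time.

```python
import numpy as np

rng = np.random.default_rng(1)

def em_sample(beta, x0, n_steps=1000, T=1.0):
    """One generic sampler for dx = -0.5 beta(t) x dt + sqrt(beta(t)) dW."""
    dt = T / n_steps
    x = x0.copy()
    for i in range(n_steps):
        t = i * dt
        x += -0.5 * beta(t) * x * dt + np.sqrt(beta(t) * dt) * rng.standard_normal(x.shape)
    return x

x0 = np.full(20000, 3.0)
linear = em_sample(lambda t: 0.1 + 19.9 * t, x0)   # linear schedule
constant = em_sample(lambda t: 10.0, x0)           # flat schedule

# Same family, same sampler, different scheduler -> same terminal statistics
print(linear.mean(), linear.std(), constant.mean(), constant.std())
```

The scheduler here is an argument to the sampler, not part of its definition, which is the separability the bullet points describe.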

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • Researchers could now systematically combine elements such as the best drift from one family with the boundary condition from another to create hybrids.
  • The same SDE lens might be applied to related tasks like video restoration or medical-image denoising to test whether the same design patterns hold.
  • Theoretical work could derive which terminal distributions or boundary conditions best match particular degradation models, such as blur versus noise.

Load-bearing premise

That the three families cover essentially all recent stochastic enhancement methods, and that identical architectures and training protocols isolate the stochastic-process choice without hidden implementation differences.

What would settle it

Discovery of a new stochastic enhancement method whose trajectory cannot be written as one of the three families inside the shared SDE, or a controlled re-run in which swapping only the process family reverses performance rankings in a manner not explained by the identified drift or boundary terms.

Figures

Figures reproduced from arXiv: 2605.01568 by Kamil Adamczewski, Karol Szczypkowski, Maciej Zięba, Radosław Kuczbański, Wojciech Kozłowski.

Figure 1
Figure 1. 1D visualization of the three classes of considered methods and how conditioning with y affects them. Left: unconditional processes gradually perturb x0 into Gaussian noise N(0, 1), independently of y. Middle: Ornstein–Uhlenbeck processes converge to a terminal distribution centered at y with variance τ². Right: diffusion bridges start at x0 and are conditioned to reach y at the terminal time. For ease of v…
Figure 2
Figure 2. (No caption extracted.)
Figure 3
Figure 3. Transition variance Var(xt|x0) for all considered Ornstein–Uhlenbeck processes. Left: original temperatures τ and schedulers βt. Next: standardized τ with original schedulers. Middle to right: standardized τ with specific schedulers. ResShift has by far the highest temperature, but IR-SDE reaches the maximum noise level faster than the other methods. InDI has the lowest temperature and adds the noi…
Figure 4
Figure 4. Comparison of diffusion and three OU processes: InDI, IR-SDE, and ResShift with a progressively higher temperature parameter τ, to study how temperature influences the collinearity of xt, y, x̂0|t, and how this collinearity, in turn, affects the LPIPS metric. Left: collinearity of the normalized vectors xt − y and x̂0|t − xt. InDI, with the lowest τ, is trained to extrapolate the xt − y vector, especially at t…
Figure 5
Figure 5. Image quality for the InDI and ResShift models across different temperature settings. Image quality generally improves as the temperature increases. With the default temperature values, ResShift produces noticeably higher-quality results than InDI. The results are grouped by the number of diffusion steps used during inference. On the left, we used ancestral sampling, which is the original sampler in ResShi…
Figure 6
Figure 6. Visual comparison of image super-resolution results on the FFHQ dataset, generated using ancestral sampling with 35 steps. All methods achieve similar visual quality, except BBDM and GOUB, which produce slightly blurred outputs.
Figure 7
Figure 7. Visual comparison of image deraining on the Rain1400 dataset using ancestral sampling with 35 steps. All outputs appear similar, but diffusion and flow matching tend to leave slightly more visible traces of raindrops.
Figure 8
Figure 8. Visual comparison of low-light image enhancement on the LOL dataset using ancestral sampling with 35 steps. Most methods produce images of similar quality, while InDI performs noticeably worse. All methods still struggle to fully recover the ground truth.
Figure 9
Figure 9. Visual comparison of image colorization on ImageNet, generated using ancestral sampling with 35 steps. In most cases, methods recover the correct canonical colors (e.g., green grass, blue sky, orange fruit). However, there are some exceptions, such as DDPM producing green tires and most methods struggling with the colors of the ladybug. Flow Matching and ResShift produce outputs with higher satura…
Original abstract

Deep stochastic processes have recently become a central paradigm for image enhancement, with many methods explicitly conditioning the stochastic trajectory on the degraded input. However, the relationship between these conditional processes and standard diffusion models remains unclear. In this work, we introduce a unified perspective on stochastic image enhancement by classifying recent methods into three families of continuous-time processes: unconditional diffusion models, Ornstein-Uhlenbeck (OU) processes, and diffusion bridges. We show that all of these approaches arise from a common stochastic differential equation (SDE) formulation. This framework makes explicit that seemingly disparate methods differ primarily in their drift and diffusion terms, terminal distributions, and boundary conditions, while schedulers and samplers constitute orthogonal design choices. Leveraging this unification, we conduct a controlled empirical study across multiple image enhancement tasks using identical architectures and training protocols. Our results reveal no consistently dominant method; instead, we identify and disentangle the specific design choices that most strongly influence performance. Finally, we release ItoVision, a modular PyTorch library that implements the unified framework and enables rapid prototyping and fair comparison of stochastic image enhancement methods.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 2 minor

Summary. The manuscript claims that recent deep stochastic processes for image enhancement can be unified under a single SDE framework, with unconditional diffusion models, Ornstein-Uhlenbeck processes, and diffusion bridges arising as special cases that differ primarily in their drift and diffusion coefficients, terminal distributions, and boundary conditions (while schedulers and samplers are orthogonal). It supports the unification with a controlled empirical study that trains identical architectures under fixed protocols on multiple image enhancement tasks, finding no consistently dominant family, and releases the ItoVision PyTorch library to enable fair comparisons.

Significance. If the unification is complete and the empirical controls succeed in isolating SDE effects, the work supplies a useful organizing lens for the field, shifting attention from superficial methodological differences to the load-bearing choices in drift, diffusion, and conditioning. The modular library is a concrete asset for reproducibility and rapid experimentation.

major comments (1)
  1. [Empirical study] The claim that 'identical architectures and training protocols' isolate the stochastic process itself requires explicit verification that the conditioning mechanism on the degraded input (concatenation, cross-attention, or time-dependent injection) is implemented identically across the three families. Different families conventionally employ distinct conditioning routes; any residual mismatch would undermine attribution of performance differences to drift/diffusion/terminal choices alone.
minor comments (2)
  1. [Abstract] The phrase 'recent methods' is left broad; a parenthetical list of representative papers from each family would help readers assess coverage.
  2. [Notation] Ensure that the common SDE (presumably Eq. (X) in the unification section) is written once with all variable terms (drift, diffusion, terminal) labeled so that later specializations can be referenced by name rather than re-derived.

Simulated Authors' Rebuttal

1 response · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. We address the major comment point by point below and have made revisions to strengthen the empirical study section.

Point-by-point responses
  1. Referee: Empirical study section: The claim that 'identical architectures and training protocols' isolate the stochastic process itself requires explicit verification that the conditioning mechanism on the degraded input (concatenation, cross-attention, or time-dependent injection) is implemented identically across the three families. Different families conventionally employ distinct conditioning routes; any residual mismatch would undermine attribution of performance differences to drift/diffusion/terminal choices alone.

    Authors: We thank the referee for highlighting this important point regarding the isolation of SDE effects. We agree that explicit verification of the conditioning mechanism is necessary. In the unified framework, conditioning on the degraded input is treated as orthogonal to the choice of drift, diffusion, terminal distribution, and boundary conditions. In our controlled experiments, we implemented conditioning identically across all three families by concatenating the degraded input image channel-wise with the current noisy state at each timestep before feeding it into the shared network architecture. This concatenation-based injection was chosen for compatibility with the common SDE formulation and was applied uniformly in the code base for unconditional diffusion models, Ornstein-Uhlenbeck processes, and diffusion bridges. To make this explicit, we have revised the Empirical Study section to include a dedicated paragraph describing the conditioning implementation, along with a table confirming that the same mechanism (concatenation) was used for every family and task. We believe this revision fully addresses the concern and supports attribution of results to the SDE components. revision: yes
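The conditioning route the rebuttal describes can be sketched in a few lines (a minimal sketch with assumed array shapes, not the authors' code base): the degraded input y is concatenated channel-wise with the current noisy state x_t before the shared network sees it, and the same routine is reused regardless of which SDE family produced x_t.

```python
import numpy as np

def condition(x_t, y):
    """Channel-wise concatenation: (C, H, W) + (C, H, W) -> (2C, H, W)."""
    assert x_t.shape == y.shape
    return np.concatenate([x_t, y], axis=0)

x_t = np.zeros((3, 64, 64))   # current state of the stochastic process
y = np.ones((3, 64, 64))      # degraded input image
net_in = condition(x_t, y)    # what the shared denoising network receives
print(net_in.shape)           # (6, 64, 64), independent of the SDE family
```

Because the mechanism is a fixed pre-processing step outside the SDE, holding it identical across families is what licenses attributing performance differences to the process itself.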

Circularity Check

0 steps flagged

No circularity: unification rests on standard SDE and new controlled experiments

Full rationale

The paper's core derivation classifies methods into unconditional diffusion, OU processes, and diffusion bridges, then states they arise from one SDE by differing only in drift/diffusion terms, terminal distributions, and boundary conditions. This follows directly from the general Itô SDE form in stochastic calculus (no self-derived inputs or fitted parameters renamed as predictions). The empirical study runs fresh trainings under fixed architectures and protocols rather than reusing prior constants. No self-citation chains, uniqueness theorems from the authors, or ansatz smuggling appear as load-bearing steps. The framework is externally grounded and self-contained.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on standard SDE theory and the assumption that the three families cover the relevant methods; no new free parameters, axioms beyond standard math, or invented entities are introduced.

axioms (1)
  • [standard math] Standard existence and uniqueness results for solutions of stochastic differential equations
    Invoked when stating that all methods arise from a common SDE formulation.

pith-pipeline@v0.9.0 · 5511 in / 1274 out tokens · 28772 ms · 2026-05-09T14:05:34.415261+00:00 · methodology

