HDR-VDP-3: A multi-metric for predicting image differences, quality and contrast distortions in high dynamic range and regular content. arXiv preprint arXiv:2304.13625
8 Pith papers cite this work. Polarity classification is still indexing.
fields: cs.CV (8)
years: 2026 (8)
verdicts: UNVERDICTED (8)

representative citing papers
citing papers explorer
-
Single-Shot HDR Recovery via a Video Diffusion Prior
Single-shot HDR is achieved by conditioning a video diffusion model on an LDR input to generate an exposure bracket and fusing the bracket with per-pixel weights from a lightweight UNet.
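The fusion step this summary describes (divide each bracketed frame by its exposure time, then blend with per-pixel weights) can be sketched as below. This is a generic weighted HDR merge, not the paper's implementation: the paper predicts the weights with a lightweight UNet, whereas here they are simple hand-crafted validity masks, and all names are illustrative.

```python
import numpy as np

def merge_bracket(ldr_stack, exposure_times, weights):
    """Merge an exposure bracket into a linear HDR estimate.

    ldr_stack:      (N, H, W) linearized LDR frames in [0, 1]
    exposure_times: (N,) exposure time of each frame
    weights:        (N, H, W) per-pixel fusion weights (UNet-predicted
                    in the paper; supplied externally in this sketch)
    """
    radiance = ldr_stack / exposure_times[:, None, None]  # per-frame radiance estimate
    num = (weights * radiance).sum(axis=0)
    den = weights.sum(axis=0) + 1e-8                      # avoid division by zero
    return num / den

# Toy usage: three synthetic exposures of a constant-radiance scene.
times = np.array([0.25, 1.0, 4.0])
scene = 0.5 * np.ones((4, 4))
stack = np.clip(scene[None] * times[:, None, None], 0.0, 1.0)
valid = (stack > 0.05) & (stack < 0.95)   # down-weight clipped/noisy pixels
hdr = merge_bracket(stack, times, valid.astype(float))
```

The longest exposure saturates and is masked out, so the merge recovers the true radiance of 0.5 from the two well-exposed frames.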
-
LatentHDR: Decoupling Exposure from Diffusion via Conditional Latent-to-Latent Mapping for Text/Image-to-Panoramic HDR
LatentHDR generates structurally consistent panoramic HDR images by producing one scene latent with a diffusion backbone, then deterministically mapping it to multiple exposure latents via a lightweight conditional head.
-
ExpoCM: Exposure-Aware One-Step Generative Single-Image HDR Reconstruction
ExpoCM enables fast one-step single-image HDR reconstruction via exposure-dependent perturbations and region-conditioned consistency trajectories derived from a probability flow ODE.
-
High-Speed Full-Color HDR Imaging via Unwrapping Modulo-Encoded Spike Streams
An exposure-decoupled modulo formulation and iteration-free diffusion-prior unwrapping enable 1000 FPS full-color HDR imaging on spike cameras while cutting bandwidth from 20 Gbps to 6 Gbps.
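Modulo-encoded capture records irradiance wrapped modulo a saturation level, so HDR recovery reduces to estimating the integer wrap count per pixel. A minimal 1-D sketch of the idea, using a classic smoothness heuristic rather than the paper's diffusion-prior unwrapping (all names illustrative):

```python
import numpy as np

def unwrap_row(wrapped, m):
    """Unwrap a 1-D modulo-m signal, assuming neighboring samples differ
    by less than m/2 (phase-unwrapping heuristic; the paper replaces this
    assumption with a learned diffusion prior)."""
    out = wrapped.astype(float).copy()
    k = 0.0
    for i in range(1, len(out)):
        d = wrapped[i] - wrapped[i - 1]
        if d > m / 2:        # apparent jump up: signal actually wrapped down
            k -= m
        elif d < -m / 2:     # apparent jump down: signal wrapped past m
            k += m
        out[i] = wrapped[i] + k
    return out

signal = np.array([10, 60, 110, 160, 210])  # true high-dynamic-range ramp
m = 128                                      # sensor saturation level
wrapped = signal % m                         # what the modulo sensor records
recovered = unwrap_row(wrapped, m)
```

The wrapped stream stays within sensor range while the unwrapped result restores the full ramp, which is why modulo encoding also reduces per-frame bandwidth.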
-
LumaFlux: Lifting 8-Bit Worlds to HDR Reality with Physically-Guided Diffusion Transformers
LumaFlux is a physically and perceptually guided diffusion transformer for SDR-to-HDR conversion that introduces PGA, PCM, and HDR Residual Coupler modules plus a new training corpus and benchmark, outperforming prior ITM methods.
-
HDR Video Generation via Latent Alignment with Logarithmic Encoding
HDR video generation is achieved by logarithmically encoding HDR imagery to align with pretrained generative model latents, enabling minimal fine-tuning and degradation-based inference of missing content.
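Logarithmic encoding compresses linear HDR radiance into a bounded range resembling the display-referred imagery a pretrained generative model saw during training. The paper's exact transfer function is not given in this summary; the sketch below is a generic invertible log mapping over six decades of dynamic range:

```python
import numpy as np

DECADES = 6  # assumed dynamic range covered by the encoding (illustrative)

def log_encode(hdr, peak):
    """Map linear radiance in (0, peak] to [0, 1] so that each decade of
    radiance occupies an equal slice of the encoded range."""
    floor = peak * 10.0 ** (-DECADES)
    hdr = np.clip(hdr, floor, peak)             # guard against log(0)
    return np.log(hdr / floor) / np.log(10.0 ** DECADES)

def log_decode(enc, peak):
    """Inverse of log_encode."""
    floor = peak * 10.0 ** (-DECADES)
    return floor * np.exp(enc * np.log(10.0 ** DECADES))

x = np.array([0.01, 1.0, 100.0])   # radiance spanning four decades
enc = log_encode(x, peak=100.0)
rec = log_decode(enc, peak=100.0)
```

Because the mapping is bounded and invertible, the generative model can be fine-tuned in the encoded domain and its outputs decoded back to linear HDR.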
-
DiffHDR: Re-Exposing LDR Videos with Video Diffusion Models
DiffHDR converts LDR videos to HDR by formulating the task as generative radiance inpainting in a video diffusion model's latent space, using Log-Gamma encoding and synthesized training data to achieve better fidelity and stability than prior methods.
-
FDIM: A Feature-distance-based Generic Video Quality Metric for Versatile Codecs
FDIM is a new hybrid feature-distance video quality metric, trained on over 16k sequences, that shows strong generalization and correlation with human judgments across ten unseen SDR/HDR datasets and diverse codecs.