pith. machine review for the scientific record.

arxiv: 1808.04560 · v1 · submitted 2018-08-14 · 💻 cs.CV

Recognition: unknown

Deep Retinex Decomposition for Low-Light Enhancement

Authors on Pith: no claims yet
classification 💻 cs.CV
keywords decomposition, enhancement, illumination, low-light, reflectance, image, learned, adjustment
read the original abstract

The Retinex model is an effective tool for low-light image enhancement. It assumes that an observed image can be decomposed into reflectance and illumination. Most existing Retinex-based methods rely on carefully designed hand-crafted constraints and parameters for this highly ill-posed decomposition, which may be limited by model capacity when applied across varied scenes. In this paper, we collect a LOw-Light dataset (LOL) containing low/normal-light image pairs and propose a deep Retinex-Net learned on this dataset, comprising a Decom-Net for decomposition and an Enhance-Net for illumination adjustment. During training, Decom-Net has no ground truth for the decomposed reflectance and illumination; the network is learned with only key constraints, namely the consistent reflectance shared by paired low/normal-light images and the smoothness of illumination. Based on the decomposition, subsequent lightness enhancement is performed on the illumination by Enhance-Net, while a denoising operation on the reflectance provides joint denoising. Retinex-Net is end-to-end trainable, so the learned decomposition is inherently well suited to lightness adjustment. Extensive experiments demonstrate that our method not only achieves visually pleasing quality for low-light enhancement but also provides a good representation of image decomposition.
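The two Decom-Net constraints described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's exact loss formulation: the function name `retinex_losses` is invented, the smoothness term is plain total variation (the paper uses a structure-aware weighted variant), and the toy tensors stand in for network outputs.

```python
import numpy as np

def retinex_losses(R_low, R_normal, L_low, L_normal):
    """Illustrative versions of Decom-Net's two key constraints.

    R_*: reflectance maps, shape (H, W, 3); L_*: illumination maps, (H, W).
    """
    # Reflectance consistency: paired low/normal-light images of the same
    # scene should share one reflectance map.
    consistency = np.mean(np.abs(R_low - R_normal))

    # Illumination smoothness: penalize spatial gradients of illumination
    # (plain total variation here; the paper weights it by reflectance edges).
    def tv(x):
        return np.mean(np.abs(np.diff(x, axis=0))) + np.mean(np.abs(np.diff(x, axis=1)))
    smoothness = tv(L_low) + tv(L_normal)

    return consistency, smoothness

# Retinex view of image formation: an observed image S is modeled as
# reflectance times illumination, S = R * L.
H, W = 4, 4
R = np.full((H, W, 3), 0.8)      # shared reflectance for both exposures
L = np.full((H, W), 0.2)         # dim but spatially smooth illumination
S_low = R * L[..., None]         # synthesized low-light observation

# Identical reflectance and constant illumination give zero loss for both terms.
c, s = retinex_losses(R, R, L, np.full((H, W), 1.0))
```

In this idealized setting both losses vanish; during training they instead push the network toward decompositions that satisfy the constraints only approximately.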

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 25 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Leveraging Multimodal Large Language Models for All-in-One Image Restoration via a Mixture of Frequency Experts

    cs.CV 2026-05 unverdicted novelty 8.0

    An MLLM-guided architecture with a mixture of frequency experts and relational alignment loss achieves state-of-the-art all-in-one image restoration, outperforming prior methods by up to 1.35 dB on the CDD11 dataset.

  2. Degradation-Aware Adaptive Context Gating for Unified Image Restoration

    cs.CV 2026-05 unverdicted novelty 7.0

    DACG-IR adds a lightweight degradation-aware module that generates prompts to adaptively gate attention temperature, output features, and spatial-channel fusion in an encoder-decoder network for unified image restoration.

  3. From Zero to Detail: A Progressive Spectral Decoupling Paradigm for UHD Image Restoration with New Benchmark

    cs.CV 2026-04 unverdicted novelty 7.0

    A new framework called ERR decomposes UHD image restoration into three frequency stages with specialized sub-networks and introduces the LSUHDIR benchmark dataset of over 82,000 images.

  4. M3D-Stereo: A Multiple-Medium and Multiple-Degradation Dataset for Stereo Image Restoration

    cs.CV 2026-04 accept novelty 7.0

    M3D-Stereo supplies 7904 aligned stereo pairs across four multi-degradation scenarios with six progressive levels and pixel-consistent ground truths to benchmark image restoration and stereo matching.

  5. Your Pre-trained Diffusion Model Secretly Knows Restoration

    cs.CV 2026-04 unverdicted novelty 7.0

    Pre-trained diffusion models inherently support image restoration that can be unlocked by optimizing prompt embeddings at the text encoder output using a diffusion bridge formulation, achieving competitive results on ...

  6. Beyond Ground-Truth: Leveraging Image Quality Priors for Real-World Image Restoration

    cs.CV 2026-03 unverdicted novelty 7.0

    IQPIR uses NR-IQA-derived quality scores to condition a Transformer and dual-branch codebook for perceptually superior real-world image restoration.

  7. IG-Diff: Complex Night Scene Restoration with Illumination-Guided Diffusion Model

    cs.CV 2026-05 unverdicted novelty 6.0

    IG-Diff adds an illumination-guided module to a diffusion model and supplies new paired datasets to restore images degraded by simultaneous low light and other factors while preserving texture.

  8. PVRF: All-in-one Adverse Weather Removal via Prior-modulated and Velocity-constrained Rectified Flow

    cs.CV 2026-05 unverdicted novelty 6.0

    PVRF combines zero-shot VLM-based weather perception with perception-adaptive rectified flow refinement to achieve all-in-one adverse weather removal with improved fidelity and cross-dataset generalization.

  9. Leveraging Multimodal Large Language Models for All-in-One Image Restoration via a Mixture of Frequency Experts

    cs.CV 2026-05 unverdicted novelty 6.0

    An MLLM-guided framework with fusion blocks and mixture-of-frequency-experts achieves new state-of-the-art performance on the CDD11 all-in-one restoration benchmark.

  10. SIMI: Self-information Mining Network for Low-light Image Enhancement

    cs.CV 2026-05 unverdicted novelty 6.0

    SIMI is an unsupervised low-light image enhancement network using bit-plane decomposition to mine self-information, reported to reach state-of-the-art performance on standard benchmarks.

  11. Beyond Pixel Fidelity: Minimizing Perceptual Distortion and Color Bias in Night Photography Rendering

    cs.CV 2026-04 unverdicted novelty 6.0

    pHVI-ISPNet achieves state-of-the-art perceptual quality in night photography rendering by combining HVI color space with wavelet feature propagation, sample-adaptive losses, and distribution-based color constancy on ...

  12. Reading in the Dark: Low-light Scene Text Recognition

    cs.CV 2026-04 unverdicted novelty 6.0

    Introduces LSTR and ESTR low-light text datasets and shows joint LLIE-OCR training outperforms standalone models.

  13. Frequency-Decomposed INR for NIR-Assisted Low-Light RGB Image Denoising

    cs.CV 2026-04 unverdicted novelty 6.0

    FDINR decomposes RGB-NIR pairs into frequency components via wavelets and employs dual-branch INR with cross-modal supervision and adaptive uncertainty loss to restore low-light images while enabling arbitrary-resolut...

  14. RHVI-FDD: A Hierarchical Decoupling Framework for Low-Light Image Enhancement

    cs.CV 2026-04 unverdicted novelty 6.0

    RHVI-FDD hierarchically decouples luminance-chrominance and then frequency components in low-light images to correct color, suppress noise, and preserve details better than prior methods.

  15. E-VLA: Event-Augmented Vision-Language-Action Model for Dark and Blurred Scenes

    cs.CV 2026-04 conditional novelty 6.0

    E-VLA integrates event streams directly into VLA models via lightweight fusion, raising Pick-Place success from 0% to 60-90% at 20 lux and from 0% to 20-25% under severe motion blur.

  16. M2Retinexformer: Multi-Modal Retinexformer for Low-Light Image Enhancement

    cs.CV 2026-05 unverdicted novelty 5.0

    M2Retinexformer improves low-light images by progressively refining RGB data with depth, luminance, and semantic modalities through cross-attention and adaptive gating, showing gains on LOL, SID, SMID, and SDSD benchmarks.

  17. Unifying Deep Stochastic Processes for Image Enhancement

    cs.CV 2026-05 unverdicted novelty 5.0

    Stochastic image enhancement methods are shown to be variants of a shared SDE differing in drift, diffusion, terminal distributions and boundary conditions, with controlled experiments revealing no single dominant fam...

  18. SmartPhotoCrafter: Unified Reasoning, Generation and Optimization for Automatic Photographic Image Editing

    cs.CV 2026-04 unverdicted novelty 5.0

    SmartPhotoCrafter performs automatic photographic image editing by coupling an Image Critic module that identifies deficiencies with a Photographic Artist module that generates edits, trained via multi-stage pretraini...

  19. Naka-GS: A Bionics-inspired Dual-Branch Naka Correction and Progressive Point Pruning for Low-Light 3DGS

    cs.CV 2026-04 unverdicted novelty 5.0

    NAKA-GS combines bionics-inspired Naka chroma correction with progressive point pruning to boost restoration quality and efficiency in low-light 3D Gaussian Splatting.

  20. Deep Light Pollution Removal in Night Cityscape Photographs

    cs.CV 2026-04 unverdicted novelty 5.0

    A deep learning method with an enhanced physical degradation model incorporating anisotropic light spread and hidden skyglow, trained via generative models and synthetic-real coupling, removes light pollution from nig...

  21. FLARE-BO: Fused Luminance and Adaptive Retinex Enhancement via Bayesian Optimisation for Low-Light Robotic Vision

    cs.CV 2026-04 unverdicted novelty 4.0

    FLARE-BO uses Bayesian optimization over an eight-parameter space to fuse luminance and adaptive Retinex techniques, reporting marked improvements on the LOL low-light dataset compared to untrained baselines.

  22. ELoG-GS: Dual-Branch Gaussian Splatting with Luminance-Guided Enhancement for Extreme Low-light 3D Reconstruction

    cs.CV 2026-04 unverdicted novelty 4.0

    ELoG-GS integrates geometry-aware initialization and luminance-guided photometric adaptation into Gaussian Splatting, achieving PSNR 18.66 and SSIM 0.69 on the NTIRE 2026 Track 1 low-light 3D reconstruction benchmark.

  23. Attention Is not Everything: Efficient Alternatives for Vision

    cs.CV 2026-04 unverdicted novelty 3.0

    A survey that taxonomizes non-Transformer vision models and evaluates their practical trade-offs across efficiency, scalability, and robustness.

  24. ELoG-GS: Dual-Branch Gaussian Splatting with Luminance-Guided Enhancement for Extreme Low-light 3D Reconstruction

    cs.CV 2026-04 unverdicted novelty 3.0

    ELoG-GS combines learning-based initialization and luminance-guided enhancement inside Gaussian Splatting to raise PSNR to 18.66 and SSIM to 0.69 on the NTIRE 2026 low-light 3D challenge.

  25. Low Light Image Enhancement Challenge at NTIRE 2026

    cs.CV 2026-04 unverdicted novelty 2.0

    NTIRE 2026 challenge report shows progress in low-light image enhancement via 22 submitted networks evaluated on a new dataset.