pith. machine review for the scientific record.

arxiv: 2605.02627 · v1 · submitted 2026-05-04 · 💻 cs.CV

Recognition: 3 Lean theorem links

Rethinking Low-Light Image Enhancement: A Log-Domain Intensity–Chromaticity Decoupling Perspective

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 18:38 UTC · model grok-4.3

classification 💻 cs.CV
keywords: low-light image enhancement · log-domain processing · intensity–chromaticity decoupling · image reconstruction constraints · noise suppression · low-light face detection

The pith

Decoupling intensity from chromaticity in log space with added reconstruction rules improves low-light image enhancement and reduces noise.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper argues that low-light enhancement can be reframed as separating brightness from color information after taking the logarithm of the image values. This separation lets the method adjust intensity independently while keeping color ratios stable. Explicit rules are then added to reconstruct the final image so that channel over-amplification and color noise are limited. Results on standard test sets show the approach matches or exceeds prior methods in brightness, detail, and color accuracy, and it also lifts accuracy in a following face-detection task. A reader would care because many real scenes are captured in poor light and current enhancement tools often trade one artifact for another.

Core claim

By moving to a log-domain representation that isolates intensity from chromaticity and then enforcing reconstruction constraints derived directly from that separation, the method suppresses abnormal amplification in individual color channels and chromatic noise that commonly appear in low-light enhancement.

What carries the argument

Log-domain intensity-chromaticity decoupling together with explicit reconstruction constraints derived from the decoupled form.
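
The review does not reproduce the paper's formulas, but the zero-anchor property quoted in the Figure 2 caption pins down one natural form of the split: intensity as the log of the per-pixel channel maximum, chromaticity as each channel's log ratio to that maximum. A minimal sketch under that assumption (the function names and the exact definition are illustrative, not the paper's):

```python
import numpy as np

EPS = 1e-6  # avoid log(0) on fully dark pixels

def decouple(img):
    """Split an RGB image (H, W, 3) with values in (0, 1] into a log-domain
    intensity map and a chromaticity map.

    Assumed form (not necessarily the paper's): intensity is the log of the
    per-pixel channel maximum; chromaticity is each channel's log ratio to
    that maximum. This reproduces the zero-anchor property quoted in the
    Figure 2 caption: the brightest channel at every pixel has chromaticity 0.
    """
    log_img = np.log(np.clip(img, EPS, None))
    intensity = log_img.max(axis=-1, keepdims=True)  # log I_max(x)
    chroma = log_img - intensity                     # C_c(x) <= 0, zero-anchored
    return intensity, chroma

def reconstruct(intensity, chroma):
    """Invert the decoupling: I_c(x) = exp(intensity + chroma_c)."""
    return np.exp(intensity + chroma)

# Round trip on a synthetic low-light image
img = np.random.rand(4, 4, 3) * 0.1
L, C = decouple(img)
assert np.allclose(reconstruct(L, C), np.clip(img, EPS, None))
assert np.allclose(C.max(axis=-1), 0.0)  # zero-anchor holds at every pixel
```

Under this form, brightening means shifting the shared intensity map while leaving chromaticity untouched, which is exactly why inter-channel ratios, and hence colors, stay stable.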

If this is right

  • Quantitative scores such as PSNR and SSIM rise on LOLv2-Real, MIT-Adobe FiveK, and LSRW (see the metric sketch after this list).
  • Visual output shows fewer color shifts and less noise than methods that do not use the log-domain split.
  • Downstream face detection on DarkFace improves because the enhanced images contain cleaner features.
  • The same constraint logic can be inserted into other enhancement pipelines that currently suffer from channel imbalance.
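
The first bullet's metrics follow the standard paired-benchmark protocol: score the enhanced output against the normal-exposure reference. A minimal sketch with scikit-image (channel_axis requires version 0.19 or later); the pairing itself is assumed from the datasets' structure:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score_pair(enhanced: np.ndarray, reference: np.ndarray):
    """PSNR and SSIM for one paired example: enhanced output vs. the
    normal-exposure ground truth, both float RGB arrays in [0, 1]."""
    psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
    ssim = structural_similarity(reference, enhanced, data_range=1.0,
                                 channel_axis=-1)
    return psnr, ssim
```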

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same log-domain split could be tested on video sequences to see whether temporal consistency improves.
  • Medical or remote-sensing images taken under low light might benefit from the identical separation step.
  • If the constraints prove stable, they could replace hand-tuned regularization terms in many existing networks.

Load-bearing premise

The separation of intensity and chromaticity in log space will consistently prevent channel amplification and color noise in real low-light photos without creating other visible problems.
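
The paper's actual constraints are not spelled out in this review. Purely as an illustration of the kind of rule the premise envisions, one could bound how far any channel's log-ratio may drift during enhancement, forcing brightness changes through the shared intensity map; constrained_enhance and the 0.1 bound below are assumptions, not the paper's method:

```python
import numpy as np

def constrained_enhance(intensity, chroma, d_intensity, d_chroma,
                        max_chroma_shift=0.1):
    """Illustrative reconstruction constraint (assumed, not the paper's):
    clip the per-channel chromaticity residual so no channel is amplified
    much faster than the others, then restore the zero-anchor by folding
    the per-pixel maximum back into the intensity map."""
    new_chroma = chroma + np.clip(d_chroma, -max_chroma_shift, max_chroma_shift)
    anchor = new_chroma.max(axis=-1, keepdims=True)
    new_intensity = intensity + d_intensity + anchor  # absorb the anchor shift
    new_chroma = new_chroma - anchor                  # brightest channel back to 0
    return np.exp(new_intensity + new_chroma)
```

The obvious stress case is the one named under "What would settle it": an input whose correct fix is a large chromaticity change (say, removing a strong color cast), where a bound like this would preserve the cast instead of suppressing noise.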

What would settle it

Finding a collection of low-light images where the method produces stronger color fringing or new noise patterns than a standard enhancement baseline would show the claim does not hold.

Figures

Figures reproduced from arXiv: 2605.02627 by Erbao Dong, Guangrui Bai, Wenhai Liu, Yahui Deng, Yifan Mei, Yuhan Chen, Yuze Qiu.

Figure 1: Visual comparison of low-light image enhancement results on representative scenes. Recent transformer- and diffusion-based methods introduce long-range modeling and generative priors into LLIE [31, 5, 45, 15]. LLFormer and Retinexformer use transformer architectures for high-resolution or illumination-guided enhancement, while Diff-Retinex and LightenDiffusion adopt diffusion-based restoration. These met… view at source ↗
Figure 2: Visualization of log-chromaticity components for low-light inputs and normal-exposure references. The first two columns are from the MIT dataset (primarily illumination differences), and the last two columns are from the LOLv2 dataset (extremely low-light with strong noise). Zero-anchor property: for each pixel x there exists at least one channel c* such that I_{c*}(x) = I_max(x) (Eq. 11), which yields C_{c*}(x) … view at source ↗
Figure 3: Framework of the proposed ICDNet. The input low-light image is transformed into a log-domain intensity–chromaticity decoupled representation, where the intensity component captures the absolute intensity scale and the chromaticity component encodes inter-channel relative ratios. The two components are processed by a dual-branch interaction backbone to predict intensity and chromaticity residuals. The enhan… view at source ↗
Figure 4: Visual comparison on extremely low-light images under severe noise conditions. Implementation details (§5.1): All experiments are conducted on a workstation with an AMD EPYC 7Y43 48-core CPU, 256 GB RAM, and four NVIDIA GeForce RTX 4090 GPUs. The software environment is Ubuntu 22.04.5 LTS with PyTorch. Multi-GPU training is implemented using DataParallel. Input images are resized to 128 × 128 during training… view at source ↗
Figure 5: Visual comparisons and PSNR/SSIM results on representative extremely low-light scenes from the LOLv2 dataset. view at source ↗
Figure 6: Visual comparisons and PSNR/SSIM results on representative extremely low-light scenes from LOLv2. Under more challenging conditions with severe underexposure and higher noise levels, performance differences become more pronounced. Some methods remain under-enhanced, resulting in low visibility, while others amplify weak signals and introduce noise patterns or local artifacts. Transformer-based methods imp… view at source ↗
Figure 7: Visual comparisons and per-image PSNR/SSIM results on a high dynamic range scene. Per-panel PSNR / SSIM: Input 5.3130 / 0.0863; CoLIE 9.4905 / 0.3354; EnlightenGAN 11.3766 / 0.5625; KIND++ 16.8081 / 0.7921; RUAS 6.7977 / 0.2338; SCI 8.0405 / 0.3424; SGZ 9.4187 / 0.4272; URetinex-Net 15.8685 / 0.8359; ZeroDCE 8.9652 / 0.4065; PairLIE 12.4111 / 0.7115; NoiSER 18.5992 / 0.6779; CLIP-LIT 7.9865 / 0.3497; LightenDiffusion 17.4306 / 0.8561; NeRCo … view at source ↗
Figure 8: Visual comparisons and per-image PSNR/SSIM results on a low-texture scene from LOLv2. Per-panel PSNR / SSIM: Input 7.71 / 0.089; CoLIE 13.61 / 0.457; EnlightenGAN 15.51 / 0.576; KIND++ 17.24 / 0.704; LightenDiffusion 17.80 / 0.671; NeRCo 19.31 / 0.666; NoiSER 13.86 / 0.499; PairLIE 14.00 / 0.546; RUAS 8.87 / 0.184; SCI 10.04 / 0.289; SCLLLE 9.96 / 0.295; SCLM 14.69 / 0.569; SGZ 11.46 / 0.412; URetinex-Net 16.92 / 0.710; ZeroDCE 11.12 / 0.383 … view at source ↗
Figure 9: Visual comparisons and per-image PSNR/SSIM results on an extremely low-light color image. view at source ↗
Figure 10: Visual comparison of different ablation settings on LOLv2. view at source ↗
Figure 11: Visual comparison of different decoupled mapping strategies. view at source ↗
Figure 12: Visual comparison of face detection results on a backlit image from DarkFace. view at source ↗
Figure 13: PR curves under different IoU thresholds (0.3, 0.5, 0.7) on the DarkFace dataset. The proposed method shows clear advantages at IoU = 0.3 and IoU = 0.5 while maintaining reasonable performance at IoU = 0.7, indicating that ICD is beneficial for low-light face detection, whereas high-precision localization under severe low-light degradation remains difficul… view at source ↗
read the original abstract

Explicit reconstruction constraints derived from the decoupled representation are further imposed to suppress abnormal channel amplification and chromatic noise. Experiments on LOLv2-Real, MIT-Adobe FiveK, and LSRW show that the proposed method achieves competitive or superior quantitative and visual performance, reaching 29.71 dB PSNR and 0.89 SSIM on LOLv2-Real. DarkFace experiments further indicate improved downstream face detection under low-light conditions. Code and pretrained models are available at: https://github.com/mubaisam/ICD.
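
As a sanity check on the headline number: assuming the standard PSNR definition with images scaled to [0, 1] (the usual convention for these benchmarks), the reported 29.71 dB corresponds to

```latex
\mathrm{PSNR} = 10\log_{10}\frac{1}{\mathrm{MSE}}
\;\Rightarrow\;
\mathrm{MSE} = 10^{-29.71/10} \approx 1.07\times10^{-3},
\qquad \mathrm{RMSE} \approx 0.033
```

i.e., about a 3% average per-pixel deviation from the normal-exposure reference.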

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes rethinking low-light image enhancement via a log-domain intensity-chromaticity decoupling perspective. From this decoupling, explicit reconstruction constraints are derived and imposed during enhancement to suppress abnormal channel amplification and chromatic noise. The method is evaluated on LOLv2-Real, MIT-Adobe FiveK, LSRW, and DarkFace, reporting competitive or superior quantitative results (e.g., 29.71 dB PSNR and 0.89 SSIM on LOLv2-Real) and qualitative improvements, plus gains in downstream face detection; code and models are released publicly.

Significance. If the decoupling and derived constraints prove to be the causal driver of artifact suppression, the work could offer a more interpretable and robust alternative to purely data-driven low-light enhancement methods, with benefits for real-world applications and downstream tasks. The public code release is a clear strength supporting reproducibility.

major comments (2)
  1. [§3] §3 (Method): The central claim rests on the log-domain decoupling yielding reconstruction constraints that reliably suppress chromatic noise and channel amplification without new artifacts, but the manuscript provides no ablation that isolates this component (e.g., training the same backbone with vs. without the derived constraints) to establish causality over network capacity or loss design.
  2. [§4] §4 (Experiments): Results on LOLv2-Real, MIT-Adobe FiveK, and LSRW report strong metrics, yet the benchmarks may share similar noise/lighting distributions; without cross-dataset controls or error analysis showing the decoupling generalizes beyond training statistics, the suppression guarantee remains tied to the evaluated distributions.
minor comments (2)
  1. [Abstract] The abstract states 'competitive or superior' performance; explicitly listing the top-3 baselines and their scores in the abstract or a summary table would improve clarity.
  2. Figure captions for qualitative results could include the specific failure modes (e.g., chromatic noise) being addressed in each example.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback and positive assessment of the work's potential. We address each major comment below and will incorporate revisions to strengthen the manuscript.

read point-by-point responses
  1. Referee: [§3] §3 (Method): The central claim rests on the log-domain decoupling yielding reconstruction constraints that reliably suppress chromatic noise and channel amplification without new artifacts, but the manuscript provides no ablation that isolates this component (e.g., training the same backbone with vs. without the derived constraints) to establish causality over network capacity or loss design.

    Authors: We agree that an explicit ablation isolating the derived reconstruction constraints is required to establish causality. In the revised manuscript we will add a controlled ablation using the identical backbone and loss design, comparing performance with and without the explicit constraints. This will quantify their specific contribution to suppressing abnormal channel amplification and chromatic noise, separate from network capacity effects. revision: yes

  2. Referee: [§4] §4 (Experiments): Results on LOLv2-Real, MIT-Adobe FiveK, and LSRW report strong metrics, yet the benchmarks may share similar noise/lighting distributions; without cross-dataset controls or error analysis showing the decoupling generalizes beyond training statistics, the suppression guarantee remains tied to the evaluated distributions.

    Authors: We acknowledge the value of stronger generalization evidence. Although the three benchmarks differ in capture conditions and noise profiles, we will add cross-dataset experiments (training on one and testing on the others) together with error analysis in the revision. These additions will better demonstrate that the intensity-chromaticity decoupling and constraints generalize beyond any single training distribution. revision: yes
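
Both promised revisions are mechanical to specify. A minimal sketch of the two protocols; constraint_penalty, the 0.1 weight, and the fit/evaluate placeholders are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

def constraint_penalty(pred, inp, eps=1e-6):
    """Illustrative stand-in for the derived constraints: penalize per-channel
    log-gains that deviate from the per-pixel mean gain, a proxy for
    'abnormal channel amplification' (NCHW tensors with values in (0, 1])."""
    gain = torch.log(pred.clamp_min(eps)) - torch.log(inp.clamp_min(eps))
    return (gain - gain.mean(dim=1, keepdim=True)).abs().mean()

def train_step(model, inp, target, opt, use_constraints):
    """One ablation arm: identical backbone and base loss; the constraint
    term is the only toggled difference between the two arms."""
    pred = model(inp)
    loss = nn.functional.l1_loss(pred, target)
    if use_constraints:
        loss = loss + 0.1 * constraint_penalty(pred, inp)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def cross_dataset_grid(fit, evaluate,
                       names=("LOLv2-Real", "MIT-Adobe FiveK", "LSRW")):
    """Cross-dataset protocol: train on one benchmark, test on the others.
    `fit` and `evaluate` stand in for full training/evaluation loops."""
    results = {}
    for train_name in names:
        model = fit(train_name)
        for test_name in names:
            if test_name != train_name:
                results[(train_name, test_name)] = evaluate(model, test_name)
    return results
```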

Circularity Check

0 steps flagged

No significant circularity; derivation is self-contained.

full rationale

The paper proposes a log-domain intensity-chromaticity decoupling as a modeling perspective, derives explicit reconstruction constraints from that representation, and applies them within an enhancement network. These steps constitute an original modeling choice rather than a redefinition or fit that forces the target outputs. Performance claims rest on external benchmark results (LOLv2-Real, MIT-Adobe FiveK, LSRW, DarkFace) that are not statistically entailed by the decoupling definition itself. No self-citation chains, fitted parameters renamed as predictions, or ansatzes smuggled via prior work appear in the provided derivation outline. The method therefore remains non-circular by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the assumption that log-domain decoupling provides a useful separation for imposing effective reconstruction constraints; beyond the single domain assumption recorded below, no explicit free parameters or invented entities are detailed in the provided abstract.

axioms (1)
  • domain assumption: Log-domain representation enables effective decoupling of intensity and chromaticity for low-light enhancement
    Invoked as the foundational perspective of the method.

pith-pipeline@v0.9.0 · 5403 in / 1224 out tokens · 54035 ms · 2026-05-08T18:38:52.566977+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches: The paper's claim is directly supported by a theorem in the formal canon.
supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses: The paper appears to rely on the theorem as machinery.
contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

57 extracted references · 2 canonical work pages

  [1] Barrow, H., Tenenbaum, J., Hanson, A., Riseman, E., 1978. Recovering intrinsic scene characteristics. Comput. Vis. Syst. 2, 2.

  [2] Bychkovsky, V., Paris, S., Chan, E., Durand, F., 2011a. Learning photographic global tonal adjustment with a database of input/output image pairs, in: The Twenty-Fourth IEEE Conference on Computer Vision and Pattern Recognition.

  [3] Bychkovsky, V., Paris, S., Chan, E., Durand, F., 2011b. Learning photographic global tonal adjustment with a database of input/output image pairs, in: CVPR 2011, IEEE. pp. 97–104.

  [4] Cadena, C., Carlone, L., Carrillo, H., Latif, Y., Scaramuzza, D., Neira, J., Reid, I., Leonard, J.J., 2017. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Transactions on Robotics 32, 1309–1332.

  [5] Cai, Y., Bian, H., Lin, J., Wang, H., Timofte, R., Zhang, Y., 2023. Retinexformer: One-stage Retinex-based transformer for low-light image enhancement, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 12504–12513.

  [6] Chen, C., Chen, Q., Xu, J., Koltun, V., 2018. Learning to see in the dark, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3291–3300.

  [7] Chobola, T., Liu, Y., Zhang, H., Schnabel, J.A., Peng, T., 2024. Fast context-based low-light image enhancement via neural implicit representations, in: European Conference on Computer Vision, Springer. pp. 413–430.

  [8] Cui, Z., Li, K., Gu, L., Su, S., Gao, P., Jiang, Z., Qiao, Y., Harada, T., 2022. You only need 90k parameters to adapt light: a light weight transformer for image enhancement and exposure correction, in: 33rd British Machine Vision Conference 2022, BMVC 2022, London, UK, November 21–24, 2022, BMVA Press. URL: https://bmvc2022.mpi-inf.mpg.de/0238.pdf.

  [9] Foi, A., Trimeche, M., Katkovnik, V., Egiazarian, K., 2008. Practical Poissonian-Gaussian noise modeling and fitting for single-image raw-data. IEEE Transactions on Image Processing 17, 1737–1754.

  [10] Fu, Z., Yang, Y., Tu, X., Huang, Y., Ding, X., Ma, K.K., 2023. Learning a simple low-light image enhancer from paired low-light instances, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 22252–22261.

  [11]–[12] Guo, C., Li, C., Guo, J., Loy, C.C., Hou, J., Kwong, S., Cong, R., 2020. Zero-reference deep curve estimation for low-light image enhancement, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1780–1789.

  [13] Guo, X., Li, Y., Ling, H., 2016. LIME: Low-light image enhancement via illumination map estimation. IEEE Transactions on Image Processing 26, 982–993.

  [14] Hai, J., Xuan, Z., Yang, R., Hao, Y., Zou, F., Lin, F., Han, S., 2023. R2RNet: Low-light image enhancement via real-low to real-normal network. Journal of Visual Communication and Image Representation 90, 103712.

  [15] Hou, J., Zhu, Z., Hou, J., Liu, H., Zeng, H., Yuan, H., 2023. Global structure-aware diffusion process for low-light image enhancement. Advances in Neural Information Processing Systems 36, 79734–79747.

  [16] Jiang, H., Luo, A., Liu, X., Han, S., Liu, S., 2024. LightenDiffusion: Unsupervised low-light image enhancement with latent-Retinex diffusion models, in: European Conference on Computer Vision.

  [17] Jiang, Y., Gong, X., Liu, D., Cheng, Y., Fang, C., Shen, X., Yang, J., Zhou, P., Wang, Z., 2021. EnlightenGAN: Deep light enhancement without paired supervision. IEEE Transactions on Image Processing 30, 2340–2349.

  [18] Jobson, D.J., ur Rahman, Z., Woodell, G.A., 1997. A multi-scale retinex for bridging the gap between color images and the human observation of scenes. IEEE Transactions on Image Processing 6, 965–976.

  [19] Land, E.H., McCann, J.J., 1971. Lightness and retinex theory. Journal of the Optical Society of America 61, 1–11. doi:10.1364/JOSA.61.000001.

  [20] Li, C., Guo, C., Loy, C.C., 2021. Learning to enhance low-light image via zero-reference deep curve estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence 44, 4225–4238.

  [21] Liang, D., Li, L., Wei, M., Yang, S., Zhang, L., Yang, W., Du, Y., Zhou, H., 2022. Semantically contrastive learning for low-light image enhancement, in: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 1555–1563.

  [22] Liang, Z., Li, C., Zhou, S., Feng, R., Loy, C.C., 2023. Iterative prompt learning for unsupervised backlit image enhancement, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 8094–8103.

  [23] Liu, R., Ma, L., Zhang, J., Fan, X., Luo, Z., 2021. Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10561–10570.

  [24] Lv, Y., Zhang, R., Hei, X., Song, X., Zhang, Z., Tu, H., Tan, Y., Xie, J., Zhang, Z., Zheng, X., et al., 2026. BiP-CENet: A bilateral prior-collaborative enhancement network with dual-domain priors for low-light image enhancement. Knowledge-Based Systems, 115967.

  [25] Ma, L., Ma, T., Liu, R., Fan, X., Luo, Z., 2022. Toward fast, flexible, and robust low-light image enhancement, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5637–5646.

  [26] Qi, Y., Li, Q., Huang, Z., Feng, S., Wan, T., Zhang, Q., 2025. Dual-domain low-light image enhancement with hierarchical illumination guidance. Knowledge-Based Systems, 114835.

  [27] Sedeeq, O., Anjuman, S.A., Sulaiman, S., Bartani, A., 2025. Low-light image enhancement via self-degradation-aware and semantic-perceptual guidance networks. Knowledge-Based Systems, 114571.

  [28] Shahria, M.T., Sunny, M.S.H., Zarif, M.I.I., Ghommam, J., Ahamed, S.I., Rahman, M.H., 2022. A comprehensive review of vision-based robotic applications: Current state, components, approaches, barriers, and potential solutions. Robotics 11, 139.

  [29] Shi, Y., Liu, D., Zhang, L., Tian, Y., Xia, X., Fu, X., 2024. Zero-IG: Zero-shot illumination-guided joint denoising and adaptive enhancement for low-light images, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3015–3024.

  [30] Ultralytics, 2023a. Ultralytics YOLO [EB/OL]. URL: https://github.com/ultralytics/ultralytics. GitHub repository.

  [31] Ultralytics, 2023b. YOLOv8 [EB/OL]. URL: https://docs.ultralytics.com/models/yolov8/.

  [32] Wang, T., Zhang, K., Shen, T., Luo, W., Stenger, B., Lu, T., 2023. Ultra-high-definition low-light image enhancement: A benchmark and transformer-based method, in: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 2654–2662.

  [33] Wang, Y., Chen, Q., Zhang, B., 1999. Image enhancement based on equal area dualistic sub-image histogram equalization method. IEEE Transactions on Consumer Electronics 45, 68–75.

  [34] Wei, C., Wang, W., Yang, W., Liu, J., 2018. Deep retinex decomposition for low-light enhancement, in: British Machine Vision Conference (BMVC).

  [35] Wei, Z., Wang, Y., Debattista, K., Donzella, V., 2026. Rethinking probabilistic learning for counterfactual low-light image enhancement in robust engineering vision systems. Knowledge-Based Systems, 115666.

  [36] Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., Jiang, J., 2022. URetinex-Net: Retinex-based deep unfolding network for low-light image enhancement, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5901–5910.

  [37] Xia, Y., Xu, F., Zheng, Q., 2023. Zero-shot adaptive low light enhancement with retinex decomposition and hybrid curve estimation, in: 2023 International Joint Conference on Neural Networks (IJCNN), IEEE. pp. 1–8.

  [38] Xu, K., Yang, X., Yin, B., Lau, R.W., 2020. Learning to restore low-light images via decomposition-and-enhancement, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2281–2290.

  [39] Xu, X., Wang, R., Fu, C.W., Jia, J., 2022. SNR-aware low-light image enhancement, in: CVPR.

  [40] Yan, Q., Feng, Y., Zhang, C., Pang, G., Shi, K., Wu, P., Dong, W., Sun, J., Zhang, Y., 2025a. HVI: A new color space for low-light image enhancement, in: Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 5678–5687.

  [41] Yan, Q., Feng, Y., Zhang, C., Pang, G., Shi, K., Wu, P., Dong, W., Sun, J.Q., Zhang, Y., 2025b. HVI: A new color space for low-light image enhancement, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

  [42]–[43] Yang, G.Z., Bellingham, J., Dupont, P.E., Fischer, P., Floridi, L., Full, R., Jacobstein, N., Kumar, V., McNutt, M., Merrifield, R., et al., 2018. The grand challenges of Science Robotics. Science Robotics 3, eaar7650.

  [44] Yang, S., Ding, M., Wu, Y., Li, Z., Zhang, J., 2023. Implicit neural representation for cooperative low-light image enhancement, in: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 12918–12927.

  [45]–[46] Yang, W., Wang, W., Huang, H., Wang, S., Liu, J., 2021. Sparse gradient regularized deep retinex network for robust low-light image enhancement. IEEE Transactions on Image Processing 30, 2072–. doi:10.1109/TIP.2021.3050850.

  [47] Yang, W., Yuan, Y., Ren, W., Liu, J., Scheirer, W.J., Wang, Z., Zhang, T., Zhong, Q., Xie, D., Pu, S., et al., 2020. Advancing image understanding in poor visibility environments: A collective benchmark study. IEEE Transactions on Image Processing 29, 5737–5752.

  [48] Yi, X., Xu, H., Zhang, H., Tang, L., Ma, J., 2023. Diff-Retinex: Rethinking low-light image enhancement with a generative diffusion model, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 12302–12311.

  [49] Zhang, Y., Guo, X., Ma, J., Liu, W., Zhang, J., 2021. Beyond brightening low-light images. International Journal of Computer Vision 129, 1013–1037.

  [50]–[51] Zhang, Y., Teng, B., Yang, D., Chen, Z., Ma, H., Li, G., Ding, W., 2024. Learning a single convolutional layer model for low light image enhancement. IEEE Transactions on Circuits and Systems for Video Technology 34, 5995–6008.

  [52] Zhang, Y., Zhang, J., Guo, X., 2019. Kindling the darkness: A practical low-light image enhancer, in: Proceedings of the 27th ACM International Conference on Multimedia, pp. 1632–1640.

  [53]–[54] Zhang, Z., Zhao, S., Jin, X., Xu, M., Yang, Y., Yan, S., Wang, M., 2025. Noise self-regression: A new learning paradigm to enhance low-light images without task-related data. IEEE Transactions on Pattern Analysis and Machine Intelligence 47, 1073–1088.

  [55] Zheng, S., Gupta, G., 2022. Semantic-guided zero-shot learning for low-light image/video enhancement, in: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 581–590.

  [56] Zhou, S., Li, C., Change Loy, C., 2022. LEDNet: Joint low-light enhancement and deblurring in the dark, in: European Conference on Computer Vision, Springer. pp. 573–589.

  [57] Zuiderveld, K.J., 1994. Contrast limited adaptive histogram equalization, in: Heckbert, P.S. (Ed.), Graphics Gems IV. Academic Press, pp. 474–485.