Recognition: 3 theorem links
Rethinking Low-Light Image Enhancement: A Log-Domain Intensity-Chromaticity Decoupling Perspective
Pith reviewed 2026-05-08 18:38 UTC · model grok-4.3
The pith
Decoupling intensity from chromaticity in log space, with explicit reconstruction constraints derived from that split, improves low-light image enhancement and suppresses chromatic noise.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
By moving to a log-domain representation that isolates intensity from chromaticity and then enforcing reconstruction constraints derived directly from that separation, the method suppresses abnormal amplification in individual color channels and chromatic noise that commonly appear in low-light enhancement.
What carries the argument
Log-domain intensity-chromaticity decoupling together with explicit reconstruction constraints derived from the decoupled form.
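The chromaticity definition quoted later on this page, C_c(x) = log((I_c(x)+ε)/(I_max(x)+ε)), suggests what the split looks like in practice. The NumPy sketch below is a minimal illustration under that reading, not the authors' implementation; treating the per-pixel maximum channel as the intensity proxy, the value of ε, and the exact recomposition rule are all assumptions.

```python
import numpy as np

def log_domain_decouple(img, eps=1e-4):
    """Split an RGB image into a log-intensity map and per-channel
    log-chromaticity maps, following the ratio form quoted from the paper.

    img: float array in [0, 1], shape (H, W, 3).
    Returns (log_intensity, chroma), where chroma[..., c] <= 0 and the
    maximum channel at each pixel sits exactly at 0.
    """
    i_max = img.max(axis=-1, keepdims=True)        # per-pixel intensity proxy (assumed)
    log_intensity = np.log(i_max + eps)            # intensity component
    chroma = np.log((img + eps) / (i_max + eps))   # C_c(x) = log((I_c+eps)/(I_max+eps))
    return log_intensity, chroma

def recompose(log_intensity, chroma, eps=1e-4):
    """Invert the split: I_c = exp(log_intensity + C_c) - eps, clipped to [0, 1]."""
    return np.clip(np.exp(log_intensity + chroma) - eps, 0.0, 1.0)
```

By construction the chromaticity maps are non-positive and the maximum channel sits at zero, which is the property the reconstruction constraints are said to exploit.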
If this is right
- Quantitative scores such as PSNR and SSIM rise on LOLv2-Real, MIT-Adobe FiveK, and LSRW (a metric-computation sketch follows this list).
- Visual output shows fewer color shifts and less noise than methods that do not use the log-domain split.
- Downstream face detection on DarkFace improves because the enhanced images contain cleaner features.
- The same constraint logic can be inserted into other enhancement pipelines that currently suffer from channel imbalance.
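The headline numbers (29.71 dB PSNR and 0.89 SSIM on LOLv2-Real) can in principle be checked with standard full-reference metrics. The scikit-image sketch below is a minimal illustration, not the authors' evaluation script; the excerpt does not state the exact metric configuration (color space, data range, SSIM window), so those settings and the function name are assumptions.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(enhanced, reference):
    """enhanced, reference: float RGB arrays in [0, 1] with identical shape."""
    psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
    ssim = structural_similarity(reference, enhanced, data_range=1.0,
                                 channel_axis=-1)  # requires scikit-image >= 0.19
    return psnr, ssim
```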
Where Pith is reading between the lines
- The same log-domain split could be tested on video sequences to see whether temporal consistency improves.
- Medical or remote-sensing images taken under low light might benefit from the identical separation step.
- If the constraints prove stable, they could replace hand-tuned regularization terms in many existing networks.
Load-bearing premise
The separation of intensity and chromaticity in log space will consistently prevent channel amplification and color noise in real low-light photos without creating other visible problems.
What would settle it
A collection of low-light images on which the method produces stronger color fringing or new noise patterns than a standard enhancement baseline would show that the claim does not hold.
From the original abstract
Explicit reconstruction constraints derived from the decoupled representation are further imposed to suppress abnormal channel amplification and chromatic noise. Experiments on LOLv2-Real, MIT-Adobe FiveK, and LSRW show that the proposed method achieves competitive or superior quantitative and visual performance, reaching 29.71 dB PSNR and 0.89 SSIM on LOLv2-Real. DarkFace experiments further indicate improved downstream face detection under low-light conditions. Code and pretrained models are available at: https://github.com/mubaisam/ICD.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes rethinking low-light image enhancement via a log-domain intensity-chromaticity decoupling perspective. From this decoupling, explicit reconstruction constraints are derived and imposed during enhancement to suppress abnormal channel amplification and chromatic noise. The method is evaluated on LOLv2-Real, MIT-Adobe FiveK, LSRW, and DarkFace, reporting competitive or superior quantitative results (e.g., 29.71 dB PSNR and 0.89 SSIM on LOLv2-Real) and qualitative improvements, plus gains in downstream face detection; code and models are released publicly.
Significance. If the decoupling and derived constraints prove to be the causal driver of artifact suppression, the work could offer a more interpretable and robust alternative to purely data-driven low-light enhancement methods, with benefits for real-world applications and downstream tasks. The public code release is a clear strength supporting reproducibility.
major comments (2)
- §3 (Method): The central claim rests on the log-domain decoupling yielding reconstruction constraints that reliably suppress chromatic noise and channel amplification without new artifacts, but the manuscript provides no ablation that isolates this component (e.g., training the same backbone with vs. without the derived constraints) to establish causality over network capacity or loss design.
- §4 (Experiments): Results on LOLv2-Real, MIT-Adobe FiveK, and LSRW report strong metrics, yet the benchmarks may share similar noise/lighting distributions; without cross-dataset controls or error analysis showing the decoupling generalizes beyond training statistics, the suppression guarantee remains tied to the evaluated distributions.
minor comments (2)
- [Abstract] The abstract states 'competitive or superior' performance; explicitly listing the top-3 baselines and their scores in the abstract or a summary table would improve clarity.
- Figure captions for qualitative results could include the specific failure modes (e.g., chromatic noise) being addressed in each example.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback and positive assessment of the work's potential. We address each major comment below and will incorporate revisions to strengthen the manuscript.
Point-by-point responses
- Referee: §3 (Method): The central claim rests on the log-domain decoupling yielding reconstruction constraints that reliably suppress chromatic noise and channel amplification without new artifacts, but the manuscript provides no ablation that isolates this component (e.g., training the same backbone with vs. without the derived constraints) to establish causality over network capacity or loss design.
Authors: We agree that an explicit ablation isolating the derived reconstruction constraints is required to establish causality. In the revised manuscript we will add a controlled ablation using the identical backbone and loss design, comparing performance with and without the explicit constraints. This will quantify their specific contribution to suppressing abnormal channel amplification and chromatic noise, separate from network capacity effects (an illustrative constraint switch is sketched after these responses). revision: yes
- Referee: §4 (Experiments): Results on LOLv2-Real, MIT-Adobe FiveK, and LSRW report strong metrics, yet the benchmarks may share similar noise/lighting distributions; without cross-dataset controls or error analysis showing the decoupling generalizes beyond training statistics, the suppression guarantee remains tied to the evaluated distributions.
Authors: We acknowledge the value of stronger generalization evidence. Although the three benchmarks differ in capture conditions and noise profiles, we will add cross-dataset experiments (training on one and testing on the others) together with error analysis in the revision. These additions will better demonstrate that the intensity-chromaticity decoupling and constraints generalize beyond any single training distribution. revision: yes
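To make the requested ablation concrete, the fragment below shows one way to toggle a constraint penalty on and off while holding the backbone and the base reconstruction loss fixed. This is a hypothetical PyTorch sketch, not the paper's loss: the function names, the constraint weight, and the assumption that the network exposes a predicted log-chromaticity tensor are illustrative only.

```python
import torch
import torch.nn.functional as F

def decoupling_constraint(pred_chroma):
    """Penalty for violating the decoupled-form constraints on a network-
    predicted log-chromaticity tensor of shape (B, 3, H, W): every channel
    should satisfy C_c(x) <= 0, and the per-pixel maximum (anchor) channel
    should sit at exactly 0."""
    positive_violation = torch.relu(pred_chroma).mean()            # C_c(x) <= 0
    anchor_violation = pred_chroma.max(dim=1).values.abs().mean()  # max channel = 0
    return positive_violation + anchor_violation

def total_loss(pred_img, pred_chroma, target, use_constraints=True, weight=0.1):
    loss = F.l1_loss(pred_img, target)          # shared reconstruction term
    if use_constraints:                         # ablation switch: with vs. without
        loss = loss + weight * decoupling_constraint(pred_chroma)
    return loss
```

Training the same model once with use_constraints=True and once with use_constraints=False would isolate the contribution of the derived constraints from network capacity and the base loss.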
Circularity Check
No significant circularity; derivation is self-contained.
full rationale
The paper proposes a log-domain intensity-chromaticity decoupling as a modeling perspective, derives explicit reconstruction constraints from that representation, and applies them within an enhancement network. These steps constitute an original modeling choice rather than a redefinition or fit that forces the target outputs. Performance claims rest on external benchmark results (LOLv2-Real, MIT-Adobe FiveK, LSRW, DarkFace) that are not statistically entailed by the decoupling definition itself. No self-citation chains, fitted parameters renamed as predictions, or ansatzes smuggled via prior work appear in the provided derivation outline. The method therefore remains non-circular by construction.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: a log-domain representation enables effective decoupling of intensity and chromaticity for low-light enhancement.
Lean theorems connected to this paper
- IndisputableMonolith/Cost: Jcost / CostAlphaLog. Tag: echoes (this paper passage has the same mathematical shape or conceptual pattern as the Recognition theorem, but is not a direct formal dependency). Note: no parallel, since the paper's log is a one-sided ratio, not the symmetric J. Passage: C_c(x) = log((I_c(x)+ε)/(I_max(x)+ε)).
- IndisputableMonolith/Cost/FunctionalEquation: Jcost_unit0 / Jcost_pos_of_ne_one. Tag: unclear (relation between the paper passage and the cited Recognition theorem). Note: RS has J(1)=0 with two-sided positivity; the paper has only one-sided non-positivity. Passage: non-positive upper bound C_c(x) ≤ 0 and zero anchor C_{c*}(x) = 0 for the max channel (a numerical check of this property follows these entries).
- IndisputableMonolith/Foundation (whole forcing chain): reality_from_one_distinction. Tag: unclear (relation between the paper passage and the cited Recognition theorem). Note: RS has zero adjustable parameters, while this is a trained CNN with millions of weights, a different epistemic regime. Passage: 29.71 dB PSNR and 0.89 SSIM on LOLv2-Real ... 1.10M parameters.
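The non-positive upper bound and zero anchor noted in the second entry above are easy to verify numerically: for positive channel values, each ratio against the per-pixel maximum is at most one, so its log is at most zero, with equality on the maximum channel. A small NumPy check under assumed random inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, size=(4, 4, 3))   # random RGB patch (assumed data)
eps = 1e-4

i_max = img.max(axis=-1, keepdims=True)
chroma = np.log((img + eps) / (i_max + eps))  # C_c(x)

assert np.all(chroma <= 1e-12)                 # one-sided bound: C_c(x) <= 0
assert np.allclose(chroma.max(axis=-1), 0.0)   # zero anchor on the max channel
```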
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.