IA-CLAHE: Image-Adaptive Clip Limit Estimation for CLAHE
Pith reviewed 2026-05-10 09:25 UTC · model grok-4.3
The pith
A lightweight network learns to set per-tile clip limits for CLAHE by targeting uniform local histograms.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
IA-CLAHE trains a lightweight clip-limit estimator through a differentiable extension of CLAHE, so that end-to-end optimization drives every local histogram toward a uniform distribution. The estimator reads the input image and outputs a tile-wise clip-limit map that replaces the conventional fixed parameter. Because the training objective does not depend on any specific image domain or task, the method generalizes in zero-shot fashion and requires neither ground-truth clip values nor task-specific training sets. The claimed result is simultaneous gains in downstream recognition performance and in perceptual image quality.
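The mechanism is easiest to see in code. Below is a minimal numpy sketch of a CLAHE-style pass in which the usual scalar clip limit is replaced by a per-tile map, the quantity IA-CLAHE's estimator would predict. The function names, the simple uniform redistribution, and the omission of bilinear blending between tiles are assumptions made for brevity, not the authors' implementation.

```python
import numpy as np

def clip_and_redistribute(hist, clip_limit):
    """Clip a tile histogram at `clip_limit` and spread the excess
    uniformly over all bins (the classic CLAHE redistribution step)."""
    excess = np.maximum(hist - clip_limit, 0.0)
    clipped = np.minimum(hist, clip_limit)
    return clipped + excess.sum() / hist.size

def clahe_with_clip_map(img, clip_map, tiles=(4, 4), bins=256):
    """CLAHE-style equalization where each tile uses its own clip limit.

    img      : 2-D uint8 array
    clip_map : (tiles[0], tiles[1]) array of per-tile clip limits,
               expressed as a multiple of the mean bin count.
    Bilinear blending between tile mappings is omitted for brevity,
    so tile seams remain visible; real CLAHE interpolates the LUTs.
    """
    out = np.empty_like(img)
    th, tw = img.shape[0] // tiles[0], img.shape[1] // tiles[1]
    for i in range(tiles[0]):
        for j in range(tiles[1]):
            tile = img[i*th:(i+1)*th, j*tw:(j+1)*tw]
            hist = np.bincount(tile.ravel(), minlength=bins).astype(float)
            limit = clip_map[i, j] * hist.mean()
            hist = clip_and_redistribute(hist, limit)
            cdf = np.cumsum(hist) / hist.sum()
            lut = np.round(cdf * (bins - 1)).astype(np.uint8)
            out[i*th:(i+1)*th, j*tw:(j+1)*tw] = lut[tile]
    return out
```

A fixed-clip baseline corresponds to a constant `clip_map`; IA-CLAHE would replace that constant with the estimator's per-tile output.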
What carries the argument
A lightweight clip-limit estimator, trained end-to-end through differentiable CLAHE, that drives local histograms toward uniformity.
If this is right
- Recognition accuracy rises on standard vision tasks without any retraining of the downstream model.
- Images appear less over-enhanced and more natural to human observers under the same processing pipeline.
- The method applies directly to new image domains or tasks because no task-specific data or ground-truth clip limits are needed.
- Over-enhancement artifacts that arise from a single global clip limit are reduced tile by tile.
- The same trained estimator can be dropped into existing CLAHE-based industrial pipelines with no extra supervision.
Where Pith is reading between the lines
- The uniform-distribution target could be replaced by other learned or task-aware targets if downstream performance plateaus.
- The estimator might be inserted as a lightweight preprocessing layer inside larger end-to-end vision networks.
- Similar adaptive logic could be tested on video sequences where clip limits change smoothly across frames.
- Manual tuning of CLAHE parameters in practice might become unnecessary once the estimator is fixed.
Load-bearing premise
Pushing every local histogram toward a uniform distribution through learned clip limits produces values that are simultaneously optimal for machine recognition and human perception.
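One way to make this premise concrete is to score how far each tile's histogram sits from the uniform target. The KL-divergence form below is an illustrative stand-in; the review does not specify the paper's exact loss.

```python
import numpy as np

def uniformity_loss(hist, eps=1e-12):
    """KL divergence from a tile's normalized histogram to the uniform
    distribution. Zero only when the histogram is perfectly flat; a
    stand-in for whatever uniformity objective IA-CLAHE optimizes."""
    p = hist / (hist.sum() + eps)
    u = 1.0 / hist.size
    return float(np.sum(p * np.log((p + eps) / u)))

flat = np.full(256, 4.0)        # perfectly equalized tile
peaked = np.zeros(256)
peaked[100:110] = 102.4         # narrow, low-contrast tile
```

A trained estimator should pick clip limits that lower this score for every tile; a fixed-clip baseline cannot, because one limit must serve tiles with very different histograms.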
What would settle it
Apply IA-CLAHE and fixed-clip CLAHE to the same low-contrast benchmarks; the claim fails if recognition accuracy and perceptual-quality metrics for the adaptive version are statistically indistinguishable from, or worse than, the fixed baseline.
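The skeleton of that settling experiment is simple: score both pipelines on the same images and examine the paired differences. The sketch below uses histogram entropy as a crude stand-in metric and global histogram equalization as a placeholder pipeline; a real run would plug in IA-CLAHE, fixed-clip CLAHE, recognition accuracy, and perceptual metrics such as BRISQUE or NIQE.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the intensity histogram -- a crude,
    reference-free stand-in for a perceptual quality metric."""
    p = np.bincount(img.ravel(), minlength=bins) / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def global_he(img, bins=256):
    """Global histogram equalization: placeholder for one pipeline."""
    cdf = np.cumsum(np.bincount(img.ravel(), minlength=bins)) / img.size
    return np.round(cdf * (bins - 1)).astype(np.uint8)[img]

def paired_comparison(images, enhance_a, enhance_b, metric=entropy):
    """Mean paired metric difference (a - b) and its standard error.
    The adaptive method 'wins' only if the mean difference is positive
    by clearly more than a few standard errors."""
    d = np.array([metric(enhance_a(im)) - metric(enhance_b(im))
                  for im in images])
    return d.mean(), d.std(ddof=1) / np.sqrt(len(d))
```

Pairing on the same images is what makes the test sharp: between-image variance cancels, so only the enhancement effect remains.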
Original abstract
This paper proposes image-adaptive contrast limited adaptive histogram equalization (IA-CLAHE). Conventional CLAHE is widely used to boost the performance of various computer vision tasks and to improve visual quality for human perception in practical industrial applications. CLAHE applies contrast limited histogram equalization to each local region to enhance local contrast. However, CLAHE often leads to over-enhancement, because the contrast-limiting parameter clip limit is fixed regardless of the histogram distribution of each local region. Our IA-CLAHE addresses this limitation by adaptively estimating tile-wise clip limits from the input image. To achieve this, we train a lightweight clip limits estimator with a differentiable extension of CLAHE, enabling end-to-end optimization. Unlike prior learning-based CLAHE methods, IA-CLAHE does not require pre-searched ground-truth clip limits or task-specific datasets, because it learns to map input image histograms toward a domain-invariant uniform distribution, enabling zero-shot generalization across diverse conditions. Experimental results show that IA-CLAHE consistently improves recognition performance, while simultaneously enhancing visual quality for human perception, without requiring any task-specific training data.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes IA-CLAHE, an image-adaptive variant of CLAHE in which a lightweight neural network estimates per-tile clip limits. The network is trained end-to-end with a differentiable CLAHE operator whose loss drives each local histogram toward a uniform target distribution. This construction is claimed to eliminate the need for task-specific training data or pre-searched ground-truth clip values, enabling zero-shot generalization while simultaneously improving both downstream recognition accuracy and human-perceived visual quality.
Significance. If the empirical performance claims are substantiated, the approach would supply a general-purpose, unsupervised contrast-enhancement module that does not require retraining or labeled data for each new vision task. The differentiable CLAHE extension that permits gradient-based optimization of the clip-limit estimator is a concrete technical contribution that could be reused in other histogram-based pipelines.
major comments (3)
- [Abstract] The statement that 'Experimental results show that IA-CLAHE consistently improves recognition performance' is unsupported by any quantitative metrics, comparison tables, baselines, or error bars anywhere in the manuscript. This assertion is load-bearing for the central claim that the uniformity-driven estimator benefits machine recognition.
- [Method] Clip-limit estimator training: the sole training objective maps tile histograms to a uniform distribution; no recognition loss, feature-separability term, or indirect supervision from labeled data is present. Consequently the abstract's claim of recognition gains rests on an unverified correlation rather than a designed property of the method.
- [Experiments] No ablation studies, cross-dataset zero-shot evaluations, or comparisons against fixed-clip CLAHE and prior learning-based CLAHE variants are reported. Without these controls the dual benefit for recognition and human perception cannot be assessed.
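The second comment turns on what the "differentiable extension" actually differentiates. One plausible construction, an assumption here rather than the paper's documented method, replaces the hard min(hist, limit) clip with a softplus-based smooth minimum, so a uniformity loss yields gradients with respect to the clip limit:

```python
import numpy as np

def softplus(x):
    """Numerically stable log(1 + exp(x))."""
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def soft_clip(hist, limit, tau=1.0):
    """Smooth approximation of the CLAHE clip step min(hist, limit),
    followed by uniform redistribution of the excess. Differentiable
    in `limit`, so a loss can backpropagate into a clip-limit
    estimator; tau controls the sharpness of the approximation."""
    clipped = limit - tau * softplus((limit - hist) / tau)
    excess = hist - clipped
    return clipped + excess.sum() / hist.size

def soft_uniformity_loss(hist, limit):
    """Squared deviation of the soft-clipped histogram from uniform."""
    h = soft_clip(hist, limit)
    p = h / h.sum()
    return float(np.sum((p - 1.0 / h.size) ** 2))

# The loss is smooth in the clip limit, so finite differences give a
# stable gradient estimate -- the quantity an estimator trains against.
hist = np.array([40.0, 2.0, 1.0, 1.0, 4.0, 80.0, 3.0, 1.0])
eps = 1e-4
grad = (soft_uniformity_loss(hist, 10.0 + eps)
        - soft_uniformity_loss(hist, 10.0 - eps)) / (2 * eps)
```

The hard clip's derivative with respect to the limit is piecewise constant and uninformative near the transition; the smooth version varies continuously everywhere, which is what makes end-to-end training of the estimator tractable.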
minor comments (2)
- The description of the lightweight estimator architecture would benefit from an explicit diagram or layer-by-layer specification to clarify input (histogram) and output (clip-limit map) dimensions.
- Notation for the differentiable CLAHE operator (e.g., the exact form of the clipping and redistribution steps) should be formalized with equations to facilitate reproducibility.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our submission. We address each of the major comments point by point below, indicating the revisions we plan to make to strengthen the manuscript.
Point-by-point responses
Referee: [Abstract] The statement that 'Experimental results show that IA-CLAHE consistently improves recognition performance' is unsupported by any quantitative metrics, comparison tables, baselines, or error bars anywhere in the manuscript. This assertion is load-bearing for the central claim that the uniformity-driven estimator benefits machine recognition.
Authors: We agree that the current manuscript does not include the necessary quantitative support for the recognition performance claim in the abstract. In the revised version, we will add detailed experimental results, including quantitative metrics, comparison tables with baselines, and error bars from repeated trials, to substantiate the improvements in recognition accuracy. revision: yes
Referee: [Method] Clip-limit estimator training: the sole training objective maps tile histograms to a uniform distribution; no recognition loss, feature-separability term, or indirect supervision from labeled data is present. Consequently the abstract's claim of recognition gains rests on an unverified correlation rather than a designed property of the method.
Authors: The training procedure indeed optimizes solely for histogram uniformity using the differentiable CLAHE extension, without incorporating any recognition-specific loss or labeled data supervision. This choice is deliberate to facilitate zero-shot application across different tasks. The expected recognition benefits arise from the improved local contrast without over-enhancement. We will revise the manuscript to better distinguish between the training objective and the empirical outcomes, and support the claims with the added experimental evidence. revision: partial
Referee: [Experiments] No ablation studies, cross-dataset zero-shot evaluations, or comparisons against fixed-clip CLAHE and prior learning-based CLAHE variants are reported. Without these controls the dual benefit for recognition and human perception cannot be assessed.
Authors: We concur that the experimental section requires expansion to include the suggested controls. The revised manuscript will incorporate ablation studies on the adaptive estimation component, cross-dataset zero-shot evaluations to demonstrate generalization, and direct comparisons with fixed-clip CLAHE as well as other learning-based CLAHE approaches. This will enable a proper evaluation of the benefits for both machine recognition and human visual quality. revision: yes
Circularity Check
No circularity: uniform target is external standard, performance claims are empirical
full rationale
The method trains a lightweight estimator via differentiable CLAHE to map tile histograms toward a uniform distribution, which is the externally motivated target of standard histogram equalization rather than a quantity defined by or fitted from the network outputs themselves. Clip limits are generated as network predictions optimized against this fixed external ideal; no recognition or perception loss is used in training, and reported gains are evaluated post hoc on separate benchmarks. No self-citations, uniqueness theorems, or self-definitional reductions appear in the abstract or description. The chain remains self-contained against external benchmarks.
Axiom & Free-Parameter Ledger
free parameters (1)
- clip-limit estimator network weights
axioms (1)
- domain assumption: a uniform histogram distribution is the optimal target for local contrast enhancement, independent of the downstream task