NTIRE 2026 Challenge on Efficient Low Light Image Enhancement: Methods and Results
Pith reviewed 2026-05-08 19:30 UTC · model grok-4.3
The pith
The NTIRE 2026 challenge shows lightweight networks can improve low-light image quality while meeting strict mobile resource limits.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper states that the 27 valid team submissions, of which 17 supplied detailed factsheets, demonstrate measurable advances in the trade-off between enhancement quality and computational efficiency for low-light images on mobile hardware, as measured by the challenge benchmarks.
What carries the argument
The central mechanism is the challenge evaluation protocol that scores each submitted network on both perceptual image quality and explicit efficiency measures such as FLOPs, parameter count, and measured runtime on target mobile platforms.
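A composite score of this kind can be sketched as a quality term minus a penalty on resource-budget overruns. This is a hypothetical illustration only: the weights, budgets, and functional form below are not the official NTIRE scoring rule.

```python
import math

def challenge_score(psnr_db, runtime_ms, params_m, flops_g,
                    runtime_budget_ms=50.0, params_budget_m=1.0,
                    flops_budget_g=10.0):
    """Toy composite score: reward quality, penalize resource use.

    All budgets and weights are hypothetical, for illustration only.
    """
    quality = psnr_db
    # Efficiency ratio: geometric mean of budget fractions (<1 = under budget).
    eff = (runtime_ms / runtime_budget_ms
           * params_m / params_budget_m
           * flops_g / flops_budget_g) ** (1.0 / 3.0)
    # Being under budget adds to the score; overruns subtract from it.
    return quality - 10.0 * math.log2(max(eff, 1e-9))

# At equal PSNR, the faster and smaller model scores higher.
fast = challenge_score(psnr_db=24.0, runtime_ms=25.0, params_m=0.5, flops_g=5.0)
slow = challenge_score(psnr_db=24.0, runtime_ms=100.0, params_m=2.0, flops_g=20.0)
assert fast > slow
```

The point of the sketch is the structure, not the constants: any monotone combination of a quality metric and normalized efficiency measures induces the same kind of quality-vs-cost ranking the protocol relies on.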
If this is right
- Top methods combine architectural pruning and lightweight convolutions to cut computation while preserving detail recovery in dark regions.
- The challenge supplies a public benchmark that future efficient enhancement work can use for direct comparison.
- Practical mobile pipelines can now incorporate these networks without exceeding typical power and latency budgets.
- Subsequent challenges can tighten the efficiency thresholds to drive further reductions in model size.
- The results indicate that quality gains remain possible even after aggressive efficiency constraints are applied.
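One way to see why lightweight convolutions cut computation, as in the first bullet above: replacing a standard convolution with a depthwise-separable one (a common lightweight design, not necessarily the choice of any specific team here) shrinks the weight count dramatically at typical channel widths.

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution followed by a 1x1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

# Illustrative layer: 32 -> 64 channels, 3x3 kernel.
standard = conv_params(32, 64, 3)                   # 18432 weights
separable = depthwise_separable_params(32, 64, 3)   # 288 + 2048 = 2336 weights
assert separable * 7 < standard                     # roughly 7.9x fewer
```

The same factorization reduces multiply-accumulate counts by a similar ratio, which is why it recurs across efficiency-oriented architectures.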
Where Pith is reading between the lines
- Future work could test whether the same networks maintain their advantage when the input comes from actual phone sensors rather than the challenge dataset.
- The efficiency numbers may guide hardware designers in deciding how much dedicated image-processing silicon is still required on mobile chips.
- If the top methods generalize to video, they could support real-time low-light video on phones without frame drops.
Load-bearing premise
The test images and scoring rules used in the challenge accurately reflect the conditions and constraints that matter for real mobile cameras.
What would settle it
An independent evaluation on actual mobile phones in which the top-ranked challenge methods no longer led in either quality or speed would show that the reported progress does not hold outside the challenge setting.
Original abstract
This paper presents a comprehensive review of the NTIRE 2026 Efficient Low Light Image Enhancement (E-LLIE) Challenge, highlighting the proposed solutions and final outcomes. This challenge focuses on mobile image enhancement under low-light conditions, aiming to design lightweight networks that improve enhancement quality while ensuring practical deployability under limited computational resources. A total of 207 participants registered, 27 teams submitted valid entries, and 17 teams ultimately provided valid factsheets. Based on these submissions, this paper provides a systematic evaluation of recent methods for E-LLIE, offering a comprehensive overview of state-of-the-art progress and demonstrating significant improvements in both performance and efficiency.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. This paper presents the NTIRE 2026 Challenge on Efficient Low Light Image Enhancement (E-LLIE), reporting participation from 207 registrants with 27 valid submissions and 17 factsheets. It offers a systematic evaluation of the submitted methods for lightweight low-light image enhancement suitable for mobile devices and claims significant improvements in both performance and efficiency.
Significance. If the claims hold, the paper provides a useful benchmark and overview of recent advances in efficient low-light enhancement, which is relevant for practical mobile applications. It aggregates insights from multiple teams, potentially accelerating progress in the field by highlighting effective lightweight architectures.
Major comments (2)
- [Abstract] The abstract claims 'significant improvements in both performance and efficiency' without detailing the metrics used, the baselines compared against, or the test dataset characteristics. This omission is load-bearing for the paper's purpose as it leaves the central claim of state-of-the-art progress unsupported at the summary level.
- [Challenge Results] The efficiency claims are based on self-reported factsheet data (parameter count, FLOPs, simulated latency) without evidence of independent verification on target mobile hardware. This is a critical weakness for the assertion of 'practical deployability under limited computational resources', as real-world factors like memory bandwidth and quantization may not be accounted for.
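For context on the second comment: the self-reported FLOPs figures are analytic counts that any reader can reproduce from the layer shapes, which is precisely why they can diverge from measured latency. A minimal sketch of the standard multiply-accumulate (MAC) count for one stride-1 convolution, with illustrative shapes (FLOPs are conventionally about 2x the MAC count):

```python
def conv_macs(c_in, c_out, k, h, w):
    """Multiply-accumulate count for a standard k x k convolution
    producing an h x w output feature map (stride 1, no bias)."""
    return c_in * c_out * k * k * h * w

# A 3x3, 32 -> 64 channel convolution on a 256x256 map: ~1.21 G MACs.
macs = conv_macs(32, 64, 3, 256, 256)
print(macs)  # 1207959552
```

Such counts ignore memory traffic, cache behavior, and quantization, which is exactly the gap between factsheet FLOPs and on-device runtime that the referee flags.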
Minor comments (1)
- The manuscript would benefit from including a summary table of the top methods with their reported metrics for easier comparison.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our manuscript. We address each major comment below with clarifications and indicate the revisions we will make to improve the paper.
Point-by-point responses
- Referee: [Abstract] The abstract claims 'significant improvements in both performance and efficiency' without detailing the metrics used, the baselines compared against, or the test dataset characteristics. This omission is load-bearing for the paper's purpose as it leaves the central claim of state-of-the-art progress unsupported at the summary level.
  Authors: We agree that the abstract would benefit from additional specificity to better support the claims. While challenge overview papers often keep abstracts concise, we will revise it to explicitly mention the primary metrics (PSNR and SSIM for quality; parameter count, FLOPs, and reported latency for efficiency), note comparisons to prior NTIRE low-light challenges and standard lightweight baselines, and briefly characterize the test dataset used for evaluation. This change will be incorporated in the revised manuscript. Revision: yes.
- Referee: [Challenge Results] The efficiency claims are based on self-reported factsheet data (parameter count, FLOPs, simulated latency) without evidence of independent verification on target mobile hardware. This is a critical weakness for the assertion of 'practical deployability under limited computational resources', as real-world factors like memory bandwidth and quantization may not be accounted for.
  Authors: We acknowledge the reliance on self-reported factsheet data, which is the established protocol in NTIRE challenges to enable broad participation. Latency figures reflect team-reported measurements on representative mobile hardware rather than centralized independent verification, which was not feasible given the scale (27 submissions). We will add explicit language in the revised manuscript stating the self-reported nature of these metrics and include a dedicated discussion of limitations, including potential effects of quantization, memory bandwidth, and hardware variability. This will temper the deployability claims while preserving the overview value of the aggregated results. Revision: partial.
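The PSNR metric named in the authors' first response is fully standard. A self-contained sketch, taking images as flat lists of pixel intensities for simplicity:

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given here as flat lists of pixel intensities in [0, max_val]."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Identical images give infinite PSNR; a 1-level perturbation per pixel
# of an 8-bit image gives roughly 48 dB.
ref = [100, 120, 140, 160]
out = [101, 119, 141, 159]
assert psnr(ref, ref) == float("inf")
assert 40 < psnr(ref, out) < 60
```

SSIM, the other quality metric the authors cite, compares local luminance, contrast, and structure statistics rather than raw pixel error, and is usually computed with a library implementation rather than by hand.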
Circularity Check
Challenge report aggregates external submissions without internal derivations
Full rationale
This is a standard NTIRE challenge report that compiles results and factsheets from 17 independent external teams. No equations, fitted parameters, predictions, or derivations appear in the manuscript. The central claims rest on tabulated participant submissions and standard challenge metrics rather than any self-referential construction or author-specific ansatz. Self-citations, if present, are incidental and not load-bearing for the reported outcomes.