pith. machine review for the scientific record.

arxiv: 2604.14558 · v1 · submitted 2026-04-16 · 💻 cs.CV

Recognition: unknown

The Fourth Challenge on Image Super-Resolution (times4) at NTIRE 2026: Benchmark Results and Method Overview

Zheng Chen, Kai Liu, Jingkai Wang, Xianglong Yan, Jianze Li, Ziqing Zhang, Jue Gong, Jiatong Li
Lei Sun Xiaoyang Liu Radu Timofte Yulun Zhang Jihye Park Yoonjin Im Hyungju Chun Hyunhee Park MinKyu Park Zheng Xie Xiangyu Kong Weijun Yuan Zhan Li Qiurong Song Luen Zhu Fengkai Zhang Xinzhe Zhu Junyang Chen Congyu Wang Yixin Yang Zhaorun Zhou Jiangxin Dong Jinshan Pan Shengwei Wang Jiajie Ou Baiang Li Sizhuo Ma Qiang Gao Jusheng Zhang Jian Wang Keze Wang Yijiao Liu Yingsi Chen Hui Li Yu Wang Congchao Zhu Saeed Ahmad Ik Hyun Lee Jun Young Park Ji Hwan Yoon Kainan Yan Zian Wang Weibo Wang Shihao Zou Chao Dong Wei Zhou Linfeng Li Jaeseong Lee Jaeho Chae Jinwoo Kim Seonjoo Kim Yucong Hong Zhenming Yan Junye Chen Ruize Han Song Wang Yuxuan Jiang Chengxi Zeng Tianhao Peng Fan Zhang David Bull Tongyao Mu Qiong Cao Yifan Wang Youwei Pan Leilei Cao Xiaoping Peng Wei Deng Yifei Chen WenBo Xiong Xian Hu Yuxin Zhang Xiaoyun Cheng Yang Ji Zonghao Chen Zhihao Xue Junqin Hu Nihal Kumar Snehal Singh Tomar Klaus Mueller Surya Vashisth Prateek Shaily Jayant Kumar Hardik Sharma Ashish Negi Sachin Chaudhary Akshay Dudhane Praful Hambarde Amit Shukla Shijun Shi Jiangning Zhang Yong Liu Kai Hu Jing Xu Xianfang Zeng Amitesh M Hariharan S Chia-Ming Lee Yu-Fan Lin Chih-Chung Hsu Nishalini K Sreenath K A Bilel Benjdira Anas M. Ali Wadii Boulila Shuling Zheng Zhiheng Fu Feng Zhang Zhanglu Chen Boyang Yao Nikhil Pathak Aagam Jain Milan Kumar Kishor Upla Vivek Chavda Sarang N S Raghavendra Ramachandra Zhipeng Zhang Qi Wang Shiyu Wang Jiachen Tu Guoyi Xu Yaoxin Jiang Jiajia Liu Yaokun Shi Yuqi Li Chuanguang Yang Weilun Feng Zhuzhi Hong Hao Wu Junming Liu Yingli Tian Amish Bhushan Kulkarni Tejas R R Shet Saakshi M Vernekar Nikhil Akalwadi Kaushik Mallibhat Ramesh Ashok Tabib Uma Mudenagudi Yuwen Pan Tianrun Chen Deyi Ji Qi Zhu Lanyun Zhu Heyan Zhangyi

Pith reviewed 2026-05-10 11:38 UTC · model grok-4.3

classification 💻 cs.CV
keywords image super-resolution · NTIRE challenge · benchmark · PSNR · perceptual quality · bicubic downsampling · computer vision

The pith

The NTIRE 2026 challenge creates a standardized benchmark for four-times image super-resolution using separate tracks for pixel accuracy and visual quality.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper describes the fourth NTIRE image super-resolution challenge at a scaling factor of four. Participants receive low-resolution images produced by bicubic downsampling and must generate high-resolution outputs. The setup splits evaluation into a restoration track that ranks entries by PSNR for fidelity and a perceptual track that ranks them by a visual quality score. With 194 registrations and 31 valid submissions, the report compiles the datasets, protocols, top results, and method summaries. A reader cares because the benchmark supplies a common reference point for measuring progress and identifying open problems in recovering detail from downsampled images.
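As a concrete sketch of the evaluation setup: the block below degrades a toy HR image at ×4 and scores a reconstruction with PSNR, the restoration track's ranking metric. The 4×4 block averaging here is only a stand-in for the bicubic filter the challenge actually uses, and the array shapes and peak value are illustrative assumptions, not details from the paper.

```python
import numpy as np

def downsample_x4(hr: np.ndarray) -> np.ndarray:
    """x4 downsampling by 4x4 block averaging -- a stand-in for the
    bicubic filter the challenge actually uses."""
    h, w = hr.shape[0] - hr.shape[0] % 4, hr.shape[1] - hr.shape[1] % 4
    return hr[:h, :w].reshape(h // 4, 4, w // 4, 4, -1).mean(axis=(1, 3))

def psnr(ref: np.ndarray, est: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB, the restoration track's metric."""
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a flat gray HR image and an estimate that is off by 1 everywhere.
hr = np.full((64, 64, 3), 128, dtype=np.uint8)
est = hr.astype(np.int16) + 1
print(round(psnr(hr, est), 2))  # MSE = 1, so 10*log10(255^2) ≈ 48.13 dB
print(downsample_x4(hr).shape)  # (16, 16, 3)
```

Higher PSNR means lower mean-squared error against the ground-truth HR image; the perceptual track replaces this metric with a visual-quality score.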

Core claim

The challenge supplies a unified benchmark and yields insights into current progress and future directions in image super-resolution by evaluating 31 submitted methods on bicubic-downsampled inputs across a restoration track scored by PSNR and a perceptual track scored by visual realism.

What carries the argument

The two-track evaluation system that ranks submissions separately by PSNR for pixel fidelity and by a perceptual score for visual realism on the same set of bicubic ×4 inputs.

If this is right

  • Methods strong on PSNR frequently trade off against perceptual scores, revealing an inherent tension between numerical fidelity and visual appeal.
  • The collected results from 31 teams provide a concrete snapshot of achievable quality at ×4 scale under controlled conditions.
  • The released datasets and protocol become a reference point for comparing new algorithms without re-running the full challenge.
  • Observed patterns in submitted methods highlight recurring techniques such as attention modules or loss combinations that future work can build upon.
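The fidelity/perception tension in the first bullet is easy to see mechanically: rank the same submissions once by PSNR and once by a perceptual score, and the two orders need not agree. The scores below are invented for illustration, not taken from the challenge results.

```python
# Hypothetical scores for three submissions (not from the paper).
submissions = {
    "team_a": {"psnr": 31.2, "perceptual": 0.61},
    "team_b": {"psnr": 30.4, "perceptual": 0.74},
    "team_c": {"psnr": 31.0, "perceptual": 0.69},
}

# Restoration track: higher PSNR wins. Perceptual track: higher score wins.
restoration_rank = sorted(submissions, key=lambda t: -submissions[t]["psnr"])
perceptual_rank = sorted(submissions, key=lambda t: -submissions[t]["perceptual"])

print(restoration_rank)  # ['team_a', 'team_c', 'team_b']
print(perceptual_rank)   # ['team_b', 'team_c', 'team_a']
```

A method that wins one track while placing last on the other is exactly the pattern the dual-track design is meant to expose.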

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Extending the same dual-track format to other degradations like motion blur or sensor noise could test whether the observed trade-offs persist.
  • If winning perceptual-track entries generalize to video frames, they could inform practical upscaling pipelines for streaming or mobile devices.
  • A follow-up experiment applying the same methods to non-bicubic kernels would quantify how much the benchmark's conclusions depend on the exact downsampling operator.

Load-bearing premise

Bicubic downsampling at factor four combined with PSNR and perceptual scores sufficiently represents the real difficulties of practical image super-resolution.

What would settle it

Demonstrating that the top-ranked methods produce visibly inferior results on images degraded by actual camera sensors, noise, or compression rather than pure bicubic downsampling would show the benchmark misses key real-world cases.

Figures

Figures reproduced from arXiv: 2604.14558; full author list as above.

Figure 1
Figure 1: Team SamsungAICamera 3. Generative backbones, including diffusion and rectified-flow models, are playing an increasingly important role in perceptual super-resolution. Their effectiveness is further improved by task-specific adaptation techniques such as LoRA tuning, conditional guidance, and structure-aware generation. 4. Explicit conditioning on degradation, structure, and semantic information helps mo…
Figure 3
Figure 3: Team VEPG, where L_LRR aligns the low-quality input at the specified timestep with the noised high-quality target, L_L1 preserves pixel-level fidelity, and L_GAN improves visual realism. Since OMGSR is a one-step diffusion framework built on a pretrained model, the team further introduces no-reference image quality assessment (NR-IQA) losses [34, 69, 78] to enhance perceptual quality. Following this design, t…
Figure 4
Figure 4: Team SR-Strugglers The final prediction is obtained through a fixed global pixel-wise fusion of the two branch outputs. Specifically, the team combines the HAT-style output and the MSHAT-based output using a weighted average, where the fusion weight is set to w = 0.04 according to validation-set performance. According to the team, the main challenge is not designing new model components, but rather identi…
Figure 6
Figure 6: Team IK-LAB and therefore produce complementary reconstruction errors, which can be exploited through adaptive fusion. Only the fusion module is trained, while both pretrained backbones remain fixed, in order to preserve their original restoration capability and reduce the risk of overfitting. HAT-IQCMix is based on the Hybrid Attention Transformer and incorporates image-quality-adaptive conditional mix…
read the original abstract

This paper presents the NTIRE 2026 image super-resolution ($\times$4) challenge, one of the associated competitions of the NTIRE 2026 Workshop at CVPR 2026. The challenge aims to reconstruct high-resolution (HR) images from low-resolution (LR) inputs generated through bicubic downsampling with a $\times$4 scaling factor. The objective is to develop effective super-resolution solutions and analyze recent advances in the field. To reflect the evolving objectives of image super-resolution, the challenge includes two tracks: (1) a restoration track, which emphasizes pixel-wise fidelity and ranks submissions based on PSNR; and (2) a perceptual track, which focuses on visual realism and evaluates results using a perceptual score. A total of 194 participants registered for the challenge, with 31 teams submitting valid entries. This report summarizes the challenge design, datasets, evaluation protocol, main results, and methods of participating teams. The challenge provides a unified benchmark and offers insights into current progress and future directions in image super-resolution.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

0 major / 2 minor

Summary. The manuscript reports on the NTIRE 2026 Image Super-Resolution ×4 challenge organized as part of the NTIRE 2026 Workshop. It defines two tracks (a PSNR-ranked restoration track and a perceptual-quality track), describes the use of bicubic downsampling at ×4, notes 194 registrations with 31 valid submissions, and summarizes the datasets, evaluation protocol, main results, and overviews of participating methods. The central claim is that the challenge supplies a unified benchmark and yields insights into current progress and future directions in image super-resolution.

Significance. If the reported participation numbers, protocol, and method summaries are accurate, the paper provides a useful community reference point by documenting a large-scale, dual-objective benchmark under fixed conditions. The separation into restoration and perceptual tracks usefully reflects the field's dual goals. Such reports help track incremental advances, though their significance is primarily archival rather than introducing new technical contributions.

minor comments (2)
  1. [Abstract] Abstract: the assertion that the challenge 'offers insights into current progress and future directions' is not accompanied by any comparative analysis with prior NTIRE SR challenges, trend quantification, or explicit forward-looking discussion; this phrasing should be tempered or supported with concrete observations from the results.
  2. The manuscript would benefit from explicit statements of the training/validation/test image counts and any additional constraints (e.g., runtime or parameter limits) applied to submissions, to allow readers to fully interpret the reported scores.

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for the careful review and positive evaluation of our manuscript on the NTIRE 2026 Image Super-Resolution ×4 challenge. The referee accurately summarizes the dual-track design, participation statistics, and the archival role of such reports. We note the recommendation for minor revision.

Circularity Check

0 steps flagged

No significant circularity; purely descriptive benchmark report

full rationale

The paper is a factual summary of an external competition (NTIRE 2026 ×4 SR challenge). It reports registration numbers, submission counts, dataset construction via bicubic downsampling, evaluation protocol (PSNR and perceptual score), and lists participating methods without advancing any mathematical derivation, fitted prediction, or causal claim that reduces to its own inputs. No equations, self-definitional steps, or load-bearing self-citations appear in the provided text. The central claim—that the challenge supplies a unified benchmark—is supported directly by the enumerated facts of the event itself and requires no circular reduction.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

This is a report on an empirical challenge with no new theoretical derivations, free parameters, or invented entities.

pith-pipeline@v0.9.0 · 6151 in / 867 out tokens · 24963 ms · 2026-05-10T11:38:28.200842+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

90 extracted references · 7 canonical work pages · 4 internal anchors

  1. [1]

    NT-HAZE: A Benchmark Dataset for Realistic Night-time Image Dehazing

    Radu Ancuti, Codruta Ancuti, Radu Timofte, and Cosmin Ancuti. NT-HAZE: A Benchmark Dataset for Realistic Night-time Image Dehazing . InCVPRW, 2026. 2

  2. [2]

    NTIRE 2026 Nighttime Image Dehazing Challenge Report

    Radu Ancuti, Alexandru Brateanu, Florin Vasluianu, Raul Balmez, Ciprian Orhei, Codruta Ancuti, Radu Timofte, Cosmin Ancuti, et al. NTIRE 2026 Nighttime Image Dehazing Challenge Report. In CVPRW, 2026. 2. https://github.com/zhengchen1999/NTIRE2026_ImageSR_x4/releases/download/v1/NTIRE2026_ImageSR_x4_Supplementary_Material.pdf

  3. [3]

    Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets

    Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023. 2

  4. [4]

    Z-Image: An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer

    Huanqia Cai, Sihan Cao, Ruoyi Du, Peng Gao, Steven Hoi, Zhaohui Hou, Shijie Huang, Dengyang Jiang, Xin Jin, Liangchen Li, et al. Z-image: An efficient image generation foundation model with single-stream diffusion transformer. arXiv preprint arXiv:2511.22699, 2025. 8

  5. [5]

    NTIRE 2026 Challenge on Single Image Reflection Removal in the Wild: Datasets, Results, and Methods

    Jie Cai, Kangning Yang, Zhiyuan Li, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 Challenge on Single Image Reflection Removal in the Wild: Datasets, Results, and Methods. In CVPRW, 2026. 2

  6. [6]

    Computer vision applied to super resolution

    David Capel and Andrew Zisserman. Computer vision applied to super resolution. IEEE Signal Processing Magazine.

  7. [7]

    Attention in attention network for image super-resolution

    Haoyu Chen, Jinjin Gu, and Zhi Zhang. Attention in attention network for image super-resolution. In CVPR, 2022. 2

  8. [8]

    FaithDiff: Unleashing diffusion priors for faithful image super-resolution

    Junyang Chen, Jinshan Pan, and Jiangxin Dong. FaithDiff: Unleashing diffusion priors for faithful image super-resolution. In CVPR, 2025. 7

  9. [9]

    Simple baselines for image restoration

    Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun. Simple baselines for image restoration. InECCV, 2022. 6

  10. [10]

    Activating more pixels in image super-resolution transformer

    Xiangyu Chen, Xintao Wang, Wenlong Zhang, Xiangtao Kong, Yu Qiao, Jiantao Zhou, and Chao Dong. Activating more pixels in image super-resolution transformer. InCVPR,

  11. [11]

    Hat: Hybrid attention transformer for image restoration

    Xiangyu Chen, Xintao Wang, Wenlong Zhang, Xiangtao Kong, Yu Qiao, Jiantao Zhou, and Chao Dong. Hat: Hybrid attention transformer for image restoration. arXiv preprint arXiv:2309.05239, 2023. 6, 8

  12. [12]

    Dual aggregation transformer for image super-resolution

    Zheng Chen, Yulun Zhang, Jinjin Gu, Linghe Kong, Xiaokang Yang, and Fisher Yu. Dual aggregation transformer for image super-resolution. In ICCV, 2023. 8

  13. [13]

    Ntire 2024 challenge on image super-resolution (x4): Methods and results

    Zheng Chen, Zongwei Wu, Eduard Zamfir, Kai Zhang, Yulun Zhang, Radu Timofte, et al. Ntire 2024 challenge on image super-resolution (x4): Methods and results. In CVPRW,

  14. [14]

    Recursive generalization transformer for image super-resolution

    Zheng Chen, Yulun Zhang, Jinjin Gu, Linghe Kong, and Xiaokang Yang. Recursive generalization transformer for image super-resolution. In ICLR, 2024. 2

  15. [15]

    Ntire 2025 challenge on image super-resolution (x4): Methods and results

    Zheng Chen, Kai Liu, Jue Gong, Jingkai Wang, Lei Sun, Zongwei Wu, Radu Timofte, Yulun Zhang, et al. Ntire 2025 challenge on image super-resolution (x4): Methods and results. In CVPRW, 2025. 2, 3, 4

  16. [16]

    The Fourth Challenge on Image Super-Resolution (×4) at NTIRE 2026: Benchmark Results and Method Overview

    Zheng Chen, Kai Liu, Jingkai Wang, Xianglong Yan, Jianze Li, Ziqing Zhang, Jue Gong, Jiatong Li, Lei Sun, Xiaoyang Liu, Radu Timofte, Yulun Zhang, et al. The Fourth Challenge on Image Super-Resolution (×4) at NTIRE 2026: Benchmark Results and Method Overview. In CVPRW,

  17. [17]

    Low Light Image Enhancement Challenge at NTIRE 2026

    George Ciubotariu, Sharif S M A, Abdur Rehman, Fayaz Ali Dharejo, Rizwan Ali Naqvi, Marcos Conde, Radu Timofte, et al. Low Light Image Enhancement Challenge at NTIRE 2026. In CVPRW, 2026. 2

  18. [18]

    High FPS Video Frame Interpolation Challenge at NTIRE 2026

    George Ciubotariu, Zhuyun Zhou, Yeying Jin, Zongwei Wu, Radu Timofte, et al. High FPS Video Frame Interpolation Challenge at NTIRE 2026 . InCVPRW, 2026. 2

  19. [19]

    Second-order attention network for single image super-resolution

    Tao Dai, Jianrui Cai, Yongbing Zhang, Shu-Tao Xia, and Lei Zhang. Second-order attention network for single image super-resolution. InCVPR, 2019. 2

  20. [20]

    Image quality assessment: Unifying structure and texture similarity

    Keyan Ding, Kede Ma, Shiqi Wang, and Eero P Simoncelli. Image quality assessment: Unifying structure and texture similarity. IEEE TPAMI, 2020. 7

  21. [21]

    Learning a deep convolutional network for image super-resolution

    Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Learning a deep convolutional network for image super-resolution. InECCV, 2014. 2

  22. [22]

    Tsd-sr: One-step diffusion with target score distillation for real-world image super-resolution

    Linwei Dong, Qingnan Fan, Yihong Guo, Zhonghao Wang, Qi Zhang, Jinwei Chen, Yawei Luo, and Changqing Zou. Tsd-sr: One-step diffusion with target score distillation for real-world image super-resolution. CVPR, 2025. 2

  23. [23]

    NTIRE 2026 Rip Current Detection and Segmentation (RipDetSeg) Challenge Report

    Andrei Dumitriu, Aakash Ralhan, Florin Miron, Florin Tatui, Radu Tudor Ionescu, Radu Timofte, et al. NTIRE 2026 Rip Current Detection and Segmentation (RipDetSeg) Challenge Report . InCVPRW, 2026. 2

  24. [24]

    Photography Retouching Transfer, NTIRE 2026 Challenge: Report

    Omar Elezabi, Marcos V. Conde, Zongwei Wu, Yeying Jin, Radu Timofte, et al. Photography Retouching Transfer, NTIRE 2026 Challenge: Report. In CVPRW, 2026. 2

  25. [25]

    Generative adversarial nets

    Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. InNeurIPS,

  26. [26]

    Mamba: Linear-Time Sequence Modeling with Selective State Spaces

    Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023. 2

  27. [27]

    NTIRE 2026 Challenge on End-to-End Financial Receipt Restoration and Reasoning from Degraded Images: Datasets, Methods and Results

    Bochen Guan, Jinlong Li, Kangning Yang, Chuang Ke, Jie Cai, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 Challenge on End-to-End Financial Receipt Restoration and Reasoning from Degraded Images: Datasets, Methods and Results . InCVPRW, 2026. 2

  28. [28]

    NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: AI Flash Portrait (Track 3)

    Ya-nan Guan, Shaonan Zhang, Hang Guo, Yawen Wang, Xinying Fan, Jie Liang, Hui Zeng, Guanyi Qin, Lishen Qu, Tao Dai, Shu-Tao Xia, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: AI Flash Portrait (Track 3) . InCVPRW, 2026. 2

  29. [29]

    Mambairv2: Attentive state space restoration

    Hang Guo, Yong Guo, Yaohua Zha, Yulun Zhang, Wenbo Li, Tao Dai, Shu-Tao Xia, and Yawei Li. Mambairv2: Attentive state space restoration. InCVPR, 2025. 2

  30. [30]

    MambaIR: A simple baseline for image restoration with state-space model

    Hang Guo, Jinmin Li, Tao Dai, Zhihao Ouyang, Xudong Ren, and Shu-Tao Xia. MambaIR: A simple baseline for image restoration with state-space model. InECCV, 2025. 2

  31. [31]

    NTIRE 2026 Challenge on Robust AI-Generated Image Detection in the Wild

    Aleksandr Gushchin, Khaled Abud, Ekaterina Shumitskaya, Artem Filippov, Georgii Bychkov, Sergey Lavrushkin, Mikhail Erofeev, Anastasia Antsiferova, Changsheng Chen, Shunquan Tan, Radu Timofte, Dmitriy Vatolin, et al. NTIRE 2026 Challenge on Robust AI-Generated Image Detection in the Wild . InCVPRW, 2026. 2

  32. [32]

    Deep residual learning for image recognition

    Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. InCVPR,

  33. [33]

    Robust Deepfake Detection, NTIRE 2026 Challenge: Report

    Benedikt Hopf, Radu Timofte, et al. Robust Deepfake Detection, NTIRE 2026 Challenge: Report. In CVPRW, 2026. 2

  34. [34]

    Musiq: Multi-scale image quality transformer

    Junjie Ke, Qifei Wang, Yilin Wang, Peyman Milanfar, and Feng Yang. Musiq: Multi-scale image quality transformer. InICCV, 2021. 7

  35. [35]

    NTIRE 2026 Low-light Enhancement: Twilight Cowboy Challenge

    Aleksei Khalin, Egor Ershov, Artem Panshin, Sergey Korchagin, Georgiy Lobarev, Arseniy Terekhin, Sofiia Dorogova, Amir Shamsutdinov, Yasin Mamedov, Bakhtiyar Khalfin, Bogdan Sheludko, Emil Zilyaev, Nikola Banić, Georgy Perevozchikov, Radu Timofte, et al. NTIRE 2026 Low-light Enhancement: Twilight Cowboy Challenge. In CVPRW, 2026. 2

  36. [36]

    Ntire 2022 challenge on efficient super-resolution: Methods and results

    Fahad Shahbaz Khan and Salman Khan. Ntire 2022 challenge on efficient super-resolution: Methods and results. In CVPRW, 2022. 2

  37. [37]

    Training transformer models by wavelet losses improves quantitative and visual performance in single image super-resolution

    Cansu Korkmaz and A. Murat Tekalp. Training transformer models by wavelet losses improves quantitative and visual performance in single image super-resolution. In CVPR,

  38. [38]

    FLUX.2: Frontier Visual Intelligence

    Black Forest Labs. FLUX.2: Frontier Visual Intelligence. https://bfl.ai/blog/flux-2, 2025. 2, 7

  39. [39]

    One diffusion step to real-world super-resolution via flow trajectory distillation

    Jianze Li, Jiezhang Cao, Yong Guo, Wenbo Li, and Yulun Zhang. One diffusion step to real-world super-resolution via flow trajectory distillation. InICML, 2025. 2

  40. [40]

    Distillation-free one-step diffusion for real-world image super-resolution

    Jianze Li, Jiezhang Cao, Zichen Zou, Xiongfei Su, Xin Yuan, Yulun Zhang, Yong Guo, and Xiaokang Yang. Distillation-free one-step diffusion for real-world image super-resolution. InNeurIPS, 2025. 2

  41. [41]

    The First Challenge on Mobile Real-World Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview

    Jiatong Li, Zheng Chen, Kai Liu, Jingkai Wang, Zihan Zhou, Xiaoyang Liu, Libo Zhu, Radu Timofte, Yulun Zhang, et al. The First Challenge on Mobile Real-World Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview. In CVPRW, 2026. 2

  42. [42]

    NTIRE 2026 Challenge on Short-form UGC Video Restoration in the Wild with Generative Models: Datasets, Methods and Results

    Xin Li, Jiachao Gong, Xijun Wang, Shiyao Xiong, Bingchen Li, Suhang Yao, Chao Zhou, Zhibo Chen, Radu Timofte, et al. NTIRE 2026 Challenge on Short-form UGC Video Restoration in the Wild with Generative Models: Datasets, Methods and Results . InCVPRW, 2026. 2

  43. [43]

    NTIRE 2026 The Second Challenge on Day and Night Raindrop Removal for Dual-Focused Images: Methods and Results

    Xin Li, Yeying Jin, Suhang Yao, Beibei Lin, Zhaoxin Fan, Wending Yan, Xin Jin, Zongwei Wu, Bingchen Li, Peishu Shi, Yufei Yang, Yu Li, Zhibo Chen, Bihan Wen, Robby Tan, Radu Timofte, et al. NTIRE 2026 The Second Challenge on Day and Night Raindrop Removal for Dual-Focused Images: Methods and Results . InCVPRW, 2026. 2

  44. [44]

    Lsdir: A large scale dataset for image restoration

    Yawei Li, Kai Zhang, Jingyun Liang, Jiezhang Cao, Ce Liu, Rui Gong, Yulun Zhang, Hao Tang, Yun Liu, Denis Demandolx, et al. Lsdir: A large scale dataset for image restoration. In CVPR, 2023. 3

  45. [45]

    The First Challenge on Remote Sensing Infrared Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview

    Kai Liu, Haoyang Yue, Zeli Lin, Zheng Chen, Jingkai Wang, Jue Gong, Radu Timofte, Yulun Zhang, et al. The First Challenge on Remote Sensing Infrared Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview. In CVPRW, 2026. 2

  46. [46]

    3D Restoration and Reconstruction in Adverse Conditions: RealX3D Challenge Results

    Shuhong Liu, Ziteng Cui, Chenyu Bao, Xuangeng Chu, Lin Gu, Bin Ren, Radu Timofte, Marcos V. Conde, et al. 3D Restoration and Reconstruction in Adverse Conditions: RealX3D Challenge Results. In CVPRW, 2026. 2

  47. [47]

    NTIRE 2026 X-AIGC Quality Assessment Challenge: Methods and Results

    Xiaohong Liu, Xiongkuo Min, Guangtao Zhai, Qiang Hu, Jiezhang Cao, Yu Zhou, Wei Sun, Farong Wen, Zitong Xu, Yingjie Zhou, Huiyu Duan, Lu Liu, Jiarui Wang, Siqi Luo, Chunyi Li, Li Xu, Zicheng Zhang, Yue Shi, Yubo Wang, Minghong Zhang, Chunchao Guo, Zhichao Hu, Mingtao Chen, Xiele Wu, Xin Ma, Zhaohe Lv, Yuanhao Xue, Jiaqi Wang, Xinxing Sha, Radu Timofte, et...

  48. [48]

    Swin transformer: Hierarchical vision transformer using shifted windows

    Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, 2021. 2

  49. [49]

    Image super-resolution with cross-scale non-local attention and exhaustive self-exemplars mining

    Yiqun Mei, Yuchen Fan, Yuqian Zhou, Lichao Huang, Thomas S Huang, and Humphrey Shi. Image super-resolution with cross-scale non-local attention and exhaustive self-exemplars mining. In CVPR, 2020. 2

  50. [50]

    NTIRE 2026 Challenge on Video Saliency Prediction: Methods and Results

    Andrey Moskalenko, Alexey Bryncev, Ivan Kosmynin, Kira Shilovskaya, Mikhail Erofeev, Dmitry Vatolin, Radu Timofte, et al. NTIRE 2026 Challenge on Video Saliency Prediction: Methods and Results. In CVPRW, 2026. 2

  51. [51]

    Single image super-resolution via a holistic attention network

    Ben Niu, Weilei Wen, Wenqi Ren, Xiangde Zhang, Lianping Yang, Shuzhen Wang, Kaihao Zhang, Xiaochun Cao, and Haifeng Shen. Single image super-resolution via a holistic attention network. InECCV, 2020. 2

  52. [52]

    NTIRE 2026 Challenge on Efficient Burst HDR and Restoration: Datasets, Methods, and Results

    Hyunhee Park, Eunpil Park, Sangmin Lee, Radu Timofte, et al. NTIRE 2026 Challenge on Efficient Burst HDR and Restoration: Datasets, Methods, and Results . InCVPRW,

  53. [53]

    NTIRE 2026 Challenge on Learned Smartphone ISP with Unpaired Data: Methods and Results

    Georgy Perevozchikov, Daniil Vladimirov, Radu Timofte, et al. NTIRE 2026 Challenge on Learned Smartphone ISP with Unpaired Data: Methods and Results . InCVPRW,

  54. [54]

    NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Professional Image Quality Assessment (Track 1)

    Guanyi Qin, Jie Liang, Bingbing Zhang, Lishen Qu, Ya-nan Guan, Hui Zeng, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Professional Image Quality Assessment (Track 1). In CVPRW, 2026. 2

  55. [55]

    The Second Challenge on Cross-Domain Few-Shot Object Detection at NTIRE 2026: Methods and Results

    Xingyu Qiu, Yuqian Fu, Jiawei Geng, Bin Ren, Jiancheng Pan, Zongwei Wu, Hao Tang, Yanwei Fu, Radu Timofte, Nicu Sebe, Mohamed Elhoseiny, et al. The Second Challenge on Cross-Domain Few-Shot Object Detection at NTIRE 2026: Methods and Results. In CVPRW, 2026. 2

  56. [56]

    NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Multi-Exposure Image Fusion in Dynamic Scenes (Track 2)

    Lishen Qu, Yao Liu, Jie Liang, Hui Zeng, Wen Dai, Ya-nan Guan, Guanyi Qin, Shihao Zhou, Jufeng Yang, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Multi-Exposure Image Fusion in Dynamic Scenes (Track 2). In CVPRW, 2026. 2

  57. [57]

    Hierarchical Text-Conditional Image Generation with CLIP Latents

    Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with CLIP latents. arXiv preprint arXiv:2204.06125,

  58. [58]

    The Eleventh NTIRE 2026 Efficient Super-Resolution Challenge Report

    Bin Ren, Hang Guo, Yan Shu, Jiaqi Ma, Ziteng Cui, Shuhong Liu, Guofeng Mei, Lei Sun, Zongwei Wu, Fahad Shahbaz Khan, Salman Khan, Radu Timofte, Yawei Li, et al. The Eleventh NTIRE 2026 Efficient Super-Resolution Challenge Report . InCVPRW, 2026. 2

  59. [59]

    High-resolution image synthesis with latent diffusion models

    Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, 2022. 2

  60. [60]

    The First Controllable Bokeh Rendering Challenge at NTIRE 2026

    Tim Seizinger, Florin-Alexandru Vasluianu, Marcos V. Conde, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. The First Controllable Bokeh Rendering Challenge at NTIRE 2026. In CVPRW, 2026. 2

  61. [61]

    Cardiac image super-resolution with global correspondence using multi-atlas patchmatch

    Wenzhe Shi, Jose Caballero, Christian Ledig, Xiahai Zhuang, Wenjia Bai, Kanwal Bhatia, Antonio M Simoes Monteiro de Marvao, Tim Dawes, Declan O'Regan, and Daniel Rueckert. Cardiac image super-resolution with global correspondence using multi-atlas patchmatch. In MICCAI, 2013. 2

  [62] Lei Sun, Hang Guo, Bin Ren, Shaolin Su, Xian Wang, Danda Pani Paudel, Luc Van Gool, Radu Timofte, Yawei Li, et al. The Third Challenge on Image Denoising at NTIRE 2026: Methods and Results. In CVPRW, 2026. 2

  [63] Lei Sun, Weilun Li, Xian Wang, Zhendong Li, Letian Shi, Dannong Xu, Deheng Zhang, Mengshun Hu, Shuang Guo, Shaolin Su, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. The Second Challenge on Event-Based Image Deblurring at NTIRE 2026: Methods and Results. In CVPRW, 2026. 2

  [64] Lei Sun, Xiaolong Qian, Qi Jiang, Xian Wang, Yao Gao, Kailun Yang, Kaiwei Wang, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. NTIRE 2026 The First Challenge on Blind Computational Aberration Correction: Methods and Results. In CVPRW, 2026. 2

  [65] Radu Timofte, Rasmus Rothe, and Luc Van Gool. Seven ways to improve example-based single image super resolution. In CVPR, 2016. 2, 4, 7, 9

  [66] Radu Timofte, Eirikur Agustsson, Luc Van Gool, Ming-Hsuan Yang, Lei Zhang, Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, Kyoung Mu Lee, et al. NTIRE 2017 challenge on single image super-resolution: Methods and results. In CVPRW, 2017. 2, 3

  [67] Florin-Alexandru Vasluianu, Tim Seizinger, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. Learning-Based Ambient Lighting Normalization: NTIRE 2026 Challenge Results and Findings. In CVPRW, 2026. 2

  [68] Florin-Alexandru Vasluianu, Tim Seizinger, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. Advances in Single-Image Shadow Removal: Results from the NTIRE 2026 Challenge. In CVPRW, 2026. 2

  [69] Jianyi Wang, Kelvin CK Chan, and Chen Change Loy. Exploring CLIP for assessing the look and feel of images. In AAAI, 2023. 7

  [70] Jingkai Wang, Jue Gong, Zheng Chen, Kai Liu, Jiatong Li, Yulun Zhang, Radu Timofte, et al. The Second Challenge on Real-World Face Restoration at NTIRE 2026: Methods and Results. In CVPRW, 2026. 2

  [71] Longguang Wang, Yulan Guo, Yingqian Wang, Juncheng Li, Sida Peng, Ye Zhang, Radu Timofte, Minglin Chen, Yi Wang, Qibin Hu, Wenjie Lei, et al. NTIRE 2026 Challenge on 3D Content Super-Resolution: Methods and Results. In CVPRW, 2026. 2

  [72] Xintao Wang, Liangbin Xie, Chao Dong, and Ying Shan. Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. In ICCVW, 2021. 2

  [73] Yufei Wang, Wenhan Yang, Xinyuan Chen, Yaohui Wang, Lanqing Guo, Lap-Pui Chau, Ziwei Liu, Yu Qiao, Alex C Kot, and Bihan Wen. SinSR: Diffusion-based image super-resolution in a single step. In CVPR, 2024. 2

  [74] Yingqian Wang, Zhengyu Liang, Fengyuan Zhang, Wending Zhao, Longguang Wang, Juncheng Li, Jungang Yang, Radu Timofte, Yulan Guo, et al. NTIRE 2026 Challenge on Light Field Image Super-Resolution: Methods and Results. In CVPRW, 2026. 2

  [75] Rongyuan Wu, Lingchen Sun, Zhiyuan Ma, and Lei Zhang. One-step effective diffusion network for real-world image super-resolution. In NeurIPS, 2024. 2

  [76] Zhiqiang Wu, Zhaomang Sun, Tong Zhou, Bingtao Fu, Ji Cong, Yitong Dong, Huaqi Zhang, Xuan Tang, Mingsong Chen, and Xian Wei. Omgsr: You only need one mid-timestep guidance for real-world image super-resolution. arXiv preprint arXiv:2508.08227, 2025. 7, 8

  [77] Jiebin Yan, Chenyu Tu, Qinghua Lin, Zongwei Wu, Weixia Zhang, Zhihua Wang, Peibei Cao, Yuming Fang, Xiaoning Liu, Zhuyun Zhou, Radu Timofte, et al. Efficient Low Light Image Enhancement: NTIRE 2026 Challenge Report. In CVPRW, 2026. 2

  [78] Sidi Yang, Tianhe Wu, Shuwei Shi, Shanshan Lao, Yuan Gong, Mingdeng Cao, Jiahao Wang, and Yujiu Yang. MANIQA: Multi-dimension attention network for no-reference image quality assessment. In CVPRW, 2022. 7

  [79] Pierluigi Zama Ramirez, Fabio Tosi, Luigi Di Stefano, Radu Timofte, Alex Costanzino, Matteo Poggi, Samuele Salti, Stefano Mattoccia, et al. NTIRE 2026 Challenge on High-Resolution Depth of non-Lambertian Surfaces. In CVPRW, 2026.

  [80] He Zhang, Vishwanath Sindagi, and Vishal M Patel. Image de-raining using a conditional generative adversarial network. arXiv preprint arXiv:1701.05957, 2017. 2
