pith. machine review for the scientific record.

arxiv: 2604.03198 · v1 · submitted 2026-04-03 · 💻 cs.CV

Recognition: no theorem link

The Eleventh NTIRE 2026 Efficient Super-Resolution Challenge Report

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 20:42 UTC · model grok-4.3

classification 💻 cs.CV
keywords efficient super-resolution · single-image super-resolution · NTIRE challenge · PSNR evaluation · runtime reduction · parameter efficiency · FLOPs minimization · image upscaling

The pith

Fifteen teams produced valid efficient super-resolution submissions that reach about 26.90 dB PSNR on the validation set and 26.99 dB on the test set while cutting runtime, parameter count, or FLOPs.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper reports results from the NTIRE 2026 challenge on efficient single-image super-resolution. The task required networks that preserve specified PSNR levels on the DIV2K_LSDIR datasets while improving at least one efficiency measure. Ninety-five participants registered, and fifteen teams delivered qualifying entries that met the quality targets at reduced cost. A reader would care because these outcomes indicate that high-quality image upscaling is becoming practical for devices with limited compute and memory.

Core claim

The challenge shows that it is possible to design single-image super-resolution networks achieving around 26.90 dB PSNR on the DIV2K_LSDIR validation set and 26.99 dB on the test set while simultaneously lowering runtime, parameter count, or FLOPs relative to prior solutions.

What carries the argument

The challenge evaluation protocol, which enforces minimum PSNR thresholds on the DIV2K_LSDIR datasets and ranks submissions by measured runtime, parameter count, and FLOPs.
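To make the scoring concrete, here is a minimal Python sketch of how such a protocol could rank entries; the field names, the single 26.90 dB floor, and the runtime-first tie-breaking order are illustrative assumptions, not the organizers' actual harness.

    # Minimal sketch of the ranking protocol described above; names and
    # thresholds are illustrative, not the challenge's actual harness.
    import math

    def psnr_db(mse: float, peak: float = 255.0) -> float:
        # Peak signal-to-noise ratio in dB from a mean-squared error.
        return 10.0 * math.log10(peak ** 2 / mse)

    def qualifies(entry: dict, psnr_floor: float = 26.90) -> bool:
        # An entry must hold the PSNR floor on the validation split.
        return entry["psnr_valid_db"] >= psnr_floor

    def rank_key(entry: dict) -> tuple:
        # Qualifying entries are then compared on measured efficiency:
        # runtime first, then parameters, then FLOPs (one possible order).
        return (entry["runtime_ms"], entry["params_m"], entry["flops_g"])

    submissions = [
        {"team": "A", "psnr_valid_db": 26.91, "runtime_ms": 14.2,
         "params_m": 0.21, "flops_g": 13.0},
        {"team": "B", "psnr_valid_db": 26.88, "runtime_ms": 9.8,
         "params_m": 0.15, "flops_g": 10.1},
    ]
    ranked = sorted((s for s in submissions if qualifies(s)), key=rank_key)
    print([s["team"] for s in ranked])  # team B misses the floor -> ['A']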

If this is right

  • Super-resolution can now be deployed on edge devices while staying close to the quality of heavier models.
  • The submitted architectures provide concrete baselines for trading minimal quality loss for large efficiency improvements.
  • Future iterations of the challenge can use the current best entries as starting points to push efficiency further.
  • Methods that succeed here are likely to generalize to other low-level vision tasks that also balance quality against compute.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • These efficiency-focused networks could enable real-time upscaling inside cameras or phones without sending data to the cloud.
  • If the same evaluation style is applied to related tasks such as denoising, similar gains in practical performance may appear.
  • The specific dataset and metric choices may need periodic updates to keep pace with changing image distributions in the wild.

Load-bearing premise

That the selected PSNR targets and the DIV2K_LSDIR datasets adequately represent real-world image-quality needs, and that the efficiency numbers measured in the challenge will translate into deployment gains.

What would settle it

Running the submitted networks on mobile hardware with diverse real-world images: if quality fell below acceptable levels or the efficiency gains disappeared, the practical value of the reported results would be falsified.

Figures

Figures reproduced from arXiv: 2604.03198 by Abdelhak Bentaleb, Bin Ren, Chao Ren, Chen Wu, Fahad Shahbaz Khan, Fengguo Li, Fulin Liu, Guanglu Dong, Guofeng Mei, Hang Guo, Haobo Li, Hongbo Fang, Hongying Liu, Hongyuan Yu, Hui Deng, Jiaojiao Yi, Jiaqi Ma, Jie Liu, Jing Hu, Jing Wei, Jingyuan Xia, Jinlin Wu, Keji He, Keye Cao, Lei Sun, Lingxuan Li, Lin Si, Long Peng, Manish Prasad, Matin Fazel, Mengyang Wang, Mingyang Li, Moncef Gabbouj, Pan Gao, Pufan Xu, Qian Wang, Qingliang Liu, Radu Timofte, Rui Chen, Rui Ding, Rui Zheng, Salman Khan, Sen Yang, Shubham Sharma, Shuhong Liu, Shurui Shi, Siyang Yi, Supavadee Aramvith, Tingyi Zhang, Watchara Ruangsang, Wenkai Min, Xing Mou, Xuan Zhang, Yang Cheng, Yan Shu, Yawei Li, Yecheng Lei, Yuning Cui, Zheng Yang, Zitao Dai, Ziteng Cui, Zongang Gao, Zongwei Wu.

Figure 1
Team XiaomiMM: SPANV2 overall architecture. The near-pixel branch (top) provides a pixel-repeat upsampling prior, while five SPABV2 blocks (bottom) extract deep features. The two paths are concatenated (80 channels) and fused by a depthwise-separable convolution before PixelShuffle ×4.
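A minimal PyTorch sketch of this fusion path may help fix the dataflow. The five-block depth, the 80-channel concatenation, the depthwise-separable fusion, and the PixelShuffle ×4 tail follow the caption; everything inside the stand-in blocks, and the class names, are assumptions rather than SPANV2's actual layers.

    # Sketch of the caption's two-path fusion; SPANV2Sketch and its block
    # internals are stand-ins, only the overall wiring follows the caption.
    import torch
    import torch.nn as nn

    class DepthwiseSeparableConv(nn.Module):
        def __init__(self, c_in, c_out):
            super().__init__()
            self.dw = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in)  # depthwise
            self.pw = nn.Conv2d(c_in, c_out, 1)                         # pointwise
        def forward(self, x):
            return self.pw(self.dw(x))

    class SPANV2Sketch(nn.Module):
        def __init__(self, feat=40, blocks=5, scale=4):
            super().__init__()
            self.head = nn.Conv2d(3, feat, 3, padding=1)
            # Stand-in for the five SPABV2 blocks (deep-feature path).
            self.body = nn.Sequential(*[nn.Sequential(
                nn.Conv2d(feat, feat, 3, padding=1), nn.SiLU())
                for _ in range(blocks)])
            # Near-pixel path supplying a pixel-repeat-like prior.
            self.near = nn.Conv2d(3, feat, 3, padding=1)
            self.fuse = DepthwiseSeparableConv(2 * feat, 3 * scale * scale)
            self.up = nn.PixelShuffle(scale)
        def forward(self, x):
            deep = self.body(self.head(x))
            prior = self.near(x)
            return self.up(self.fuse(torch.cat([deep, prior], 1)))  # 80 ch

    print(SPANV2Sketch()(torch.randn(1, 3, 64, 64)).shape)  # (1, 3, 256, 256)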
Figure 2
Team XiaomiMM: The span attention op fuses the 1×1 attention convolution, element-wise addition, and element-wise multiplication into a single CUDA kernel, eliminating 3× redundant DRAM round-trips.
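The fused kernel itself is not reproduced here, but the three operations it replaces are easy to state as plain PyTorch; the sigmoid gating below is an assumption about how the attention map is applied, since the caption only names the fused ops.

    # Unfused reference for the three ops the caption says the CUDA kernel
    # fuses; each separate op below costs a DRAM read/write of the feature
    # map, which the fused kernel eliminates.
    import torch
    import torch.nn.functional as F

    def span_attn_reference(x, w_attn, b_attn, residual):
        attn = F.conv2d(x, w_attn, b_attn)   # 1x1 attention convolution
        attn = attn + residual               # element-wise addition
        return x * torch.sigmoid(attn)       # element-wise multiplication

    x = torch.randn(1, 40, 64, 64)
    w = torch.randn(40, 40, 1, 1)
    b = torch.zeros(40)
    print(span_attn_reference(x, w, b, torch.randn(1, 40, 64, 64)).shape)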
Figure 3
Team ZenoSR: The overall architecture of the Adaptive Calibration Network (ACN), leading to more effective multi-level feature utilization with negligible overhead. The proposed ACN consists of 4 ACBs with 18 feature channels and adopts a five-stage training strategy.
Figure 4
Team CUIT HTT: Overall architecture of the proposed MambaGate-SR. L_kd_edge is a Laplacian-based edge-distillation loss for enhancing high-frequency details, with λ1 = 0.2 and λ2 = 0.1. The model is trained on a mixed DIV2K+LSDIR dataset [63, 68] with random horizontal flips and rotations; the HR patch size is 256 × 256, the batch size is 16, and Adam is adopted with β1 = 0.9.
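A sketch of what a Laplacian-based edge-distillation term like the caption's L_kd_edge could look like; the exact kernel, the paired teacher, and which weight (λ1 or λ2) attaches to which term are not specified, so this is an assumed form.

    # Assumed form of a Laplacian edge-distillation loss: match the
    # high-frequency (Laplacian) response of student and teacher outputs.
    import torch
    import torch.nn.functional as F

    LAPLACIAN = torch.tensor([[0., 1., 0.],
                              [1., -4., 1.],
                              [0., 1., 0.]]).view(1, 1, 3, 3)

    def laplacian_edges(img):
        c = img.shape[1]
        k = LAPLACIAN.to(img).repeat(c, 1, 1, 1)   # one kernel per channel
        return F.conv2d(img, k, padding=1, groups=c)

    def edge_distill_loss(student_sr, teacher_sr):
        return F.l1_loss(laplacian_edges(student_sr),
                         laplacian_edges(teacher_sr))

    s, t = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    print(edge_distill_loss(s, t))  # enters the total loss via lambda1/lambda2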
Figure 5
Team HAESR: The architecture of HAESR. The DFM aggregates complementary local and contextual information, while the HSA further enhances the features through Meso-SA and Global-SA. Inspired by OmniSR [73] and HAT [74], the proposed spatial attention HSA adopts extended local sampling and structured global sampling to achieve efficient local-global feature modeling.
Figure 7
Team Sunflower: The network architecture of HFENet. After the multi-scale spatial features are enhanced by the MSLKA module, they are fed into the Entropy Attention (EA) module to further recalibrate the channel-wise information. To efficiently measure the information content of each channel, the EA module employs the differential entropy conditioned on a Gaussian distribution.
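The differential entropy of a Gaussian has the closed form h = ½ ln(2πeσ²), so per-channel information content reduces to a per-channel variance. A small sketch follows; the softmax mapping from entropy to channel weights is an assumption, not HFENet's actual EA module.

    # Channel recalibration from per-channel differential entropy under a
    # Gaussian assumption; the entropy-to-weight mapping is assumed.
    import math
    import torch

    def gaussian_differential_entropy(x, eps=1e-6):
        # h = 0.5 * ln(2*pi*e*var), per channel over spatial dimensions.
        var = x.var(dim=(2, 3), unbiased=False) + eps        # (N, C)
        return 0.5 * torch.log(2 * math.pi * math.e * var)

    def entropy_attention(x):
        h = gaussian_differential_entropy(x)                  # (N, C)
        w = torch.softmax(h, dim=1)[:, :, None, None]         # (N, C, 1, 1)
        return x * w                                          # recalibrate

    print(entropy_attention(torch.randn(2, 16, 32, 32)).shape)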
Figure 8
Team XuptSR: The whole framework of the Variance-Guided Spatial-Channel Context Interaction Network (VSCINet).
Figure 9
Team XuptSR: The details of each component. SCAB: Spatial Channel Attention Block; GSA: Gated Self-Attention; GDFN: Gated Depthwise Feed-Forward Network; GCAB: Global Channel Attention Block; VCSA: Variance-guided Contextual Spatial Attention; MSConv: Multi-scale Convolution; MSRB: Multi-scale Residual Block.
Figure 10
Team WMESR: Overview of DWMamba. Pretraining uses DIV2K [68] and the first 10K images of LSDIR [63]: HR patches of size 256 × 256 are randomly cropped, the mini-batch size is 64, and the model is trained by minimizing the L1 loss with the Adam optimizer, with the initial learning rate set to 3 × 10⁻³ and halved at {100k, 500k, 800k, …}.
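The quoted schedule maps directly onto a step scheduler; a sketch with a stand-in network, noting that the final milestone is truncated in the extraction and therefore omitted.

    # Sketch of the quoted recipe: L1 loss, Adam, lr 3e-3 halved at fixed
    # milestones (the final milestone is truncated above, so omitted here).
    import torch

    model = torch.nn.Conv2d(3, 3, 3, padding=1)   # stand-in, not DWMamba
    opt = torch.optim.Adam(model.parameters(), lr=3e-3)
    sched = torch.optim.lr_scheduler.MultiStepLR(
        opt, milestones=[100_000, 500_000, 800_000], gamma=0.5)
    l1 = torch.nn.L1Loss()

    for step in range(3):                          # toy loop
        lr_img = torch.randn(4, 3, 64, 64)
        hr_img = torch.randn(4, 3, 64, 64)         # shapes match the stand-in
        loss = l1(model(lr_img), hr_img)
        opt.zero_grad()
        loss.backward()
        opt.step()
        sched.step()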
Figure 11
Team VARH-AI: The proposed model architecture overview. For matrix-shaped weights (≥ 2D), the optimizer applies a decoupled Newton-Schulz orthogonalization update (Muon) over 5 iterative steps: A = X_k X_k^T, X_{k+1} = a X_k + (b A + c A²) X_k (Eq. 14), where a = 3.4445, b = −4.7750, c = 2.0315. Parameters are then updated with scaled Nesterov momentum.
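The Newton-Schulz iteration in Eq. (14) transcribes directly; the sketch below uses the quoted coefficients, while the initial Frobenius normalization and transpose handling are common Muon conventions assumed here, and the truncated momentum step is omitted.

    # Newton-Schulz orthogonalization per Eq. (14): A = X_k X_k^T,
    # X_{k+1} = a X_k + (b A + c A^2) X_k, run for 5 steps.
    import torch

    def newton_schulz_orthogonalize(g, steps=5,
                                    a=3.4445, b=-4.7750, c=2.0315):
        x = g / (g.norm() + 1e-7)      # normalize so the iteration converges
        transposed = x.shape[0] > x.shape[1]
        if transposed:                 # work with the short side as rows
            x = x.T
        for _ in range(steps):
            a_mat = x @ x.T
            x = a * x + (b * a_mat + c * (a_mat @ a_mat)) @ x
        return x.T if transposed else x

    w = newton_schulz_orthogonalize(torch.randn(64, 128))
    print((w @ w.T).diagonal().mean())  # close to 1: rows near-orthonormal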
Figure 12
Team XSR: Network architecture of the proposed AMCANet. To guide multi-scale extraction, A_s is downsampled (stride 1 for i = 0, stride 4i for i > 0), processed by a convolutional group F (Eq. 17), and upsampled; a residual connection yields the refined feature A_m. Finally, an attention map is generated and applied to V_s: H_i = Sigmoid(Conv_{1×1}(A_m)) ⊙ V_s (Eq. 18).
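Eq. (18) amounts to a sigmoid gate over the value feature; a literal rendering, with the 32-channel width picked arbitrarily for the demo.

    # H_i = Sigmoid(Conv_1x1(A_m)) ⊙ V_s, as in Eq. (18).
    import torch
    import torch.nn as nn

    conv1x1 = nn.Conv2d(32, 32, kernel_size=1)

    def amca_gate(a_m, v_s):
        return torch.sigmoid(conv1x1(a_m)) * v_s  # element-wise gating

    a_m = torch.randn(1, 32, 48, 48)
    v_s = torch.randn(1, 32, 48, 48)
    print(amca_gate(a_m, v_s).shape)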
Figure 13
Team Just Try: Overall structure of ERRN2. Training combines an affinity loss [84] and L1 loss; the initial learning rate was set to 1 × 10⁻⁴ with cosine annealing over 100 epochs. For student-network fine-tuning, 512 × 512 HR patches were randomly cropped and the student was fine-tuned with a batch size of 32.
Figure 14
Team MDAP: SAFMN-Deep15 overall architecture. The deep feature extraction path consists of 15 sequential SAFM blocks (utilizing multi-scale spatial pooling and GELU gating). The output is refined and added via a global residual before being finally upsampled by PixelShuffle (×4). Loss function: Charbonnier loss + spatial Laplacian frequency loss (λ = 0.05); optimizer: AdamW (β1 = 0.9, β2 = 0.999).
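Charbonnier loss has a standard form; the spatial Laplacian frequency term is not defined in the caption, so the sketch below assumes an L1 difference of Laplacian responses, weighted by the quoted λ = 0.05.

    # Charbonnier + assumed Laplacian-frequency term with lambda = 0.05.
    import torch
    import torch.nn.functional as F

    def charbonnier(pred, target, eps=1e-3):
        return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

    def laplacian(img):
        k = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]],
                         device=img.device).view(1, 1, 3, 3)
        k = k.repeat(img.shape[1], 1, 1, 1)
        return F.conv2d(img, k, padding=1, groups=img.shape[1])

    def total_loss(pred, target, lam=0.05):
        freq = F.l1_loss(laplacian(pred), laplacian(target))
        return charbonnier(pred, target) + lam * freq

    print(total_loss(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)))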
read the original abstract

This paper reviews the NTIRE 2026 challenge on efficient single-image super-resolution with a focus on the proposed solutions and results. The aim of this challenge is to devise a network that reduces one or several aspects, such as runtime, parameters, and FLOPs, while maintaining PSNR of around 26.90 dB on the DIV2K_LSDIR_valid dataset, and 26.99 dB on the DIV2K_LSDIR_test dataset. The challenge had 95 registered participants, and 15 teams made valid submissions. They gauge the state-of-the-art results for efficient single-image super-resolution.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

0 major / 2 minor

Summary. The manuscript reports on the Eleventh NTIRE 2026 Efficient Super-Resolution Challenge. It states that the challenge attracted 95 registered participants, of whom 15 teams submitted valid entries. The objective was to produce networks that achieve PSNR values of approximately 26.90 dB on DIV2K_LSDIR_valid and 26.99 dB on DIV2K_LSDIR_test while reducing at least one of runtime, parameter count, or FLOPs. The paper presents the submitted solutions and final results to document the current state of the art in efficient single-image super-resolution.

Significance. If the reported PSNR and efficiency numbers are confirmed by the organizers' verification process, the report supplies a concrete, up-to-date benchmark for the efficiency-quality trade-off in single-image super-resolution. Such benchmarks are useful for guiding subsequent research toward deployable models on resource-constrained platforms.

minor comments (2)
  1. [Abstract and Introduction] The abstract and introduction would benefit from a brief table or bullet list summarizing the top three submissions by each efficiency metric (runtime, parameters, FLOPs) to give readers an immediate overview of the achieved gains.
  2. [Challenge Setup] The description of the evaluation protocol should explicitly state the hardware platform, batch size, and input resolution used for runtime and FLOP measurements so that future comparisons remain reproducible; one possible measurement harness is sketched below.
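For reference, one way to pin the runtime half of that protocol down, assuming a PyTorch model and GPU timing; the warm-up count, iteration count, and 256×256 input are illustrative choices, not the challenge's settings.

    # Illustrative runtime harness: fixed input size, warm-up, and CUDA
    # synchronization so queued kernels are not mistaken for fast ones.
    import time
    import torch

    def measure_runtime_ms(model, input_size=(1, 3, 256, 256),
                           warmup=10, iters=100):
        device = "cuda" if torch.cuda.is_available() else "cpu"
        model = model.to(device).eval()
        x = torch.randn(*input_size, device=device)
        with torch.no_grad():
            for _ in range(warmup):        # trigger autotuning/caching
                model(x)
            if device == "cuda":
                torch.cuda.synchronize()
            t0 = time.perf_counter()
            for _ in range(iters):
                model(x)
            if device == "cuda":
                torch.cuda.synchronize()
        return (time.perf_counter() - t0) * 1000.0 / iters

    print(measure_runtime_ms(torch.nn.Conv2d(3, 3, 3, padding=1)))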

Simulated Authors' Rebuttal

0 responses · 0 unresolved

We thank the referee for the positive review and the recommendation to accept the manuscript. The report documents the outcomes of the NTIRE 2026 Efficient Super-Resolution Challenge, including the 15 valid submissions that achieve the target PSNR while improving efficiency metrics.

Circularity Check

0 steps flagged

No derivation or modeling chain present; purely descriptive challenge report

full rationale

The manuscript is a standard challenge report that records the outcomes of 15 external team submissions meeting fixed PSNR targets (≈26.90 dB valid, 26.99 dB test) on the DIV2K_LSDIR datasets while improving at least one efficiency metric. No equations, predictions, ansatzes, uniqueness theorems, or first-principles derivations are advanced; the text simply tabulates submitted results and states the challenge rules. Because no load-bearing step reduces to a fitted input, self-citation, or renamed empirical pattern, the circularity score is 0.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

The report contains no mathematical derivations, fitted parameters, or postulated entities; it rests only on the pre-existing challenge rules and the submitted participant outputs.

pith-pipeline@v0.9.0 · 5638 in / 984 out tokens · 27112 ms · 2026-05-13T20:42:53.770678+00:00 · methodology

discussion (0)


Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. CLIP-Guided Data Augmentation for Night-Time Image Dehazing

    cs.CV 2026-04 unverdicted novelty 5.0

    CLIP-guided selection of external data plus staged NAFNet training and inference fusion provides an effective pipeline for nighttime image dehazing in the NTIRE 2026 challenge.

  2. Training-Free Model Ensemble for Single-Image Super-Resolution via Strong-Branch Compensation

    cs.CV 2026-04 unverdicted novelty 4.0

    A dual-branch training-free ensemble fuses a hybrid attention network with a Mamba-based model via weighted combination to enhance super-resolution PSNR on DIV2K x4.

  3. Dual-Branch Remote Sensing Infrared Image Super-Resolution

    cs.CV 2026-04 unverdicted novelty 4.0

    Dual-branch fusion of HAT-L and MambaIRv2-L with eight-way ensemble and equal-weight averaging outperforms single branches on PSNR, SSIM, and challenge score for infrared super-resolution.

  4. Beyond Model Design: Data-Centric Training and Self-Ensemble for Gaussian Color Image Denoising

    cs.CV 2026-04 unverdicted novelty 3.0

    Expanding training data diversity, adopting two-stage optimization, and applying geometric self-ensemble raises Restormer performance on Gaussian color denoising at sigma=50 by 3.366 dB PSNR on the NTIRE 2026 validation set.

Reference graph

Works this paper leans on

87 extracted references · 87 canonical work pages · cited by 4 Pith papers · 1 internal anchor

  1. [1]

    Mat: Multi-range attention transformer for efficient image super-resolution

    Chengxing Xie, Xiaoming Zhang, Linze Li, Yuqian Fu, Biao Gong, Tianrui Li, and Kai Zhang. Mat: Multi-range attention transformer for efficient image super-resolution. TCSVT.

  2. [2]

    Perceive-ir: Learning to perceive degradation better for all-in-one image restoration

    Xu Zhang, Jiaqi Ma, Guoli Wang, Qian Zhang, Huan Zhang, and Lefei Zhang. Perceive-ir: Learning to perceive degradation better for all-in-one image restoration. TIP, 2025.

  3. [3]

    Terrascope: Pixel-grounded visual reasoning for earth observation

    Yan Shu, Bin Ren, Zhitong Xiong, Xiao Xiang Zhu, Begüm Demir, Nicu Sebe, and Paolo Rota. Terrascope: Pixel-grounded visual reasoning for earth observation. In CVPR, 2026.

  4. [4]

    Sharing key semantics in transformer makes efficient image restoration

    Bin Ren, Yawei Li, Jingyun Liang, Rakesh Ranjan, Mengyuan Liu, Rita Cucchiara, Luc Van Gool, Ming-Hsuan Yang, and Nicu Sebe. Sharing key semantics in transformer makes efficient image restoration. NeurIPS, 37:7427–7463, 2024.

  5. [5]

    Degradation-aware residual-conditioned optimal transport for unified image restoration

    Xiaole Tang, Xiang Gu, Xiaoyi He, Xin Hu, and Jian Sun. Degradation-aware residual-conditioned optimal transport for unified image restoration. TPAMI, 2025.

  6. [6]

    Langhops: Language grounded hierarchical open-vocabulary part segmentation

    Yang Miao, Jan-Nico Zaech, Xi Wang, Fabien Despinoy, Danda Pani Paudel, and Luc Van Gool. Langhops: Language grounded hierarchical open-vocabulary part segmentation. In NeurIPS, 2025.

  7. [7]

    Star: Spatial-temporal augmentation with text-to-video models for real-world video super-resolution

    Rui Xie, Yinhong Liu, Penghao Zhou, Chen Zhao, Jun Zhou, Kai Zhang, Zhenyu Zhang, Jian Yang, Zhenheng Yang, and Ying Tai. Star: Spatial-temporal augmentation with text-to-video models for real-world video super-resolution. In ICCV, pages 17108–17118, 2025.

  8. [8]

    Multimodal multi-head convolutional attention with various kernel sizes for medical image super-resolution

    Mariana-Iuliana Georgescu, Radu Tudor Ionescu, Andreea-Iuliana Miron, Olivian Savencu, Nicolae-Cătălin Ristea, Nicolae Verga, and Fahad Shahbaz Khan. Multimodal multi-head convolutional attention with various kernel sizes for medical image super-resolution. In CVPR, pages 2195–2205, 2023.

  9. [9]

    Frequency-assisted mamba for remote sensing image super-resolution

    Yi Xiao, Qiangqiang Yuan, Kui Jiang, Yuzeng Chen, Qiang Zhang, and Chia-Wen Lin. Frequency-assisted mamba for remote sensing image super-resolution. TMM, 27:1783–1796, 2024.

  10. [10]

    Vision mamba: Efficient visual representation learning with bidirectional state space model

    Lianghui Zhu, Bencheng Liao, Qian Zhang, Xinlong Wang, Wenyu Liu, and Xinggang Wang. Vision mamba: Efficient visual representation learning with bidirectional state space model. In ICML, 2024.

  11. [11]

    Rethinking the value of network pruning

    Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. In ICLR, 2019.

  12. [12]

    Open-world deepfake attribution via confidence-aware asymmetric learning

    Haiyang Zheng, Nan Pu, Wenjing Li, Teng Long, Nicu Sebe, and Zhun Zhong. Open-world deepfake attribution via confidence-aware asymmetric learning. In AAAI, volume 40, pages 13378–13386, 2026.

  13. [13]

    Distilling efficient vision transformers from cnns for semantic segmentation

    Xu Zheng, Yunhao Luo, Pengyuan Zhou, and Lin Wang. Distilling efficient vision transformers from cnns for semantic segmentation. PR, 158:111029, 2025.

  14. [14]

    Masked jigsaw puzzle: A versatile position embedding for vision transformers

    Bin Ren, Yahui Liu, Yue Song, Wei Bi, Rita Cucchiara, Nicu Sebe, and Wei Wang. Masked jigsaw puzzle: A versatile position embedding for vision transformers. In CVPR, pages 20382–20391, 2023.

  15. [15]

    On compressing deep models by low rank and sparse decomposition

    Xiyu Yu, Tongliang Liu, Xinchao Wang, and Dacheng Tao. On compressing deep models by low rank and sparse decomposition. In CVPR, pages 7370–7379, 2017.

  16. [16]

    Shapesplat: A large-scale dataset of gaussian splats and their self-supervised pretraining

    Qi Ma, Yue Li, Bin Ren, Nicu Sebe, Ender Konukoglu, Theo Gevers, Luc Van Gool, and Danda Pani Paudel. Shapesplat: A large-scale dataset of gaussian splats and their self-supervised pretraining. In 3DV, 2025.

  17. [17]

    Denoising diffusion probabilistic models for action-conditioned 3d motion generation

    Mengyi Zhao, Mengyuan Liu, Bin Ren, Shuling Dai, and Nicu Sebe. Denoising diffusion probabilistic models for action-conditioned 3d motion generation. In ICASSP, pages 4225–4229. IEEE, 2024.

  18. [18]

    Onestory: Coherent multi-shot video generation with adaptive memory

    Zhaochong An, Menglin Jia, Haonan Qiu, Zijian Zhou, Xiaoke Huang, Zhiheng Liu, Weiming Ren, Kumara Kahatapitiya, Ding Liu, Sen He, et al. Onestory: Coherent multi-shot video generation with adaptive memory. CVPR, 2026.

  19. [19]

    Chorus: Multi-teacher pretraining for holistic 3d gaussian scene encoding

    Yue Li, Qi Ma, Runyi Yang, Mengjiao Ma, Bin Ren, Nikola Popovic, Nicu Sebe, Theo Gevers, Luc Van Gool, Danda Pani Paudel, and Martin R Oswald. Chorus: Multi-teacher pretraining for holistic 3d gaussian scene encoding. In CVPR, 2026.

  20. [20]

    Inverse virtual try-on: Generating multi-category product-style images from clothed individuals

    Davide Lobba, Fulvio Sanguigni, Bin Ren, Marcella Cornia, Rita Cucchiara, and Nicu Sebe. Inverse virtual try-on: Generating multi-category product-style images from clothed individuals. ICLR, 2026.

  21. [21]

    Aim 2025 challenge on inverse tone mapping report: Methods and results

    Chao Wang, Francesco Banterle, Bin Ren, Radu Timofte, Xin Lu, Yufeng Peng, Chengjie Ge, Zhijing Sun, Ziang Zhou, Zihao Li, et al. Aim 2025 challenge on inverse tone mapping report: Methods and results. In ICCVW, pages 5571–5584, 2025.

  22. [22]

    Swift parameter-free attention network for efficient super-resolution

    Cheng Wan, Hongyuan Yu, Zhiqi Li, Yihang Chen, Yajun Zou, Yuqing Liu, Xuanwu Yin, and Kunlong Zuo. Swift parameter-free attention network for efficient super-resolution. In CVPR, pages 6246–6256, 2024.

  23. [23]

    Robust Deepfake Detection, NTIRE 2026 Challenge: Report

    Benedikt Hopf, Radu Timofte, et al. Robust Deepfake Detection, NTIRE 2026 Challenge: Report. In CVPR Workshops, 2026.

  24. [24]

    NTIRE 2026 Challenge on High-Resolution Depth of non-Lambertian Surfaces

    Pierluigi Zama Ramirez, Fabio Tosi, Luigi Di Stefano, Radu Timofte, Alex Costanzino, Matteo Poggi, Samuele Salti, Stefano Mattoccia, et al. NTIRE 2026 Challenge on High-Resolution Depth of non-Lambertian Surfaces. In CVPR Workshops, 2026.

  25. [25]

    NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Multi-Exposure Image Fusion in Dynamic Scenes (Track 2)

    Lishen Qu, Yao Liu, Jie Liang, Hui Zeng, Wen Dai, Ya-nan Guan, Guanyi Qin, Shihao Zhou, Jufeng Yang, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Multi-Exposure Image Fusion in Dynamic Scenes (Track 2). In CVPR Workshops, 2026.

  26. [26]

    NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: AI Flash Portrait (Track 3)

    Ya-nan Guan, Shaonan Zhang, Hang Guo, Yawen Wang, Xinying Fan, Jie Liang, Hui Zeng, Guanyi Qin, Lishen Qu, Tao Dai, Shu-Tao Xia, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: AI Flash Portrait (Track 3). In CVPR Workshops, 2026.

  27. [27]

    NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Professional Image Quality Assessment (Track 1)

    Guanyi Qin, Jie Liang, Bingbing Zhang, Lishen Qu, Ya-nan Guan, Hui Zeng, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Professional Image Quality Assessment (Track 1). In CVPR Workshops, 2026.

  28. [28]

    NTIRE 2026 Challenge on Light Field Image Super-Resolution: Methods and Results

    Yingqian Wang, Zhengyu Liang, Fengyuan Zhang, Wending Zhao, Longguang Wang, Juncheng Li, Jungang Yang, Radu Timofte, Yulan Guo, et al. NTIRE 2026 Challenge on Light Field Image Super-Resolution: Methods and Results. In CVPR Workshops, 2026.

  29. [29]

    NTIRE 2026 Challenge on 3D Content Super-Resolution: Methods and Results

    Longguang Wang, Yulan Guo, Yingqian Wang, Juncheng Li, Sida Peng, Ye Zhang, Radu Timofte, Minglin Chen, Yi Wang, Qibin Hu, Wenjie Lei, et al. NTIRE 2026 Challenge on 3D Content Super-Resolution: Methods and Results. In CVPR Workshops, 2026.

  30. [30]

    NTIRE 2026 Challenge on Bitstream-Corrupted Video Restoration: Methods and Results

    Wenbin Zou, Tianyi Liu, Kejun Wu, Huiping Zhuang, Zongwei Wu, Zhuyun Zhou, Radu Timofte, et al. NTIRE 2026 Challenge on Bitstream-Corrupted Video Restoration: Methods and Results. In CVPR Workshops, 2026.

  31. [31]

    NTIRE 2026 X-AIGC Quality Assessment Challenge: Methods and Results

    Xiaohong Liu, Xiongkuo Min, Guangtao Zhai, Qiang Hu, Jiezhang Cao, Yu Zhou, Wei Sun, Farong Wen, Zitong Xu, Yingjie Zhou, Huiyu Duan, Lu Liu, Jiarui Wang, Siqi Luo, Chunyi Li, Li Xu, Zicheng Zhang, Yue Shi, Yubo Wang, Minghong Zhang, Chunchao Guo, Zhichao Hu, Mingtao Chen, Xiele Wu, Xin Ma, Zhaohe Lv, Yuanhao Xue, Jiaqi Wang, Xinxing Sha, Radu Timofte, et al. NTIRE 2026 X-AIGC Quality Assessment Challenge: Methods and Results. In CVPR Workshops, 2026.

  32. [32]

    Advances in Single-Image Shadow Removal: Results from the NTIRE 2026 Challenge

    Florin-Alexandru Vasluianu, Tim Seizinger, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. Advances in Single-Image Shadow Removal: Results from the NTIRE 2026 Challenge. In CVPR Workshops, 2026.

  33. [33]

    Learning-Based Ambient Lighting Normalization: NTIRE 2026 Challenge Results and Findings

    Florin-Alexandru Vasluianu, Tim Seizinger, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. Learning-Based Ambient Lighting Normalization: NTIRE 2026 Challenge Results and Findings. In CVPR Workshops, 2026.

  34. [34]

    The First Controllable Bokeh Rendering Challenge at NTIRE 2026

    Tim Seizinger, Florin-Alexandru Vasluianu, Marcos V. Conde, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. The First Controllable Bokeh Rendering Challenge at NTIRE 2026. In CVPR Workshops, 2026.

  35. [35]

    NTIRE 2026 Rip Current Detection and Segmentation (RipDetSeg) Challenge Report

    Andrei Dumitriu, Aakash Ralhan, Florin Miron, Florin Tatui, Radu Tudor Ionescu, Radu Timofte, et al. NTIRE 2026 Rip Current Detection and Segmentation (RipDetSeg) Challenge Report. In CVPR Workshops, 2026.

  36. [36]

    Low Light Image Enhancement Challenge at NTIRE 2026

    George Ciubotariu, Sharif S M A, Abdur Rehman, Fayaz Ali, Rizwan Ali Naqvi, Marcos Conde, Radu Timofte, et al. Low Light Image Enhancement Challenge at NTIRE 2026. In CVPR Workshops, 2026.

  37. [37]

    High FPS Video Frame Interpolation Challenge at NTIRE 2026

    George Ciubotariu, Zhuyun Zhou, Yeying Jin, Zongwei Wu, Radu Timofte, et al. High FPS Video Frame Interpolation Challenge at NTIRE 2026. In CVPR Workshops, 2026.

  38. [38]

    NT-HAZE: A Benchmark Dataset for Realistic Night-time Image Dehazing

    Radu Ancuti, Codruta Ancuti, Radu Timofte, and Cosmin Ancuti. NT-HAZE: A Benchmark Dataset for Realistic Night-time Image Dehazing. In CVPR Workshops, 2026.

  39. [39]

    NTIRE 2026 Nighttime Image Dehazing Challenge Report

    Radu Ancuti, Alexandru Brateanu, Florin Vasluianu, Raul Balmez, Ciprian Orhei, Codruta Ancuti, Radu Timofte, Cosmin Ancuti, et al. NTIRE 2026 Nighttime Image Dehazing Challenge Report. In CVPR Workshops, 2026.

  40. [40]

    NTIRE 2026 Challenge on Learned Smartphone ISP with Unpaired Data: Methods and Results

    Georgy Perevozchikov, Daniil Vladimirov, Radu Timofte, et al. NTIRE 2026 Challenge on Learned Smartphone ISP with Unpaired Data: Methods and Results. In CVPR Workshops, 2026.

  41. [41]

    NTIRE 2026 Challenge on Short-form UGC Video Restoration in the Wild with Generative Models: Datasets, Methods and Results

    Xin Li, Jiachao Gong, Xijun Wang, Shiyao Xiong, Bingchen Li, Suhang Yao, Chao Zhou, Zhibo Chen, Radu Timofte, et al. NTIRE 2026 Challenge on Short-form UGC Video Restoration in the Wild with Generative Models: Datasets, Methods and Results. In CVPR Workshops, 2026.

  42. [42]

    NTIRE 2026 The Second Challenge on Day and Night Raindrop Removal for Dual-Focused Images: Methods and Results

    Xin Li, Yeying Jin, Suhang Yao, Beibei Lin, Zhaoxin Fan, Wending Yan, Xin Jin, Zongwei Wu, Bingchen Li, Peishu Shi, Yufei Yang, Yu Li, Zhibo Chen, Bihan Wen, Robby Tan, Radu Timofte, et al. NTIRE 2026 The Second Challenge on Day and Night Raindrop Removal for Dual-Focused Images: Methods and Results. In CVPR Workshops, 2026.

  43. [43]

    The Fourth Challenge on Image Super-Resolution (×4) at NTIRE 2026: Benchmark Results and Method Overview

    Zheng Chen, Kai Liu, Jingkai Wang, Xianglong Yan, Jianze Li, Ziqing Zhang, Jue Gong, Jiatong Li, Lei Sun, Xiaoyang Liu, Radu Timofte, Yulun Zhang, et al. The Fourth Challenge on Image Super-Resolution (×4) at NTIRE 2026: Benchmark Results and Method Overview. In CVPR Workshops, 2026.

  44. [44]

    Photography Retouching Transfer, NTIRE 2026 Challenge: Report

    Omar Elezabi, Marcos V. Conde, Zongwei Wu, Yeying Jin, Radu Timofte, et al. Photography Retouching Transfer, NTIRE 2026 Challenge: Report. In CVPR Workshops, 2026.

  45. [45]

    The First Challenge on Mobile Real-World Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview

    Jiatong Li, Zheng Chen, Kai Liu, Jingkai Wang, Zihan Zhou, Xiaoyang Liu, Libo Zhu, Radu Timofte, Yulun Zhang, et al. The First Challenge on Mobile Real-World Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview. In CVPR Workshops, 2026.

  46. [46]

    The First Challenge on Remote Sensing Infrared Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview

    Kai Liu, Haoyang Yue, Zeli Lin, Zheng Chen, Jingkai Wang, Jue Gong, Radu Timofte, Yulun Zhang, et al. The First Challenge on Remote Sensing Infrared Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview. In CVPR Workshops, 2026.

  47. [47]

    NTIRE 2026 Challenge on Robust AI-Generated Image Detection in the Wild

    Aleksandr Gushchin, Khaled Abud, Ekaterina Shumitskaya, Artem Filippov, Georgii Bychkov, Sergey Lavrushkin, Mikhail Erofeev, Anastasia Antsiferova, Changsheng Chen, Shunquan Tan, Radu Timofte, Dmitriy Vatolin, et al. NTIRE 2026 Challenge on Robust AI-Generated Image Detection in the Wild. In CVPR Workshops, 2026.

  48. [48]

    The Second Challenge on Cross-Domain Few-Shot Object Detection at NTIRE 2026: Methods and Results

    Xingyu Qiu, Yuqian Fu, Jiawei Geng, Bin Ren, Jiancheng Pan, Zongwei Wu, Hao Tang, Yanwei Fu, Radu Timofte, Nicu Sebe, Mohamed Elhoseiny, et al. The Second Challenge on Cross-Domain Few-Shot Object Detection at NTIRE 2026: Methods and Results. In CVPR Workshops, 2026.

  49. [49]

    NTIRE 2026 Challenge on End-to-End Financial Receipt Restoration and Reasoning from Degraded Images: Datasets, Methods and Results

    Bochen Guan, Jinlong Li, Kangning Yang, Chuang Ke, Jie Cai, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 Challenge on End-to-End Financial Receipt Restoration and Reasoning from Degraded Images: Datasets, Methods and Results. In CVPR Workshops, 2026.

  50. [50]

    The Second Challenge on Real-World Face Restoration at NTIRE 2026: Methods and Results

    Jingkai Wang, Jue Gong, Zheng Chen, Kai Liu, Jiatong Li, Yulun Zhang, Radu Timofte, et al. The Second Challenge on Real-World Face Restoration at NTIRE 2026: Methods and Results. In CVPR Workshops, 2026.

  51. [51]

    NTIRE 2026 Challenge on Single Image Reflection Removal in the Wild: Datasets, Results, and Methods

    Jie Cai, Kangning Yang, Zhiyuan Li, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 Challenge on Single Image Reflection Removal in the Wild: Datasets, Results, and Methods. In CVPR Workshops, 2026.

  52. [52]

    NTIRE 2026 Challenge Report on Anomaly Detection of Face Enhancement for UGC Images

    Yan Zhong, Qiufang Ma, Zhen Wang, Tingting Jiang, Radu Timofte, et al. NTIRE 2026 Challenge Report on Anomaly Detection of Face Enhancement for UGC Images. In CVPR Workshops, 2026.

  53. [53]

    NTIRE 2026 Challenge on Video Saliency Prediction: Methods and Results

    Andrey Moskalenko, Alexey Bryncev, Ivan Kosmynin, Kira Shilovskaya, Mikhail Erofeev, Dmitry Vatolin, Radu Timofte, et al. NTIRE 2026 Challenge on Video Saliency Prediction: Methods and Results. In CVPR Workshops, 2026.

  54. [54]

    The Eleventh NTIRE 2026 Efficient Super-Resolution Challenge Report

    Bin Ren, Hang Guo, Yan Shu, Jiaqi Ma, Ziteng Cui, Shuhong Liu, Guofeng Mei, Lei Sun, Zongwei Wu, Fahad Shahbaz Khan, Salman Khan, Radu Timofte, Yawei Li, et al. The Eleventh NTIRE 2026 Efficient Super-Resolution Challenge Report. In CVPR Workshops, 2026.

  55. [55]

    3D Restoration and Reconstruction in Adverse Conditions: RealX3D Challenge Results

    Shuhong Liu, Ziteng Cui, Chenyu Bao, Xuangeng Chu, Lin Gu, Bin Ren, Radu Timofte, Marcos V. Conde, et al. 3D Restoration and Reconstruction in Adverse Conditions: RealX3D Challenge Results. In CVPR Workshops, 2026.

  56. [56]

    The Third Challenge on Image Denoising at NTIRE 2026: Methods and Results

    Lei Sun, Hang Guo, Bin Ren, Shaolin Su, Xian Wang, Danda Pani Paudel, Luc Van Gool, Radu Timofte, Yawei Li, et al. The Third Challenge on Image Denoising at NTIRE 2026: Methods and Results. In CVPR Workshops, 2026.

  57. [57]

    NTIRE 2026 The First Challenge on Blind Computational Aberration Correction: Methods and Results

    Lei Sun, Xiaolong Qian, Qi Jiang, Xian Wang, Yao Gao, Kailun Yang, Kaiwei Wang, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. NTIRE 2026 The First Challenge on Blind Computational Aberration Correction: Methods and Results. In CVPR Workshops, 2026.

  58. [58]

    The Second Challenge on Event-Based Image Deblurring at NTIRE 2026: Methods and Results

    Lei Sun, Weilun Li, Xian Wang, Zhendong Li, Letian Shi, Dannong Xu, Deheng Zhang, Mengshun Hu, Shuang Guo, Shaolin Su, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. The Second Challenge on Event-Based Image Deblurring at NTIRE 2026: Methods and Results. In CVPR Workshops, 2026.

  59. [59]

    NTIRE 2026 Challenge on Efficient Burst HDR and Restoration: Datasets, Methods, and Results

    Hyunhee Park, Eunpil Park, Sangmin Lee, Radu Timofte, et al. NTIRE 2026 Challenge on Efficient Burst HDR and Restoration: Datasets, Methods, and Results. In CVPR Workshops, 2026.

  60. [60]

    NTIRE 2026 Low-light Enhancement: Twilight Cowboy Challenge

    Aleksei Khalin, Egor Ershov, Artem Panshin, Sergey Korchagin, Georgiy Lobarev, Arseniy Terekhin, Sofiia Dorogova, Amir Shamsutdinov, Yasin Mamedov, Bakhtiyar Khalfin, Bogdan Sheludko, Emil Zilyaev, Nikola Banić, Georgy Perevozchikov, Radu Timofte, et al. NTIRE 2026 Low-light Enhancement: Twilight Cowboy Challenge. In CVPR Workshops, 2026.

  61. [61]

    Efficient Low Light Image Enhancement: NTIRE 2026 Challenge Report

    Jiebin Yan, Chenyu Tu, Qinghua Lin, Zongwei Wu, Weixia Zhang, Zhihua Wang, Peibei Cao, Yuming Fang, Xiaoning Liu, Zhuyun Zhou, Radu Timofte, et al. Efficient Low Light Image Enhancement: NTIRE 2026 Challenge Report. In CVPR Workshops, 2026.

  62. [62]

    NTIRE 2017 challenge on single image super-resolution: Dataset and study

    Eirikur Agustsson and Radu Timofte. NTIRE 2017 challenge on single image super-resolution: Dataset and study. In CVPR Workshops, pages 126–135, 2017.

  63. [63]

    Lsdir: A large scale dataset for image restoration

    Yawei Li, Kai Zhang, Jingyun Liang, Jiezhang Cao, Ce Liu, Rui Gong, Yulun Zhang, Hao Tang, Yun Liu, Denis Demandolx, et al. Lsdir: A large scale dataset for image restoration. In CVPR Workshops, 2023.

  64. [64]

    The ninth ntire 2024 efficient super-resolution challenge report

    Bin Ren, Yawei Li, Nancy Mehta, Radu Timofte, Hongyuan Yu, Cheng Wan, Yuxin Hong, Bingnan Han, Zhuoyuan Wu, Yajun Zou, et al. The ninth ntire 2024 efficient super-resolution challenge report. In CVPR, pages 6595–6631, 2024.

  65. [65]

    The tenth ntire 2025 efficient super-resolution challenge report

    Bin Ren, Hang Guo, Lei Sun, Zongwei Wu, Radu Timofte, Yawei Li, et al. The tenth ntire 2025 efficient super-resolution challenge report. In CVPR, pages 917–966, 2025.

  66. [66]

    Swift parameter-free attention network for efficient super-resolution, 2024

    Cheng Wan, Hongyuan Yu, Zhiqi Li, Yihang Chen, Yajun Zou, Yuqing Liu, Xuanwu Yin, and Kunlong Zuo. Swift parameter-free attention network for efficient super-resolution, 2024.

  67. [67]

    Smfanet: A lightweight self-modulation feature aggregation network for efficient image super-resolution

    Mingjun Zheng, Long Sun, Jiangxin Dong, and Jinshan Pan. Smfanet: A lightweight self-modulation feature aggregation network for efficient image super-resolution. In ECCV, 2024.

  68. [68]

    NTIRE 2017 challenge on single image super-resolution: Methods and results

    Radu Timofte, Eirikur Agustsson, Luc Van Gool, Ming-Hsuan Yang, Lei Zhang, et al. NTIRE 2017 challenge on single image super-resolution: Methods and results. In CVPR Workshops, 2017.

  69. [69]

    Swinir: Image restoration using swin transformer

    Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. Swinir: Image restoration using swin transformer. In ICCVW, 2021.

  70. [70]

    Mamba: Linear-Time Sequence Modeling with Selective State Spaces

    Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023.

  71. [71]

    Efficient attention-sharing information distillation transformer for lightweight single image super-resolution

    Karam Park, Jae Woong Soh, and Nam Ik Cho. Efficient attention-sharing information distillation transformer for lightweight single image super-resolution. In AAAI, volume 39, pages 6416–6424, 2025.

  72. [72]

    Large kernel modulation network for efficient image super-resolution

    Quanwei Hu, Yinggan Tang, and Xuguang Zhang. Large kernel modulation network for efficient image super-resolution. arXiv preprint arXiv:2508.11893, 2025.

  73. [73]

    Omni aggregation networks for lightweight image super-resolution

    Hang Wang, Xuanhong Chen, Bingbing Ni, Yutian Liu, and Jinfan Liu. Omni aggregation networks for lightweight image super-resolution. In CVPR, pages 22378–22387, 2023.

  74. [74]

    Activating more pixels in image super-resolution transformer

    Xiangyu Chen, Xintao Wang, Jiantao Zhou, Yu Qiao, and Chao Dong. Activating more pixels in image super-resolution transformer. In CVPR, pages 22367–22377, 2023.

  75. [75]

    Residual feature distillation network for lightweight image super-resolution, 2020

    Jie Liu, Jie Tang, and Gangshan Wu. Residual feature distillation network for lightweight image super-resolution, 2020.

  76. [76]

    Lsknet: Large selective kernel network for remote sensing object detection

    Yuxuan Li, Qibin Hou, and Z Zheng. Lsknet: Large selective kernel network for remote sensing object detection. In ICCV, pages 4–6, 2023.

  77. [77]

    Efficient single image super-resolution with entropy attention and receptive field augmentation

    Xiaole Zhao, Linze Li, Chengxing Xie, Xiaoming Zhang, Ting Jiang, Wenjie Lin, Shuaicheng Liu, and Tianrui Li. Efficient single image super-resolution with entropy attention and receptive field augmentation. In ACM MM, pages 1302–1310, 2024.

  78. [78]

    Lightweight image super-resolution with information multi-distillation network

    Zheng Hui, Xinbo Gao, Yunchu Yang, and Xiumei Wang. Lightweight image super-resolution with information multi-distillation network. In ACM MM, pages 2024–2032, 2019.

  79. [79]

    Ntire 2017 challenge on single image super-resolution: Methods and results

    Radu Timofte, Eirikur Agustsson, Luc Van Gool, Ming-Hsuan Yang, and Lei Zhang. Ntire 2017 challenge on single image super-resolution: Methods and results. In CVPR Workshops, pages 114–125, 2017.

  80. [80]

    Mambairv2: Attentive state space restoration

    Hang Guo, Yong Guo, Yaohua Zha, Yulun Zhang, Wenbo Li, Tao Dai, Shu-Tao Xia, and Yawei Li. Mambairv2: Attentive state space restoration. In CVPR, 2025.

Showing first 80 references.