Recognition: 2 Lean theorem links
NTIRE 2026 3D Restoration and Reconstruction in Real-world Adverse Conditions: RealX3D Challenge Results
Pith reviewed 2026-05-13 16:51 UTC · model grok-4.3
The pith
Shared design principles from top entries advance 3D reconstruction in low-light and smoky scenes.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper establishes that robust 3D reconstruction pipelines can be achieved under adverse conditions by following shared design principles identified from the top-performing submissions in the RealX3D challenge, as demonstrated by their superior performance over state-of-the-art baselines on the benchmark dataset.
What carries the argument
The RealX3D benchmark dataset of real captured scenes under low-light and smoke degradation, which serves as the test platform to compare submissions and extract effective strategies for handling 3D scene degradation.
If this is right
- Top-performing methods achieve higher accuracy than prior baselines when reconstructing 3D scenes from degraded images.
- Common design principles can be directly adopted to strengthen existing 3D reconstruction systems for real environments.
- The challenge creates a public standard for measuring progress on 3D reconstruction under combined degradations.
- Insights from the submissions guide future work toward methods that handle multiple adverse factors at once.
Where Pith is reading between the lines
- The identified principles could be tested on dynamic scenes or video input to check if they extend beyond static images.
- Adding other common degradations such as rain or fog to future benchmarks would test how general the principles are.
- Integrating the successful strategies with depth sensors or multi-view data might produce even stronger real-world systems.
Load-bearing premise
The RealX3D benchmark together with the submitted methods gives a representative picture of real-world performance under adverse conditions.
What would settle it
A new method that scores high on RealX3D but performs poorly on a separate collection of real low-light and smoke scenes would show the benchmark does not fully represent the problem.
Original abstract
This paper presents a comprehensive review of the NTIRE 2026 3D Restoration and Reconstruction (3DRR) Challenge, detailing the proposed methods and results. The challenge seeks to identify reconstruction pipelines that are robust under real-world adverse conditions, specifically extreme low-light and smoke-degraded environments, as captured by our RealX3D benchmark. A total of 279 participants registered for the competition, of whom 33 teams submitted valid results. We thoroughly evaluate the submitted approaches against state-of-the-art baselines, revealing significant progress in 3D reconstruction under adverse conditions. Our analysis highlights shared design principles among top-performing methods and provides insights into effective strategies for handling 3D scene degradation.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper reports results from the NTIRE 2026 3D Restoration and Reconstruction Challenge on the RealX3D benchmark for 3D scene reconstruction under extreme low-light and smoke-degraded conditions. It describes participation (279 registered, 33 valid submissions), evaluates submitted methods against state-of-the-art baselines, claims significant progress, and identifies shared design principles (e.g., fusion, denoising, depth priors) among top entries as effective strategies for adverse 3D reconstruction.
Significance. If the benchmark faithfully represents real-world degradations and the shared-principle analysis is supported by controlled evidence, the work supplies practical guidance for robust 3D pipelines in robotics, autonomous systems, and surveillance under uncontrolled adverse conditions.
Major comments (3)
- [§4] §4 (Results and Evaluation): the claim of 'significant progress' over baselines is stated without accompanying quantitative metrics, error bars, statistical tests, or per-scene breakdowns; the abstract and summary sections alone do not allow verification of the magnitude or consistency of improvement.
- [§5] §5 (Analysis of Design Principles): the post-hoc categorization of shared strategies (fusion, denoising, depth priors) is presented descriptively; no ablation studies on RealX3D are reported that isolate each principle's contribution while controlling for benchmark-specific artifacts such as synthetic smoke density or fixed trajectories.
- [§3] §3 (RealX3D Benchmark): the capture protocol and degradation statistics are summarized but lack explicit comparison (e.g., KL divergence or perceptual metrics) to uncontrolled field data, leaving open the risk that top-method commonalities reflect benchmark idiosyncrasies rather than transferable robustness.
Minor comments (2)
- [Table 1] Table 1 (participation statistics): add the exact number of test scenes and the train/validation/test split ratios for reproducibility.
- [Figure 3] Figure 3 (qualitative results): include failure cases alongside success examples to balance the visual assessment of top methods.
Simulated Author's Rebuttal
We thank the referee for the thorough and constructive feedback on our manuscript. We have carefully considered each major comment and provide point-by-point responses below, indicating where revisions will be made to strengthen the paper.
Point-by-point responses
-
Referee: [§4] §4 (Results and Evaluation): the claim of 'significant progress' over baselines is stated without accompanying quantitative metrics, error bars, statistical tests, or per-scene breakdowns; the abstract and summary sections alone do not allow verification of the magnitude or consistency of improvement.
Authors: We acknowledge that the high-level claims in the abstract require supporting details for verification. The full manuscript contains comprehensive tables in §4 reporting quantitative metrics such as PSNR, SSIM, and Chamfer distance for all 33 submissions against baselines. To fully address this concern, we will revise §4 to include error bars from multiple evaluation runs, statistical tests for significance, and per-scene performance breakdowns. This will substantiate the 'significant progress' claim with verifiable evidence. revision: yes
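The evaluation machinery this response promises can be stated concretely. Below is a minimal sketch (illustrative only, not the challenge's official evaluation code; all scores are invented) of a PSNR computation and a percentile-bootstrap confidence interval of the kind that would supply the requested per-scene error bars:

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB for images with values in [0, peak]."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(est, float)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

def bootstrap_ci(per_scene_scores, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the mean per-scene score (the 'error bars')."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(per_scene_scores, float)
    # Resample scenes with replacement and take the mean of each resample.
    means = rng.choice(scores, size=(n_boot, scores.size), replace=True).mean(axis=1)
    lo, hi = np.quantile(means, [alpha / 2, 1.0 - alpha / 2])
    return float(lo), float(hi)

# Toy check: a uniform 0.1 offset from the reference gives MSE 0.01, i.e. ~20 dB.
ref = np.zeros((8, 8))
est = np.full((8, 8), 0.1)
print(psnr(ref, est))

# Invented per-scene PSNR scores for one hypothetical submission.
print(bootstrap_ci([21.3, 22.1, 20.8, 23.0, 21.7, 22.4, 21.1]))
```

A per-scene breakdown would simply report the individual scores alongside the interval; a paired test (e.g. Wilcoxon signed-rank over scenes) would address the significance question.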
-
Referee: [§5] §5 (Analysis of Design Principles): the post-hoc categorization of shared strategies (fusion, denoising, depth priors) is presented descriptively; no ablation studies on RealX3D are reported that isolate each principle's contribution while controlling for benchmark-specific artifacts such as synthetic smoke density or fixed trajectories.
Authors: The categorization in §5 is based on a systematic review of the top-performing submissions' technical reports and their relative rankings on the RealX3D benchmark. Performing exhaustive ablations for each principle would necessitate re-implementing and evaluating numerous complex pipelines, which exceeds the typical scope of a challenge results paper. Nevertheless, we will add a new analysis subsection with targeted ablations on representative top methods, controlling for factors like smoke density and trajectory variations where possible, to provide more rigorous support for the identified principles. revision: partial
-
Referee: [§3] §3 (RealX3D Benchmark): the capture protocol and degradation statistics are summarized but lack explicit comparison (e.g., KL divergence or perceptual metrics) to uncontrolled field data, leaving open the risk that top-method commonalities reflect benchmark idiosyncrasies rather than transferable robustness.
Authors: RealX3D is constructed from real-world captures in low-light and smoke conditions with accurate ground-truth 3D data obtained via controlled setups. While we did not include direct distributional comparisons such as KL divergence to additional uncontrolled field datasets in the original submission, we will expand §3 with a discussion of the benchmark's alignment to real adverse conditions, including references to perceptual studies and domain adaptation literature. We note that acquiring and statistically comparing to new uncontrolled field data would require substantial additional resources and is not feasible for this revision cycle. revision: partial
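The distributional comparison the referee asks for is cheap to prototype: pool luminance histograms from the benchmark and from a reference capture set, then report the KL divergence between them. A minimal sketch, assuming grayscale images scaled to [0, 1]; the data here is a synthetic stand-in, not RealX3D:

```python
import numpy as np

def luminance_hist(images, bins=64):
    """Normalized pooled luminance histogram over images with values in [0, 1]."""
    values = np.concatenate([np.asarray(img, float).ravel() for img in images])
    hist, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
    p = hist.astype(float) + 1e-12  # smoothing keeps KL finite on empty bins
    return p / p.sum()

def kl_divergence(p, q):
    """KL(p || q) in nats between two discrete distributions of equal length."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
# Synthetic stand-ins: both sets skew dark, the 'field' set slightly less so.
benchmark = [rng.beta(2.0, 8.0, size=(64, 64)) for _ in range(5)]
field = [rng.beta(2.0, 6.0, size=(64, 64)) for _ in range(5)]
p, q = luminance_hist(benchmark), luminance_hist(field)
print(kl_divergence(p, q))  # values near 0 indicate similar luminance statistics
```

The same recipe extends to smoke-density proxies (e.g. dark-channel statistics) and to perceptual embeddings, which is the kind of evidence that would separate benchmark idiosyncrasy from transferable robustness.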
Circularity Check
Empirical challenge report shows no circularity
Full rationale
The paper is a competition results report that presents empirical evaluations of 33 submitted methods on the RealX3D benchmark, compares them to baselines, and offers descriptive observations on shared design principles. No mathematical derivations, first-principles predictions, fitted parameters renamed as outputs, or self-citation chains appear in the text. All claims rest on external submissions and measured performance metrics rather than internal definitions or reductions. This is a standard, self-contained empirical summary with no load-bearing circular steps.
Axiom & Free-Parameter Ledger
Lean theorems connected to this paper
-
IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel
unclear: relation between the paper passage and the cited Recognition theorem.
Our analysis highlights shared design principles among top-performing methods... fusion-guided... ISP supervision... Naka correlation... global-residual decomposition
-
IndisputableMonolith/Foundation/RealityFromDistinction.lean · reality_from_one_distinction
unclear: relation between the paper passage and the cited Recognition theorem.
RealX3D benchmark... 7 distinct scenes... PSNR and SSIM... average PSNR across all scenes
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Forward citations
Cited by 7 Pith papers
-
Dehaze-then-Splat: Generative Dehazing with Physics-Informed 3D Gaussian Splatting for Smoke-Free Novel View Synthesis
Dehaze-then-Splat uses per-frame generative dehazing followed by physics-regularized 3D Gaussian Splatting to achieve 20.98 dB PSNR and 0.683 SSIM on the Akikaze scene, a 1.5 dB gain over baseline by mitigating cross-...
-
3D Smoke Scene Reconstruction Guided by Vision Priors from Multimodal Large Language Models
A framework that combines MLLM-based image enhancement with a medium-aware 3D Gaussian Splatting model to reconstruct and render smoke scenes.
-
CLIP-Guided Data Augmentation for Night-Time Image Dehazing
CLIP-guided selection of external data plus staged NAFNet training and inference fusion provides an effective pipeline for nighttime image dehazing in the NTIRE 2026 challenge.
-
Training-Free Model Ensemble for Single-Image Super-Resolution via Strong-Branch Compensation
A dual-branch training-free ensemble fuses a hybrid attention network with a Mamba-based model via weighted combination to enhance super-resolution PSNR on DIV2K x4.
-
Dual-Branch Remote Sensing Infrared Image Super-Resolution
Dual-branch fusion of HAT-L and MambaIRv2-L with eight-way ensemble and equal-weight averaging outperforms single branches on PSNR, SSIM, and challenge score for infrared super-resolution.
-
SmokeGS-R: Physics-Guided Pseudo-Clean 3DGS for Real-World Multi-View Smoke Restoration
SmokeGS-R uses refined dark channel prior for pseudo-clean supervision to train 3DGS geometry, followed by ensemble-based appearance harmonization, achieving PSNR 15.21 and outperforming baselines on smoke restoration...
-
Beyond Model Design: Data-Centric Training and Self-Ensemble for Gaussian Color Image Denoising
Expanding training data diversity, adopting two-stage optimization, and applying geometric self-ensemble raises Restormer performance on Gaussian color denoising at sigma=50 by 3.366 dB PSNR on the NTIRE 2026 validation set.
Reference graph
Works this paper leans on
-
[1]
NTIRE 2026 nighttime image dehazing challenge report
Radu Ancuti, Alexandru Brateanu, Florin Vasluianu, Raul Balmez, Ciprian Orhei, Codruta Ancuti, Radu Timofte, Cosmin Ancuti, et al. NTIRE 2026 nighttime image dehazing challenge report. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026.
work page 2026
-
[2]
Jie Cai, Kangning Yang, Zhiyuan Li, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 challenge on single image reflection removal in the wild: Datasets, results, and methods. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026.
work page 2026
-
[3]
Retinexformer: One-stage retinex-based transformer for low-light image enhancement
Yuanhao Cai, Hao Bian, Jing Lin, Haoqian Wang, Radu Timofte, and Yulun Zhang. Retinexformer: One-stage retinex-based transformer for low-light image enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12504–12513, 2023.
work page 2023
-
[4]
Qida Cao, Xinyuan Hu, Changyue Shi, Jiajun Ding, Zhou Yu, and Jun Yu. Gensmoke-gs: A multi-stage method for novel view synthesis from smoke-degraded images using a generative model. arXiv preprint arXiv:2604.03039, 2026.
work page internal anchor Pith review Pith/arXiv arXiv 2026
-
[5]
Beyond Model Design: Data-Centric Training and Self-Ensemble for Gaussian Color Image Denoising
Gengjia Chang, Xining Ge, Weijun Yuan, Zhan Li, Qiurong Song, Luen Zhu, and Shuhong Liu. Beyond model design: Data-centric training and self-ensemble for gaussian color image denoising. arXiv preprint arXiv:2604.11468, 2026.
work page internal anchor Pith review Pith/arXiv arXiv 2026
-
[6]
Training-Free Model Ensemble for Single-Image Super-Resolution via Strong-Branch Compensation
Gengjia Chang, Xining Ge, Weijun Yuan, Zhan Li, Qiurong Song, Luen Zhu, and Shuhong Liu. Training-free model ensemble for single-image super-resolution via strong-branch compensation. arXiv preprint arXiv:2604.11564, 2026.
work page internal anchor Pith review Pith/arXiv arXiv 2026
-
[7]
A Survey on 3D Gaussian Splatting
Guikun Chen and Wenguan Wang. A survey on 3d gaussian splatting. arXiv preprint arXiv:2401.03890, 2024.
work page internal anchor Pith review Pith/arXiv arXiv 2024
-
[8]
Xiaoxue Chen, Ziyi Xiong, Yuantao Chen, Gen Li, Nan Wang, Hongcheng Luo, Long Chen, Haiyang Sun, Bing Wang, Guang Chen, et al. Dggt: Feedforward 4d reconstruction of dynamic driving scenes using unposed images. arXiv preprint arXiv:2512.03004, 2025.
-
[9]
Yuchao Chen and Hanqing Wang. Dehaze-then-splat: Generative dehazing with physics-informed 3d gaussian splatting for smoke-free novel view synthesis. arXiv preprint arXiv:2604.13589, 2026.
work page internal anchor Pith review Pith/arXiv arXiv 2026
-
[10]
Low light image enhancement challenge at NTIRE 2026
George Ciubotariu, Sharif S M A, Abdur Rehman, Fayaz Ali, Rizwan Ali Naqvi, Marcos Conde, Radu Timofte, et al. Low light image enhancement challenge at NTIRE 2026. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026.
work page 2026
-
[11]
High FPS video frame interpolation challenge at NTIRE 2026
George Ciubotariu, Zhuyun Zhou, Yeying Jin, Zongwei Wu, Radu Timofte, et al. High FPS video frame interpolation challenge at NTIRE 2026. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026.
work page 2026
-
[12]
Revitalizing convolutional network for image restoration
Yuning Cui, Wenqi Ren, Xiaochun Cao, and Alois Knoll. Revitalizing convolutional network for image restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46(12):9423–9438, 2024.
work page 2024
-
[13]
Eenet: An effective and efficient network for single image dehazing
Yuning Cui, Qiang Wang, Chaopeng Li, Wenqi Ren, and Alois Knoll. Eenet: An effective and efficient network for single image dehazing. Pattern Recognition, 158:111074,
-
[14]
Aleth-nerf: Illumination adaptive nerf with concealing field assumption
Ziteng Cui, Lin Gu, Xiao Sun, Xianzheng Ma, Yu Qiao, and Tatsuya Harada. Aleth-nerf: Illumination adaptive nerf with concealing field assumption. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1435–1444,
-
[15]
Ziteng Cui, Xuangeng Chu, and Tatsuya Harada. Luminance-gs: Adapting 3d gaussian splatting to challenging lighting conditions with view-adaptive curve adjustment. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 26472–26482, 2025.
work page 2025
-
[16]
Ziteng Cui, Shuhong Liu, Xiaoyu Dong, Xuangeng Chu, Lin Gu, Ming-Hsuan Yang, and Tatsuya Harada. Unifying color and lightness correction with view-adaptive curve adjustment for robust 3d novel view synthesis. arXiv preprint arXiv:2602.18322, 2026.
-
[17]
Dongrui Dai and Yuxiang Xing. Eap-gs: efficient augmentation of pointcloud for 3d gaussian splatting in few-shot scene reconstruction. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 16498–16507, 2025.
work page 2025
-
[18]
Photography retouching transfer, NTIRE 2026 challenge: Report
Omar Elezabi, Marcos V. Conde, Zongwei Wu, Yeying Jin, Radu Timofte, et al. Photography retouching transfer, NTIRE 2026 challenge: Report. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026.
work page 2026
-
[19]
Ben Fei, Jingyi Xu, Rui Zhang, Qingyuan Zhou, Weidong Yang, and Ying He. 3d gaussian splatting as new era: A survey. IEEE Transactions on Visualization and Computer Graphics, 2024.
work page 2024
-
[20]
Darkir: Robust low-light image restoration
Daniel Feijoo, Juan C Benito, Alvaro Garcia, and Marcos V Conde. Darkir: Robust low-light image restoration. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 10879–10889, 2025.
work page 2025
-
[21]
Efficient real-world deblurring using single images: AIM 2025 challenge report
Daniel Feijoo, Paula Garrido, Marcos Conde, Jaesung Rim, Alvaro Garcia, Sunghyun Cho, Radu Timofte, et al. Efficient real-world deblurring using single images: AIM 2025 challenge report. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2025.
work page 2025
-
[22]
SmokeGS-R: Physics-Guided Pseudo-Clean 3DGS for Real-World Multi-View Smoke Restoration
Xueming Fu and Lixia Han. Smokegs-r: Physics-guided pseudo-clean 3dgs for real-world multi-view smoke restoration. arXiv preprint arXiv:2604.05301, 2026.
work page internal anchor Pith review Pith/arXiv arXiv 2026
-
[23]
Dual-Branch Remote Sensing Infrared Image Super-Resolution
Xining Ge, Gengjia Chang, Weijun Yuan, Zhan Li, Zhanglu Chen, Boyang Yao, Yihang Chen, Yifan Deng, and Shuhong Liu. Dual-branch remote sensing infrared image super-resolution. arXiv preprint arXiv:2604.10112, 2026.
work page internal anchor Pith review Pith/arXiv arXiv 2026
-
[24]
CLIP-Guided Data Augmentation for Night-Time Image Dehazing
Xining Ge, Weijun Yuan, Gengjia Chang, Xuyang Li, and Shuhong Liu. Clip-guided data augmentation for night-time image dehazing. arXiv preprint arXiv:2604.05500, 2026.
work page internal anchor Pith review Pith/arXiv arXiv 2026
-
[25]
Zero-reference deep curve estimation for low-light image enhancement
Chunle Guo, Chongyi Li, Jichang Guo, Chen Change Loy, Junhui Hou, Sam Kwong, and Runmin Cong. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1780–1789, 2020.
work page 2020
-
[26]
Reliability-aware staged low-light gaussian splatting
Haojie Guo and Ke Xian. Reliability-aware staged low-light gaussian splatting. ResearchGate preprint, 2026.
work page 2026
-
[27]
NTIRE 2026 challenge on robust AI-generated image detection in the wild
Aleksandr Gushchin, Khaled Abud, Ekaterina Shumitskaya, Artem Filippov, Georgii Bychkov, Sergey Lavrushkin, Mikhail Erofeev, Anastasia Antsiferova, Changsheng Chen, Shunquan Tan, Radu Timofte, Dmitriy Vatolin, et al. NTIRE 2026 challenge on robust AI-generated image detection in the wild. In Proceedings of the Computer Vision and Pattern Recognition Confer...
work page 2026
-
[28]
Florian Hahlbohm, Linus Franke, Martin Eisemann, and Marcus Magnor. Faster-gs: Analyzing and improving gaussian splatting optimization. arXiv preprint arXiv:2602.09999, 2026.
-
[29]
Kaiming He, Jian Sun, and Xiaoou Tang. Single image haze removal using dark channel prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(12):2341–2353,
-
[30]
Robust deepfake detection, NTIRE 2026 challenge: Report
Benedikt Hopf, Radu Timofte, et al. Robust deepfake detection, NTIRE 2026 challenge: Report. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026.
work page 2026
-
[31]
2d gaussian splatting for geometrically accurate radiance fields
Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, and Shenghua Gao. 2d gaussian splatting for geometrically accurate radiance fields. In ACM SIGGRAPH 2024 conference papers, pages 1–11, 2024.
work page 2024
-
[32]
3d gaussian splatting for real-time radiance field rendering
Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, George Drettakis, et al. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 42(4):139–1,
-
[33]
Shakiba Kheradmand, Daniel Rebain, Gopal Sharma, Weiwei Sun, Yang-Che Tseng, Hossam Isack, Abhishek Kar, Andrea Tagliasacchi, and Kwang Moo Yi. 3d gaussian splatting as markov chain monte carlo. Advances in Neural Information Processing Systems, 37:80965–80986, 2024.
work page 2024
-
[34]
Efficient diffusion as low light enhancer
Guanzhou Lan, Qianli Ma, Yuqi Yang, Zhigang Wang, Dong Wang, Xuelong Li, and Bin Zhao. Efficient diffusion as low light enhancer. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 21277–21286, 2025.
work page 2025
-
[35]
Seathru-nerf: Neural radiance fields in scattering media
Deborah Levy, Amit Peleg, Naama Pearl, Dan Rosenbaum, Derya Akkaynak, Simon Korman, and Tali Treibitz. Seathru-nerf: Neural radiance fields in scattering media. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 56–65, 2023.
work page 2023
-
[36]
Mair: A locality- and continuity-preserving mamba for image restoration
Boyun Li, Haiyu Zhao, Wenxin Wang, Peng Hu, Yuanbiao Gou, and Xi Peng. Mair: A locality- and continuity-preserving mamba for image restoration. In IEEE Conference on Computer Vision and Pattern Recognition, Nashville, TN, 2025.
work page 2025
-
[37]
Chongyi Li, Chunle Guo, and Chen Change Loy. Learning to enhance low-light image via zero-reference deep curve estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
work page 2021
-
[38]
Watersplatting: Fast underwater 3d scene reconstruction using gaussian splatting
Huapeng Li, Wenxuan Song, Tianao Xu, Alexandre Elsig, and Jonas Kulhanek. Watersplatting: Fast underwater 3d scene reconstruction using gaussian splatting. In 2025 International Conference on 3D Vision (3DV), pages 969–978. IEEE, 2025.
work page 2025
-
[39]
Jiatong Li, Zheng Chen, Kai Liu, Jingkai Wang, Zihan Zhou, Xiaoyang Liu, Libo Zhu, Radu Timofte, Yulun Zhang, et al. The first challenge on mobile real-world image super-resolution at NTIRE 2026: Benchmark results and method overview. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026.
work page 2026
-
[40]
Mingrui Li, Shuhong Liu, Tianchen Deng, and Hongyu Wang. Densesplat: Densifying gaussian splatting slam with neural radiance prior. IEEE Transactions on Visualization & Computer Graphics, (01):1–14, 2025.
work page 2025
-
[41]
Sgs-slam: Semantic gaussian splatting for neural dense slam
Mingrui Li, Shuhong Liu, Heng Zhou, Guohao Zhu, Na Cheng, Tianchen Deng, and Hongyu Wang. Sgs-slam: Semantic gaussian splatting for neural dense slam. In European Conference on Computer Vision, pages 163–179, 2025.
work page 2025
-
[42]
Xin Li, Jiachao Gong, Xijun Wang, Shiyao Xiong, Bingchen Li, Suhang Yao, Chao Zhou, Zhibo Chen, Radu Timofte, et al. NTIRE 2026 challenge on short-form UGC video restoration in the wild with generative models: Datasets, methods and results. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026.
work page 2026
-
[43]
Xin Li, Yeying Jin, Suhang Yao, Beibei Lin, Zhaoxin Fan, Wending Yan, Xin Jin, Zongwei Wu, Bingchen Li, Peishu Shi, Yufei Yang, Yu Li, Zhibo Chen, Bihan Wen, Robby Tan, Radu Timofte, et al. NTIRE 2026 the second challenge on day and night raindrop removal for dual-focused images: Methods and results. InProceedings of the Computer Vision and Pattern Recogn...
work page 2026
-
[44]
Depth Anything 3: Recovering the Visual Space from Any Views
Haotong Lin, Sili Chen, Jun Hao Liew, Donny Y. Chen, Zhenyu Li, Guang Shi, Jiashi Feng, and Bingyi Kang. Depth anything 3: Recovering the visual space from any views. arXiv preprint arXiv:2511.10647, 2025.
work page internal anchor Pith review Pith/arXiv arXiv 2025
-
[45]
Kai Liu, Haoyang Yue, Zeli Lin, Zheng Chen, Jingkai Wang, Jue Gong, Radu Timofte, Yulun Zhang, et al. The first challenge on remote sensing infrared image super-resolution at NTIRE 2026: Benchmark results and method overview. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026.
work page 2026
-
[46]
Shuhong Liu, Xiang Chen, Hongming Chen, Quanfeng Xu, and Mingrui Li. Deraings: Gaussian splatting for enhanced scene reconstruction in rainy environments. Proceedings of the AAAI Conference on Artificial Intelligence, 39(5):5558–5566, 2025.
work page 2025
-
[47]
Shuhong Liu, Tianchen Deng, Heng Zhou, Liuzhuozheng Li, Hongyu Wang, Danwei Wang, and Mingrui Li. Mg-slam: Structure gaussian splatting slam with manhattan world hypothesis. IEEE Transactions on Automation Science and Engineering, 22:17034–17049, 2025.
work page 2025
-
[48]
I2-nerf: Learning neural radiance fields under physically-grounded media interactions
Shuhong Liu, Lin Gu, Ziteng Cui, Xuangeng Chu, and Tatsuya Harada. I2-nerf: Learning neural radiance fields under physically-grounded media interactions. In Advances in Neural Information Processing Systems, 2025.
work page 2025
-
[49]
Shuhong Liu, Chenyu Bao, Ziteng Cui, Yun Liu, Xuangeng Chu, Lin Gu, Marcos V Conde, Ryo Umagami, Tomohiro Hashimoto, Zijian Hu, et al. Realx3d: A physically-degraded 3d benchmark for multi-view visual restoration and reconstruction. arXiv preprint arXiv:2512.23437, 2026.
-
[50]
Shuhong Liu, Xining Ge, Ziying Gu, Lin Gu, Ziteng Cui, Xuangeng Chu, Jun Liu, Dong Li, and Tatsuya Harada. Denoising the deep sky: Physics-based ccd noise formation for astronomical imaging. arXiv preprint arXiv:2601.23276,
-
[51]
NTIRE 2026 X-AIGC quality assessment challenge: Methods and results
Xiaohong Liu, Xiongkuo Min, Guangtao Zhai, Qiang Hu, Jiezhang Cao, Yu Zhou, Wei Sun, Farong Wen, Zitong Xu, Yingjie Zhou, Huiyu Duan, Lu Liu, Jiarui Wang, Siqi Luo, Chunyi Li, Li Xu, Zicheng Zhang, Yue Shi, Yubo Wang, Minghong Zhang, Chunchao Guo, Zhichao Hu, Mingtao Chen, Xiele Wu, Xin Ma, Zhaohe Lv, Yuanhao Xue, Jiaqi Wang, Xinxing Sha, Radu Timofte, et...
work page 2026
-
[52]
Yun Liu, Tao Li, Chunping Tan, Wenqi Ren, Cosmin Ancuti, and Weisi Lin. Ihdcp: Single image dehazing using inverted haze density correction prior. IEEE Transactions on Image Processing, 2026.
work page 2026
-
[53]
Yuhao Liu, Dingju Wang, and Ziyang Zheng. Elog-gs: Dual-branch gaussian splatting with luminance-guided enhancement for extreme low-light 3d reconstruction. arXiv preprint arXiv:2604.12592, 2026.
work page internal anchor Pith review Pith/arXiv arXiv 2026
-
[54]
Scaffold-gs: Structured 3d gaussians for view-adaptive rendering
Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, and Bo Dai. Scaffold-gs: Structured 3d gaussians for view-adaptive rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20654–20664, 2024.
work page 2024
-
[55]
Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106, 2021.
work page 2021
-
[56]
NTIRE 2026 challenge on video saliency prediction: Methods and results
Andrey Moskalenko, Alexey Bryncev, Ivan Kosmynin, Kira Shilovskaya, Mikhail Erofeev, Dmitry Vatolin, Radu Timofte, et al. NTIRE 2026 challenge on video saliency prediction: Methods and results. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026.
work page 2026
-
[57]
Introducing gpt image 1.5
OpenAI. Introducing gpt image 1.5. https://openai.com/index/new-chatgpt-images-is-here/,
-
[58]
NTIRE 2026 challenge on learned smartphone ISP with unpaired data: Methods and results
Georgy Perevozchikov, Daniil Vladimirov, Radu Timofte, et al. NTIRE 2026 challenge on learned smartphone ISP with unpaired data: Methods and results. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026.
work page 2026
-
[59]
Guanyi Qin, Jie Liang, Bingbing Zhang, Lishen Qu, Ya-nan Guan, Hui Zeng, Lei Zhang, Radu Timofte, et al. NTIRE 2026 the 3rd restore any image model (RAIM) challenge: Professional image quality assessment (track 1). In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026.
work page 2026
-
[60]
The second challenge on cross-domain few-shot object detection at NTIRE 2026: Methods and results
Xingyu Qiu, Yuqian Fu, Jiawei Geng, Bin Ren, Jiancheng Pan, Zongwei Wu, Hao Tang, Yanwei Fu, Radu Timofte, Nicu Sebe, Mohamed Elhoseiny, et al. The second challenge on cross-domain few-shot object detection at NTIRE 2026: Methods and results. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026.
work page 2026
-
[61]
Mb-taylorformer: Multi-branch efficient transformer expanded by taylor formula for image dehazing
Yuwei Qiu, Kaihao Zhang, Chenxi Wang, Wenhan Luo, Hongdong Li, and Zhi Jin. Mb-taylorformer: Multi-branch efficient transformer expanded by taylor formula for image dehazing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12802–12813, 2023.
work page 2023
-
[62]
The eleventh NTIRE 2026 efficient super-resolution challenge report
Bin Ren, Hang Guo, Yan Shu, Jiaqi Ma, Ziteng Cui, Shuhong Liu, Guofeng Mei, Lei Sun, Zongwei Wu, Fahad Shahbaz Khan, Salman Khan, Radu Timofte, Yawei Li, et al. The eleventh NTIRE 2026 efficient super-resolution challenge report. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026.
work page 2026
-
[63]
U-net: Convolutional networks for biomedical image segmentation
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015.
work page 2015
-
[64]
The first controllable bokeh rendering challenge at NTIRE 2026
Tim Seizinger, Florin-Alexandru Vasluianu, Marcos V. Conde, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. The first controllable bokeh rendering challenge at NTIRE 2026. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026.
[65] Yuda Song, Zhuqing He, Hui Qian, and Xin Du. Vision transformers for single image dehazing. IEEE Transactions on Image Processing, 32:1927–1941, 2023. 14
[66] Lei Sun, Hang Guo, Bin Ren, Shaolin Su, Xian Wang, Danda Pani Paudel, Luc Van Gool, Radu Timofte, Yawei Li, et al. The third challenge on image denoising at NTIRE 2026: Methods and results. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026. 2
[67] Lei Sun, Weilun Li, Xian Wang, Zhendong Li, Letian Shi, Dannong Xu, Deheng Zhang, Mengshun Hu, Shuang Guo, Shaolin Su, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. The second challenge on event-based image deblurring at NTIRE 2026: Methods and results. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026. 2
[68] Lei Sun, Xiaolong Qian, Qi Jiang, Xian Wang, Yao Gao, Kailun Yang, Kaiwei Wang, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. NTIRE 2026 the first challenge on blind computational aberration correction: Methods and results. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026. 2
[69] Florin-Alexandru Vasluianu, Tim Seizinger, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. Learning-based ambient lighting normalization: NTIRE 2026 challenge results and findings. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026. 2
[70] Florin-Alexandru Vasluianu, Tim Seizinger, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. Advances in single-image shadow removal: Results from the NTIRE 2026 challenge. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026. 2
[71] Jianyuan Wang, Minghao Chen, Nikita Karaev, Andrea Vedaldi, Christian Rupprecht, and David Novotny. VGGT: Visual geometry grounded transformer. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 5294–5306, 2025. 5, 6
[72] Jingkai Wang, Jue Gong, Zheng Chen, Kai Liu, Jiatong Li, Yulun Zhang, Radu Timofte, et al. The second challenge on real-world face restoration at NTIRE 2026: Methods and results. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026. 2
[73] Longguang Wang, Yulan Guo, Yingqian Wang, Juncheng Li, Sida Peng, Ye Zhang, Radu Timofte, Minglin Chen, Yi Wang, Qibin Hu, Wenjie Lei, et al. NTIRE 2026 challenge on 3D content super-resolution: Methods and results. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026. 2
[74] Ruicheng Wang, Sicheng Xu, Yue Dong, Yu Deng, Jianfeng Xiang, Zelong Lv, Guangzhong Sun, Xin Tong, and Jiaolong Yang. MoGe-2: Accurate monocular geometry with metric scale and sharp details. arXiv preprint arXiv:2507.02546, 2025.
[75] Yingqian Wang, Zhengyu Liang, Fengyuan Zhang, Wending Zhao, Longguang Wang, Juncheng Li, Jungang Yang, Radu Timofte, Yulan Guo, et al. NTIRE 2026 challenge on light field image super-resolution: Methods and results. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026. 2
[76] Jay Zhangjie Wu, Yuxuan Zhang, Haithem Turki, Xuanchi Ren, Jun Gao, Mike Zheng Shou, Sanja Fidler, Zan Gojcic, and Huan Ling. Difix3D+: Improving 3D reconstructions with single-step diffusion models. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 26024–26035, 2025. 11
[77] Jiebin Yan, Chenyu Tu, Qinghua Lin, Zongwei Wu, Weixia Zhang, Zhihua Wang, Peibei Cao, Yuming Fang, Xiaoning Liu, Zhuyun Zhou, Radu Timofte, et al. Efficient low light image enhancement: NTIRE 2026 challenge report. In Proceedings of the Computer Vision and Pattern Recognition Conference Workshops, 2026. 2
[78] Qingsen Yan, Yixu Feng, Cheng Zhang, Guansong Pang, Kangbiao Shi, Peng Wu, Wei Dong, Jinqiu Sun, and Yanning Zhang. HVI: A new color space for low-light image enhancement. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 5678–5687, 2025. 3, 4, 5, 9
[79] Yunzhi Yan, Haotong Lin, Chenxu Zhou, Weijie Wang, Haiyang Sun, Kun Zhan, Xianpeng Lang, Xiaowei Zhou, and Sida Peng. Street Gaussians: Modeling dynamic urban scenes with Gaussian splatting. In European Conference on Computer Vision, pages 156–173. Springer, 2024. 1
[80] Lihe Yang, Bingyi Kang, Zilong Huang, Zhen Zhao, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao. Depth Anything V2. arXiv:2406.09414, 2024. 11, 13, 14