CV-HoloSR: Hologram to hologram super-resolution through volume-upsampling three-dimensional scenes
Pith reviewed 2026-05-10 16:38 UTC · model grok-4.3
The pith
CV-HoloSR performs hologram-to-hologram super-resolution for volumetric up-sampling of 3D scenes while preserving physically consistent linear depth scaling.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
CV-HoloSR is a complex-valued hologram super-resolution framework built on a Complex-Valued Residual Dense Network and optimized with a depth-aware perceptual reconstruction loss; it preserves physically consistent linear depth scaling during volume up-sampling, recovers sharp high-frequency interference patterns, and adapts to unseen depth ranges and display configurations through complex-valued Low-Rank Adaptation.
What carries the argument
Complex-Valued Residual Dense Network (CV-RDN) with depth-aware perceptual loss, which processes complex-valued hologram data to suppress over-smoothing and quadratic depth distortion.
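The CV-RDN's layer equations are not reproduced on this page; as background, a complex-valued linear layer of the kind such networks stack can be realized with four real operations per complex multiply. A minimal numpy sketch with illustrative shapes (not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Complex-valued "layer": y = W @ x, realized with four real matmuls,
# the standard trick behind complex-valued neural networks.
Wr, Wi = rng.standard_normal((4, 8)), rng.standard_normal((4, 8))
xr, xi = rng.standard_normal(8), rng.standard_normal(8)

yr = Wr @ xr - Wi @ xi          # real part
yi = Wr @ xi + Wi @ xr          # imaginary part

# Must agree with native complex arithmetic.
y = (Wr + 1j * Wi) @ (xr + 1j * xi)
assert np.allclose(yr + 1j * yi, y)
```

Preserving both components, rather than operating on amplitude alone, is what lets such a network keep the phase information a hologram's interference pattern encodes.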
If this is right
- Delivers 32 percent better perceptual realism (LPIPS 0.2001) than prior baselines.
- Adapts a pre-trained backbone to new depth ranges and display setups with only 200 samples.
- Cuts training time by more than 75 percent, from 22.5 hours to 5.2 hours.
- Supports datasets covering large depth ranges at resolutions up to 4K.
- Recovers high-frequency interference patterns without over-smoothing.
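The two efficiency figures above can be sanity-checked directly; note the baseline LPIPS is only implied by the claimed 32% gain, not stated in the abstract:

```python
# Quick consistency check of the headline numbers (taken from the abstract).
t_full, t_lora = 22.5, 5.2            # training hours
reduction = 1 - t_lora / t_full
assert reduction > 0.75               # "over 75%": about 76.9%

lpips_ours = 0.2001
improvement = 0.32                    # claimed relative gain
lpips_baseline = lpips_ours / (1 - improvement)   # implied baseline, about 0.294
print(f"{reduction:.1%}", f"{lpips_baseline:.3f}")
```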
Where Pith is reading between the lines
- The same complex-valued backbone could be tested on other wave-based imaging tasks such as radar or acoustic holography.
- If inference speed is further optimized, the method might support real-time upsampling for live holographic video.
- Scaling the approach to even larger target volumes would test whether the linear-depth property holds without additional regularization.
- The large-depth-range dataset introduced here could serve as a shared benchmark for future holographic upsampling work.
Load-bearing premise
Complex-valued operations together with the depth-aware loss are enough to remove quadratic depth distortion and produce physically consistent linear depth scaling.
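Why naive spatial up-sampling produces a quadratic depth error can be seen from Fresnel scaling: stretching a lens phase laterally by a factor s moves its focus from z to s²z. A hedged numpy illustration, with assumed wavelength and aperture values (the paper's exact distortion model is not given here):

```python
import numpy as np

lam = 520e-9                      # wavelength [m] (assumed)
x = np.linspace(-1e-3, 1e-3, 2048)

def fresnel_phase(x, z):
    # Quadratic phase of a Fresnel lens focusing at depth z.
    return np.pi * x**2 / (lam * z)

z, s = 0.10, 2.0                  # focal depth 10 cm, 2x lateral stretch
stretched = fresnel_phase(x / s, z)       # pattern magnified by s
# The magnified fringes are indistinguishable from a lens focusing at s^2 * z.
assert np.allclose(stretched, fresnel_phase(x, s**2 * z))
```

A method that only enforces image fidelity cannot distinguish the two; the claim is that complex-valued processing plus the depth-aware loss pins the focus to the intended linear position instead.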
What would settle it
Real optical reconstructions in which the measured focal planes deviate from the expected linear depth positions after volume upsampling.
Original abstract
Existing hologram super-resolution (HSR) methods primarily focus on angle-of-view expansion. Adapting them for volumetric spatial up-sampling introduces severe quadratic depth distortion, degrading 3D focal accuracy. We propose CV-HoloSR, a complex-valued HSR framework specifically designed to preserve physically consistent linear depth scaling during volume up-sampling. Built upon a Complex-Valued Residual Dense Network (CV-RDN) and optimized with a novel depth-aware perceptual reconstruction loss, our model effectively suppresses over-smoothing to recover sharp, high-frequency interference patterns. To support this, we introduce a comprehensive large-depth-range dataset with resolutions up to 4K. Furthermore, to overcome the inherent depth bias of pre-trained encoders when scaling to massive target volumes, we integrate a parameter-efficient fine-tuning strategy utilizing complex-valued Low-Rank Adaptation (LoRA). Extensive numerical and physical optical experiments demonstrate our method's superiority. CV-HoloSR achieves a 32% improvement in perceptual realism (LPIPS of 0.2001) over state-of-the-art baselines. Additionally, our tailored LoRA strategy requires merely 200 samples, reducing training time by over 75% (from 22.5 to 5.2 hours) while successfully adapting the pre-trained backbone to unseen depth ranges and novel display configurations.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces CV-HoloSR, a complex-valued framework for hologram super-resolution focused on volumetric spatial up-sampling of 3D scenes. It proposes a Complex-Valued Residual Dense Network (CV-RDN) trained with a depth-aware perceptual reconstruction loss to suppress quadratic depth distortion and over-smoothing, a new large-depth-range dataset up to 4K resolution, and complex-valued LoRA for parameter-efficient adaptation to unseen depths and display configurations. The central claims are a 32% LPIPS improvement (to 0.2001) over state-of-the-art baselines plus over 75% training-time reduction (to 5.2 hours with 200 samples), validated via numerical and physical optical experiments.
Significance. If the physical-consistency claims hold, the work could advance holographic 3D displays by enabling accurate high-resolution volumetric reconstructions without depth warping. The parameter-efficient LoRA adaptation and new dataset are practical strengths that could support reproducible follow-on research in computer graphics and optics.
major comments (2)
- [Abstract and experimental results] The central claim that CV-RDN plus the depth-aware loss 'preserves physically consistent linear depth scaling' and 'suppresses quadratic depth distortion' in real optical experiments lacks any reported quantitative metric for depth fidelity (e.g., measured-vs-target depth slope, R² of linearity, focal-plane error, or residual quadratic term). Only LPIPS is provided, which addresses perceptual quality rather than geometric accuracy and therefore does not directly substantiate the load-bearing physical-consistency assertion.
- [Experimental evaluation] Insufficient detail is given on baseline implementations, dataset construction (size, depth-range sampling, hologram generation method), error bars, and ablation studies isolating the contribution of complex-valued operations versus the depth-aware loss. These omissions make it impossible to verify the reported 32% LPIPS gain or the LoRA efficiency claims under controlled conditions.
minor comments (2)
- [Method] Notation for complex-valued operations and the precise formulation of the depth-aware loss should be clarified with explicit equations to aid reproducibility.
- [Figures] Figure captions and axis labels in the optical reconstruction results could be expanded to indicate the exact depth ranges and display parameters tested.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed feedback. We address each major comment below and indicate the revisions planned for the manuscript.
Point-by-point responses
Referee: [Abstract and experimental results] The central claim that CV-RDN plus the depth-aware loss 'preserves physically consistent linear depth scaling' and 'suppresses quadratic depth distortion' in real optical experiments lacks any reported quantitative metric for depth fidelity (e.g., measured-vs-target depth slope, R² of linearity, focal-plane error, or residual quadratic term). Only LPIPS is provided, which addresses perceptual quality rather than geometric accuracy and therefore does not directly substantiate the load-bearing physical-consistency assertion.
Authors: We acknowledge that the manuscript relies on visual inspection of focused reconstructions in the physical experiments to support claims of linear depth scaling and suppression of quadratic distortion, without providing explicit quantitative depth-fidelity metrics such as slope, R², or focal-plane error. LPIPS was selected to quantify perceptual improvements in hologram quality, but we agree it does not directly measure geometric accuracy. In the revised version we will add quantitative depth analysis from the optical setup, including measured-versus-target depth slopes and linearity statistics computed across multiple focal planes. revision: yes
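The measured-versus-target depth analysis the authors commit to could take a form like the following sketch; the depth values here are wholly synthetic placeholders, standing in for real focal-plane measurements:

```python
import numpy as np

# Depth-fidelity metrics the report asks for: fit measured focal depths
# against targets and report slope, intercept, R^2, and worst focal error.
target = np.array([0.05, 0.10, 0.15, 0.20, 0.25])         # intended depths [m]
measured = np.array([0.051, 0.099, 0.152, 0.199, 0.251])  # from refocusing (synthetic)

slope, intercept = np.polyfit(target, measured, 1)
pred = slope * target + intercept
r2 = 1 - np.sum((measured - pred) ** 2) / np.sum((measured - measured.mean()) ** 2)
focal_err = np.max(np.abs(measured - target))             # worst focal-plane error [m]

# Linear, physically consistent scaling means slope near 1 and R^2 near 1;
# a residual quadratic term would show up as a poor linear fit.
assert abs(slope - 1) < 0.05 and r2 > 0.99
```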
Referee: [Experimental evaluation] Insufficient detail is given on baseline implementations, dataset construction (size, depth-range sampling, hologram generation method), error bars, and ablation studies isolating the contribution of complex-valued operations versus the depth-aware loss. These omissions make it impossible to verify the reported 32% LPIPS gain or the LoRA efficiency claims under controlled conditions.
Authors: We agree that additional implementation and evaluation details are required for reproducibility and verification of the reported gains. The revised manuscript will expand the experimental section to specify: (i) exact adaptations made to baseline HSR methods for volumetric up-sampling, (ii) dataset size, depth-range sampling procedure, and hologram generation parameters (angular spectrum method with given wavelength and pixel pitch), (iii) error bars computed over multiple independent runs, and (iv) ablation tables that isolate complex-valued operations from the depth-aware loss. These additions will allow direct verification of the LPIPS improvement and LoRA training-time reduction. revision: yes
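The angular spectrum method mentioned in (ii) can be sketched in a few lines; the wavelength and pixel pitch below are placeholders, not the paper's parameters:

```python
import numpy as np

def angular_spectrum(field, z, lam=520e-9, pitch=8e-6):
    # Free-space propagation of a complex field over distance z via the
    # angular spectrum transfer function (evanescent waves suppressed).
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1 - (lam * FX) ** 2 - (lam * FY) ** 2
    kz = 2 * np.pi / lam * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * z * kz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

u0 = np.ones((64, 64), dtype=complex)        # plane wave
u1 = angular_spectrum(u0, z=0.05)
u0_back = angular_spectrum(u1, z=-0.05)      # propagate back
assert np.allclose(u0_back, u0, atol=1e-8)   # round trip recovers the input
```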
Circularity Check
No circularity: derivation relies on new architecture, loss, and data-driven evaluation
full rationale
The paper proposes CV-RDN with a depth-aware perceptual loss and LoRA adaptation, trained on a new large-depth-range dataset, then reports LPIPS gains and training-time reductions from numerical and optical experiments. No load-bearing step reduces a claimed result to a fitted parameter, self-citation chain, or input by construction; the central claims rest on empirical metrics rather than algebraic equivalence to the method's own definitions.
Axiom & Free-Parameter Ledger
free parameters (2)
- CV-RDN network weights
- Complex LoRA adaptation parameters
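How these two parameter groups interact under complex-valued LoRA can be sketched as follows; dimensions and rank are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Complex-valued LoRA: freeze a complex backbone weight W and learn a
# low-rank complex update B @ A with rank r << min(d_out, d_in).
d_out, d_in, r = 64, 64, 4
W = rng.standard_normal((d_out, d_in)) + 1j * rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in)) + 1j * rng.standard_normal((r, d_in))
B = np.zeros((d_out, r), dtype=complex)      # B starts at zero, as in LoRA

def adapted(x):
    return W @ x + B @ (A @ x)               # only A and B are trained

x = rng.standard_normal(d_in) + 1j * rng.standard_normal(d_in)
# With B = 0 the adapted layer reproduces the frozen backbone exactly.
assert np.allclose(adapted(x), W @ x)
# The trainable parameters are a small fraction of the full layer.
assert (A.size + B.size) < 0.2 * W.size
```

This is why few-shot adaptation (200 samples in the paper's claim) is plausible: only the low-rank factors move, while the backbone's learned fringe statistics stay fixed.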
axioms (2)
- domain assumption: Complex-valued representations preserve the phase and interference patterns required for physically accurate holograms
- domain assumption: The depth-aware perceptual reconstruction loss correctly penalizes deviations from linear depth scaling
invented entities (3)
- CV-RDN (Complex-Valued Residual Dense Network): no independent evidence
- Depth-aware perceptual reconstruction loss: no independent evidence
- Complex-valued LoRA: no independent evidence