Recognition: 2 theorem links
· Lean TheoremRelightable Gaussian Splatting for Virtual Production Using Image-Based Illumination
Pith reviewed 2026-05-12 01:47 UTC · model grok-4.3
The pith
Gaussian splatting relights virtual production scenes by directly sampling known LED background textures through per-primitive parameters.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central claim is that a Gaussian Splatting pipeline conditioned on known VP background imagery can separate fixed scene appearance from variable lighting, parameterize light transport per primitive with UV coordinates, intensity, and mipmap resolution modifiers, and thereby render relit scenes without environment maps or physically-based rendering by directly sampling the background texture in image space.
What carries the argument
Per-primitive parameterization of UV coordinates, intensity values, and resolution modifiers that enable direct mipmap-based sampling of the known background texture in image space to model lighting.
If this is right
- The decomposition into fixed appearance and variable lighting lets background content be edited independently of scene lighting.
- Rendering runs in under 2 hours training time and achieves about 35 frames per second with low memory use.
- The rasterizer can output depth, lighting intensity, lighting color, and unlit renders as additional channels.
- The method supports near-field, high-resolution image-based lighting typical of VP stages rather than far-field assumptions.
Where Pith is reading between the lines
- Post-production workflows could treat relighting as a simple background-image swap after the initial capture pass.
- The same parameterization might extend to scenes with moving LED content if the UV sampling is updated frame by frame.
- Because no explicit light transport simulation is required, the approach could be combined with existing Gaussian Splatting tools for real-time VP previews.
Load-bearing premise
Direct image-space sampling of the background texture through per-primitive parameters can accurately capture complex light transport phenomena such as reflections and refractions without explicit physics simulation.
What would settle it
Capture a new set of real VP scenes with changed background content on the LED walls, render the relit version using the method, and check whether the output matches ground-truth photographs of the same scene under the new illumination, especially in regions showing reflections or refractions.
Figures
read the original abstract
Virtual production (VP) use LED walls to provide both background imagery and image-based lighting. While this enables on-set compositing, it couples lighting to background and scene appearance, limiting flexibility for downstream editing. In addition, inverse rendering conventionally relies on physically-based rendering to estimates 3D geometry and lighting, using environment maps. However, these maps are typically low-resolution and assume far-field lighting. In VP, with near-field and high-resolution image-based lighting, this can lead to inaccuracies and introduce complexities when editing. Addressing this, we propose a VP-specific framework for 3D reconstruction and relighting using Gaussian Splatting. This uses the known background imagery to condition the relighting process. This avoids relying on environment maps and reduces compositing to a background-image editing task. To realize our framework, we introduce a process (and associated dataset) that captures real VP scenes under varying background content and illumination conditions. This data is used to decompose a 3D scene into fixed appearance and variable lighting components. The variable lighting process simulates light transport by parameterizing each primitive with a UV coordinate, intensity value and resolution modifier. Using mipmaps, these directly sample the background texture in image space - implicitly capturing reflections and refractions without physically-based rendering. Combined with the fixed appearance component, this allows us to render relit scenes using a Gaussian Splatting rasterizer. Compared to baselines, our approach achieves higher-quality 3D reconstruction and controllable relighting. The method is efficient (<3 GB RAM, <5 GB VRAM, <2 hours training, ~35 FPS) and supports rendering useful arbitrary output variables including depth, lighting intensity, lighting color, and unlit renders.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes a Gaussian Splatting framework tailored to virtual production with LED walls, where known background imagery conditions the relighting. The scene is decomposed into a fixed appearance component and a variable lighting component; each 3D Gaussian is augmented with a static UV coordinate, scalar intensity, and mipmap resolution modifier that directly samples the LED-wall texture in image space during rasterization, thereby implicitly encoding light-transport effects (reflections, refractions) without explicit PBR or environment maps. A new capture dataset of real VP scenes under varying backgrounds is introduced to train the decomposition. The method is reported to yield higher-quality reconstruction and controllable relighting than baselines while remaining efficient (<3 GB RAM, <5 GB VRAM, <2 h training, ~35 FPS) and able to output auxiliary maps (depth, lighting intensity/color, unlit).
Significance. If the implicit light-transport assumption holds, the work offers a practical advance for VP pipelines by turning background editing into a compositing task and avoiding the resolution and far-field limitations of environment-map approaches. The efficiency numbers and auxiliary-output capability are concrete strengths for on-set use. The absence of quantitative metrics, however, makes it difficult to gauge the magnitude of the claimed quality gain or the robustness of the decomposition.
major comments (3)
- [Abstract / variable-lighting parameterization] Abstract and method description: the central claim that the fixed per-primitive UV/intensity/resolution-modifier parameterization implicitly captures complex, view-dependent light transport (reflections, refractions) for arbitrary new near-field backgrounds is load-bearing yet unsupported by any derivation, ablation, or ground-truth comparison. Because the UV is optimized once and does not depend on surface normal or view direction, any geometric mismatch will produce incorrect incident radiance when the background changes; this directly undermines both the relighting controllability and the reported quality advantage.
- [Abstract] Abstract: the repeated assertion of 'higher-quality 3D reconstruction and controllable relighting' is presented without any quantitative metrics, error tables, PSNR/SSIM values, or statistical comparison against the cited baselines. In the absence of such evidence the superiority claim cannot be evaluated and the efficiency numbers alone do not establish the reconstruction quality.
- [Dataset and decomposition] Dataset and decomposition process: the manuscript provides no concrete description of how the fixed-appearance versus variable-lighting separation is performed, what loss terms enforce the decomposition, or how the captured multi-background sequences are used to validate that the learned mapping generalizes to unseen LED content. These details are required to assess whether the implicit-transport assumption is actually realized.
minor comments (2)
- [Abstract / experiments] The efficiency claims would be strengthened by reporting exact hardware (GPU model, CPU), memory profiling methodology, and per-scene timing breakdowns rather than upper-bound ranges.
- [Method] Notation for the per-Gaussian parameters (UV, intensity, resolution modifier) should be introduced with explicit symbols and a short table relating them to the rasterization pipeline.
Simulated Author's Rebuttal
We thank the referee for their thoughtful review and constructive suggestions. We address each of the major comments below, providing clarifications and indicating the revisions we will make to the manuscript.
read point-by-point responses
-
Referee: [Abstract / variable-lighting parameterization] Abstract and method description: the central claim that the fixed per-primitive UV/intensity/resolution-modifier parameterization implicitly captures complex, view-dependent light transport (reflections, refractions) for arbitrary new near-field backgrounds is load-bearing yet unsupported by any derivation, ablation, or ground-truth comparison. Because the UV is optimized once and does not depend on surface normal or view direction, any geometric mismatch will produce incorrect incident radiance when the background changes; this directly undermines both the relighting controllability and the reported quality advantage.
Authors: We agree that additional justification is needed for how the parameterization implicitly encodes light transport. The UV coordinates, intensity, and mipmap level are optimized jointly with the Gaussian parameters to minimize the difference between rendered and captured images across multiple background conditions. This allows the model to learn an effective mapping that accounts for the observed effects of reflections and refractions in the training data. While the UV is fixed per primitive, the view-dependent nature arises from the splatting and alpha blending in the rasterizer. We will revise the method section to include a more thorough explanation of this mechanism, an ablation study removing individual components of the parameterization, and a discussion of limitations related to geometric accuracy and view-dependent effects. revision: yes
-
Referee: [Abstract] Abstract: the repeated assertion of 'higher-quality 3D reconstruction and controllable relighting' is presented without any quantitative metrics, error tables, PSNR/SSIM values, or statistical comparison against the cited baselines. In the absence of such evidence the superiority claim cannot be evaluated and the efficiency numbers alone do not establish the reconstruction quality.
Authors: The current manuscript emphasizes qualitative visual comparisons and practical efficiency metrics suitable for virtual production workflows. To provide a more rigorous evaluation, we will add quantitative results including PSNR, SSIM, and LPIPS metrics for both novel view synthesis and relighting tasks, with comparisons to the relevant baselines. revision: yes
-
Referee: [Dataset and decomposition] Dataset and decomposition process: the manuscript provides no concrete description of how the fixed-appearance versus variable-lighting separation is performed, what loss terms enforce the decomposition, or how the captured multi-background sequences are used to validate that the learned mapping generalizes to unseen LED content. These details are required to assess whether the implicit-transport assumption is actually realized.
Authors: The separation is enforced by training the model on sequences captured with varying LED wall backgrounds, where the fixed appearance parameters are shared across all conditions and the variable lighting parameters adapt to the background changes. The loss combines photometric reconstruction losses with regularization to encourage the decomposition. We will expand the dataset and method sections with precise details on the capture protocol, the full loss formulation, and results demonstrating generalization to novel background images not seen during training. revision: yes
Circularity Check
No significant circularity; empirical data-driven parameterization with independent validation
full rationale
The paper's core contribution is an empirical decomposition of VP scenes into fixed-appearance and variable-lighting Gaussian primitives, where the latter are optimized from captured multi-background training data to attach static UV, intensity, and mipmap parameters that sample the known LED-wall texture at render time. No equations, uniqueness theorems, or first-principles derivations are presented that reduce to their own inputs by construction. The parameterization is explicitly fitted rather than derived, and the claim of implicit light-transport capture is presented as a modeling assumption tested against baselines and held-out backgrounds, not as a self-referential prediction. No self-citations appear in the load-bearing steps, and the method remains self-contained against external benchmarks (real VP captures, quantitative metrics, and efficiency numbers).
Axiom & Free-Parameter Ledger
free parameters (2)
- per-primitive intensity value
- resolution modifier
axioms (1)
- domain assumption Known background imagery from LED walls provides sufficient conditioning for accurate near-field relighting
Lean theorems connected to this paper
-
IndisputableMonolith/Cost/FunctionalEquation.leanwashburn_uniqueness_aczel unclear?
unclearRelation between the paper passage and the cited Recognition theorem.
each Gaussian is parameterized by a learnable UV coordinate … intensity value and resolution modifier. Using mipmaps, these directly sample the background texture in image space - implicitly capturing reflections and refractions without physically-based rendering
-
IndisputableMonolith/Foundation/AlphaCoordinateFixation.leanJ_uniquely_calibrated_via_higher_derivative unclear?
unclearRelation between the paper passage and the cited Recognition theorem.
decompose a 3D scene into fixed appearance and variable lighting components
What do these tags mean?
- matches
- The paper's claim is directly supported by a theorem in the formal canon.
- supports
- The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends
- The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses
- The paper appears to rely on the theorem as machinery.
- contradicts
- The paper's claim conflicts with a theorem or certificate in the canon.
- unclear
- Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1]
-
[2]
Artificial Intelligence Review , volume=
Intelligent Cinematography: a review of AI research for cinematographic production , author=. Artificial Intelligence Review , volume=. 2025 , publisher=
work page 2025
-
[3]
ACM SIGGRAPH 2024 conference papers , pages=
2d gaussian splatting for geometrically accurate radiance fields , author=. ACM SIGGRAPH 2024 conference papers , pages=
work page 2024
-
[4]
Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages=
Mip-splatting: Alias-free 3d gaussian splatting , author=. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages=
-
[5]
Advances in Neural Information Processing Systems (NeurIPS) , year=
PyTorch: An Imperative Style, High-Performance Deep Learning Library , author=. Advances in Neural Information Processing Systems (NeurIPS) , year=
- [6]
-
[7]
European Conference on Computer Vision , pages=
Relightable 3d gaussians: Realistic point cloud relighting with brdf decomposition and ray tracing , author=. European Conference on Computer Vision , pages=. 2024 , organization=
work page 2024
-
[8]
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=
Gs-ir: 3d gaussian splatting for inverse rendering , author=. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=
-
[9]
Proceedings of the Computer Vision and Pattern Recognition Conference , pages=
Rng: Relightable neural gaussians , author=. Proceedings of the Computer Vision and Pattern Recognition Conference , pages=
-
[10]
Splatography: Sparse multi-view dynamic Gaussian Splatting for filmmaking challenges
Splatography: Sparse multi-view dynamic Gaussian Splatting for filmmaking challenges , author=. arXiv preprint arXiv:2511.05152 , year=
work page internal anchor Pith review Pith/arXiv arXiv
-
[11]
Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages=
Neural 3d video synthesis from multi-view video , author=. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages=
-
[12]
Proceedings of the 32nd ACM International Conference on Multimedia , pages=
Prtgs: Precomputed radiance transfer of gaussian splats for real-time high-quality relighting , author=. Proceedings of the 32nd ACM International Conference on Multimedia , pages=
-
[13]
SIGGRAPH Asia 2024 Conference Papers , pages=
Gs3: Efficient relighting with triple gaussian splatting , author=. SIGGRAPH Asia 2024 Conference Papers , pages=
work page 2024
-
[14]
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence , pages=
2D Gaussian splatting for outdoor scene decomposition and relighting , author=. Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence , pages=
-
[15]
2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) , pages=
LumiGauss: Relightable Gaussian Splatting in the Wild , author=. 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) , pages=. 2025 , organization=
work page 2025
-
[16]
Proceedings of the 33rd ACM International Conference on Multimedia , pages=
Tsgs: Improving gaussian splatting for transparent surface reconstruction via normal and de-lighting priors , author=. Proceedings of the 33rd ACM International Conference on Multimedia , pages=
-
[17]
Proceedings of the IEEE/CVF International Conference on Computer Vision , pages=
Gaussian splatting with discretized sdf for relightable assets , author=. Proceedings of the IEEE/CVF International Conference on Computer Vision , pages=
-
[18]
IEEE Transactions on Pattern Analysis and Machine Intelligence , year=
DeferredGS: Decoupled and relightable Gaussian splatting with deferred shading , author=. IEEE Transactions on Pattern Analysis and Machine Intelligence , year=
-
[19]
arXiv preprint arXiv:2410.02619 , year=
Gi-gs: Global illumination decomposition on gaussian splatting for inverse rendering , author=. arXiv preprint arXiv:2410.02619 , year=
-
[20]
3dsceneeditor: Controllable 3d scene editing with gaussian splatting , author=. arXiv preprint arXiv:2412.01583 , year=
-
[21]
Computational Visual Media , volume=
Recent advances in 3d gaussian splatting , author=. Computational Visual Media , volume=. 2024 , publisher=
work page 2024
-
[22]
European Conference on Computer Vision , pages=
Texture-gs: Disentangling the geometry and texture for 3d gaussian splatting editing , author=. European Conference on Computer Vision , pages=. 2024 , organization=
work page 2024
-
[23]
arXiv preprint arXiv:2311.17907 , year=
Cg3d: Compositional generation for text-to-3d via gaussian splatting , author=. arXiv preprint arXiv:2311.17907 , year=
-
[24]
Gsedit: Efficient text-guided editing of 3d objects via gaussian splatting
Gsedit: Efficient text-guided editing of 3d objects via gaussian splatting , author=. arXiv preprint arXiv:2403.05154 , year=
-
[25]
European conference on computer vision , pages=
View-consistent 3d editing with gaussian splatting , author=. European conference on computer vision , pages=. 2024 , organization=
work page 2024
-
[26]
2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) , pages=
Localized gaussian splatting editing with contextual awareness , author=. 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) , pages=. 2025 , organization=
work page 2025
-
[27]
UniRelight: Learning Joint Decomposition and Synthesis for Video Relighting , author=. arXiv preprint arXiv:2506.15673 , year=
-
[28]
Proceedings of the Computer Vision and Pattern Recognition Conference , pages=
SynthLight: Portrait Relighting with Diffusion Model by Learning to Re-render Synthetic Faces , author=. Proceedings of the Computer Vision and Pattern Recognition Conference , pages=
-
[29]
Proceedings of the IEEE/CVF International Conference on Computer Vision , pages=
LightSwitch: Multi-view Relighting with Material-guided Diffusion , author=. Proceedings of the IEEE/CVF International Conference on Computer Vision , pages=
- [30]
-
[31]
Study of statistical methods for texture analysis and their modern evolutions , author=. Engineering Reports , volume=. 2020 , publisher=
work page 2020
-
[32]
Image Texture Feature Extraction Based on Gabor Transform , author=. Rev. T
- [33]
-
[34]
Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages=
4d gaussian splatting for real-time dynamic scene rendering , author=. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages=
- [35]
- [36]
-
[37]
Proceedings of the 10th annual conference on Computer graphics and interactive techniques , pages=
Pyramidal parametrics , author=. Proceedings of the 10th annual conference on Computer graphics and interactive techniques , pages=
-
[38]
Fundamentals of Texture Mapping and Image Warping , author=. 1986 , school=
work page 1986
-
[39]
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=
Tensoir: Tensorial inverse rendering , author=. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=
-
[40]
European Conference on Computer Vision , pages=
Trinerflet: A wavelet based triplane nerf representation , author=. European Conference on Computer Vision , pages=. 2024 , organization=
work page 2024
-
[41]
European conference on computer vision , pages=
Tensorf: Tensorial radiance fields , author=. European conference on computer vision , pages=. 2022 , organization=
work page 2022
-
[42]
arXiv preprint arXiv:2511.06271 , year=
RelightMaster: Precise Video Relighting with Multi-plane Light Images , author=. arXiv preprint arXiv:2511.06271 , year=
-
[43]
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=
Hexplane: A fast representation for dynamic scenes , author=. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=
-
[44]
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=
K-planes: Explicit radiance fields in space, time, and appearance , author=. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=
-
[45]
Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages=
Efficient geometry-aware 3d generative adversarial networks , author=. Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages=
- [46]
- [47]
- [48]
-
[49]
arXiv preprint arXiv:2311.00571 , year=
LLaVA-Interactive: An All-in-One Demo for Image Chat, Segmentation, Generation and Editing , author=. arXiv preprint arXiv:2311.00571 , year=
-
[50]
ACM Transactions on Graphics (TOG) , volume=
Neural light transport for relighting and view synthesis , author=. ACM Transactions on Graphics (TOG) , volume=. 2021 , publisher=
work page 2021
-
[51]
Proceedings of the IEEE/CVF International Conference on Computer Vision , pages=
Learning indoor inverse rendering with 3d spatially-varying lighting , author=. Proceedings of the IEEE/CVF International Conference on Computer Vision , pages=
-
[52]
Advances in Neural Information Processing Systems , volume=
Stylegan knows normal, depth, albedo, and more , author=. Advances in Neural Information Processing Systems , volume=
-
[53]
IEEE transactions on pattern analysis and machine intelligence , volume=
Accurate, dense, and robust multiview stereopsis , author=. IEEE transactions on pattern analysis and machine intelligence , volume=. 2009 , publisher=
work page 2009
-
[54]
Archives of Psychology , volume=
A technique for the measurement of attitudes , author=. Archives of Psychology , volume=
-
[55]
IEEE transactions on pattern analysis and machine intelligence , volume=
Shape, illumination, and reflectance from shading , author=. IEEE transactions on pattern analysis and machine intelligence , volume=. 2014 , publisher=
work page 2014
- [56]
-
[57]
Physically Controllable Relighting of Photographs , author=. Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers , pages=
-
[58]
arXiv preprint arXiv:2403.20309 (2024)
Instantsplat: Unbounded sparse-view pose-free gaussian splatting in 40 seconds , author=. arXiv preprint arXiv:2403.20309 , volume=
-
[59]
Artificial Intelligence Review , volume=
A review on 3D Gaussian splatting for sparse view reconstruction , author=. Artificial Intelligence Review , volume=. 2025 , publisher=
work page 2025
-
[60]
European Conference on Computer Vision , pages=
Mvsplat: Efficient 3d gaussian splatting from sparse multi-view images , author=. European Conference on Computer Vision , pages=. 2024 , organization=
work page 2024
-
[61]
European Conference on Computer Vision , pages=
Cor-gs: sparse-view 3d gaussian splatting via co-regularization , author=. European Conference on Computer Vision , pages=. 2024 , organization=
work page 2024
-
[62]
Artificial Intelligence , howpublished=
-
[63]
arXiv preprint arXiv:2207.02774 , year=
Local relighting of real scenes , author=. arXiv preprint arXiv:2207.02774 , year=
-
[64]
IEEE Transactions on Pattern Analysis and Machine Intelligence , volume=
Texture features for browsing and retrieval of image data , author=. IEEE Transactions on Pattern Analysis and Machine Intelligence , volume=. 1996 , publisher=
work page 1996
-
[65]
IEEE Transactions on Systems, Man, and Cybernetics , volume=
Textural features for image classification , author=. IEEE Transactions on Systems, Man, and Cybernetics , volume=. 1973 , publisher=
work page 1973
-
[66]
International Conference on Pattern Recognition , pages=
Outdoor Scene Relighting with Diffusion Models , author=. International Conference on Pattern Recognition , pages=. 2025 , organization=
work page 2025
-
[67]
arXiv preprint arXiv:2506.14549 , year=
DreamLight: Towards Harmonious and Consistent Image Relighting , author=. arXiv preprint arXiv:2506.14549 , year=
-
[68]
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=
Inverserendernet: Learning single image inverse rendering , author=. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=
-
[69]
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=
Stylitgan: Image-based relighting via latent control , author=. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=
-
[70]
Proceedings of the European conference on computer vision (ECCV) , pages=
Cgintrinsics: Better intrinsic image decomposition through physically-based rendering , author=. Proceedings of the European conference on computer vision (ECCV) , pages=
-
[71]
Proceedings of the IEEE conference on computer vision and pattern recognition , pages=
Shading annotations in the wild , author=. Proceedings of the IEEE conference on computer vision and pattern recognition , pages=
-
[72]
Generative Models for Computational Relighting , author=. 2024 , school=
work page 2024
-
[73]
European Conference on Computer Vision , pages=
Deep relighting networks for image light source manipulation , author=. European Conference on Computer Vision , pages=. 2020 , organization=
work page 2020
-
[74]
IEEE Transactions on Image Processing , volume=
Designing an illumination-aware network for deep image relighting , author=. IEEE Transactions on Image Processing , volume=. 2022 , publisher=
work page 2022
-
[75]
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=
Physically inspired dense fusion networks for relighting , author=. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages=
-
[76]
arXiv preprint arXiv:2006.07816 , year=
2d image relighting with image-to-image translation , author=. arXiv preprint arXiv:2006.07816 , year=
-
[77]
Proceedings of the Computer Vision and Pattern Recognition Conference , pages=
Generative multiview relighting for 3d reconstruction under extreme illumination variation , author=. Proceedings of the Computer Vision and Pattern Recognition Conference , pages=. 2025 , publisher=
work page 2025
-
[78]
arXiv preprint arXiv:2307.10275 , year=
Survey on controlable image synthesis with deep learning , author=. arXiv preprint arXiv:2307.10275 , year=
-
[79]
European Conference on Computer Vision , pages=
AIM 2020: Scene relighting and illumination estimation challenge , author=. European Conference on Computer Vision , pages=. 2020 , organization=
work page 2020
-
[80]
Computer Graphics Forum , volume=
Deep neural models for illumination estimation and relighting: A survey , author=. Computer Graphics Forum , volume=. 2021 , organization=
work page 2021
discussion (0)
Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.