GuardMarkGS: Unified Ownership Tracing and Edit Deterrence for 3D Gaussian Splatting
Pith reviewed 2026-05-14 20:18 UTC · model grok-4.3
The pith
A single optimization framework embeds ownership watermarks into 3D Gaussian Splatting while diverting unauthorized edits.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central claim is that a scene-wide watermarking objective, combined with an adversarial edit-deterrence objective and balanced through an update-saliency-motivated Gaussian selection strategy, yields 3DGS representations that support reliable ownership tracing, resist instruction-driven editing, and preserve rendering fidelity.
What carries the argument
The update-saliency-motivated Gaussian selection strategy, which assigns stronger adversarial updates to mask-selected Gaussians and operates together with latent-anchor separation, denoising-trajectory diversion, and cross-attention diversion.
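The selection step can be sketched in a few lines. Everything below is an illustrative reconstruction, not the paper's code: the function names, the keep fraction, and the damping factor for non-selected Gaussians are assumptions.

```python
import numpy as np

def saliency_mask(update_norms, keep_fraction=0.2):
    """Mark the Gaussians whose (hypothetical) editing-update
    magnitudes are largest; these receive the strongest
    adversarial perturbations."""
    k = max(1, int(len(update_norms) * keep_fraction))
    # k-th largest update magnitude serves as the cut-off
    threshold = np.partition(update_norms, -k)[-k]
    return update_norms >= threshold

def scale_adversarial_grads(adv_grads, mask, strong=1.0, weak=0.1):
    """Full-strength adversarial gradients on mask-selected Gaussians,
    damped gradients elsewhere, so deterrence concentrates where
    editing pressure is highest while fidelity is preserved."""
    scale = np.where(mask, strong, weak)
    return adv_grads * scale[:, None]
```

Ties at the threshold can push the mask slightly above `keep_fraction`; a real implementation would break ties deterministically.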
If this is right
- Ownership can be traced after unauthorized release through high bit-accuracy watermark recovery.
- Instruction-driven editing attempts are diverted, lowering the chance of successful malicious changes.
- Rendering quality stays comparable to unprotected models on benchmarks such as Mip-NeRF 360.
- Both protections are achieved inside a single training loop rather than through separate post-processing.
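The last bullet, both protections inside a single training loop, amounts to optimizing one scalar objective. The term names and default weights below are illustrative assumptions, not values from the paper; the two lambdas correspond to the "objective balancing weights" the ledger later lists as free parameters.

```python
def guardmark_objective(render_loss, watermark_loss, edit_diversion_loss,
                        lam_wm=1.0, lam_adv=0.5):
    """One scalar objective optimized in a single loop: rendering
    fidelity, scene-wide watermark recovery, and adversarial edit
    deterrence, balanced by the free weights lam_wm and lam_adv."""
    return render_loss + lam_wm * watermark_loss + lam_adv * edit_diversion_loss
```

Because both protection terms share one loop, any change to the balancing weights trades watermark recovery against edit deterrence directly, which is why the selection strategy above is load-bearing.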
Where Pith is reading between the lines
- The same joint-objective structure could be tested on other 3D scene representations that use explicit primitives.
- Editing algorithms may evolve to include explicit countermeasures against trajectory diversion.
- Widespread use could change licensing practices for shared 3D assets by making verification built-in.
- Further trials with varied editing prompts would clarify the range of instructions the deterrence covers.
Load-bearing premise
The combined watermarking and adversarial objectives can be balanced via Gaussian selection without introducing artifacts that advanced editing methods could bypass or that would degrade rendering fidelity below acceptable levels.
What would settle it
An experiment in which a new editing method produces high-quality modified 3DGS outputs while watermark bit accuracy falls below reliable detection thresholds.
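A "reliable detection threshold" for bit accuracy can be made precise with a binomial tail bound. This sketch assumes a fixed-length message and that each bit decoded from an unwatermarked model is a fair coin flip under the null; neither assumption is stated in the paper.

```python
from math import comb

def bit_accuracy(decoded, embedded):
    """Fraction of watermark bits recovered correctly."""
    matches = sum(d == e for d, e in zip(decoded, embedded))
    return matches / len(embedded)

def detection_threshold(n_bits, fpr=1e-6):
    """Smallest bit accuracy at which an unwatermarked model would
    match this well by chance with probability <= fpr, assuming each
    of the n_bits decodes to a fair coin flip under the null."""
    for k in range(n_bits + 1):
        tail = sum(comb(n_bits, j) for j in range(k, n_bits + 1)) / 2 ** n_bits
        if tail <= fpr:
            return k / n_bits
    return 1.0
```

The decisive experiment would be an editor that drives `bit_accuracy` below `detection_threshold` while the edited renders remain high quality.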
Original abstract
3D Gaussian Splatting (3DGS) is becoming a practical representation for novel view synthesis, but its growing adoption, together with rapid advances in instruction-driven 3DGS editing, also exposes a dual copyright risk: once a 3DGS-based asset is released, it can be used without permission and manipulated through 3D editing. Existing protection methods address only one side of this problem. Watermarking can trace ownership after unauthorized use, but it cannot prevent malicious editing. Adversarial edit-deterrence methods can disrupt editing, but they do not provide evidence of ownership. To the best of our knowledge, we present the first unified protection framework for 3DGS that jointly optimizes ownership tracing and unauthorized editing deterrence. Our framework combines a scene-wide watermarking objective over all Gaussians with an adversarial objective for edit deterrence. The adversarial branch combines latent-anchor separation, denoising-trajectory diversion, and cross-attention diversion to divert the editing trajectory, while an update-saliency-motivated Gaussian selection strategy assigns stronger adversarial updates to mask-selected Gaussians, improving the balance among watermark recovery, edit deterrence, and rendering fidelity. Experiments on scenes from Mip-NeRF 360 and Instruct-NeRF2NeRF demonstrate that the proposed framework achieves a favorable balance among bit accuracy, edit deterrence, and rendering quality. These results suggest that practical copyright protection of 3DGS-based assets can be more effectively addressed by integrating ownership tracing and unauthorized editing deterrence into a single optimization framework.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces GuardMarkGS as the first unified framework for 3D Gaussian Splatting (3DGS) that jointly optimizes ownership tracing via scene-wide watermarking and unauthorized editing deterrence via an adversarial branch. The adversarial objectives combine latent-anchor separation, denoising-trajectory diversion, and cross-attention diversion, with an update-saliency-motivated Gaussian selection strategy to assign stronger updates to selected Gaussians and balance watermark recovery, edit deterrence, and rendering fidelity. Experiments on Mip-NeRF 360 and Instruct-NeRF2NeRF scenes are reported to achieve a favorable balance among bit accuracy, edit deterrence, and rendering quality.
Significance. If the joint optimization and selection strategy prove robust, the work would provide a timely practical advance for copyright protection of 3DGS assets by addressing both tracing after unauthorized use and prevention of instruction-driven edits within one framework, where prior methods handled only one aspect. The combination of multiple diversion mechanisms with saliency-based masking could offer better trade-offs than separate watermarking or adversarial approaches.
major comments (3)
- [Abstract] Abstract: The claim of achieving a 'favorable balance' on Mip-NeRF 360 and Instruct-NeRF2NeRF scenes is presented without any quantitative metrics (bit accuracy, PSNR, edit success rates), baselines, ablation studies, or error bars. This leaves the central claim of effective joint optimization without verifiable support in the provided summary.
- [Framework description] Framework and selection strategy: The update-saliency-motivated Gaussian selection is load-bearing for balancing the objectives, yet no analysis shows robustness to adaptive editors that could target non-selected Gaussians to bypass deterrence while preserving watermark recovery and fidelity. This directly affects the weakest assumption noted in the review.
- [Adversarial branch] Adversarial components: Latent-anchor separation, denoising-trajectory diversion, and cross-attention diversion are introduced as new elements without explicit reduction to prior parameters or a derivation showing they are necessary and non-redundant for the unified claim.
minor comments (1)
- [Abstract] Abstract: Include at least one concrete numerical result or pointer to a results table to substantiate the 'favorable balance' statement.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our manuscript. We address each major comment point by point below, proposing targeted revisions to improve clarity and support for our claims while maintaining the core contributions of the unified GuardMarkGS framework.
Point-by-point responses
- Referee: [Abstract] Abstract: The claim of achieving a 'favorable balance' on Mip-NeRF 360 and Instruct-NeRF2NeRF scenes is presented without any quantitative metrics (bit accuracy, PSNR, edit success rates), baselines, ablation studies, or error bars. This leaves the central claim of effective joint optimization without verifiable support in the provided summary.
Authors: We agree that the abstract would benefit from concrete quantitative support to make the central claim immediately verifiable. In the revised manuscript, we will incorporate key metrics from our experiments (e.g., average bit accuracy, PSNR for rendering fidelity, and edit success rates) along with brief baseline comparisons, while respecting abstract length limits. These values are already reported in detail in the experimental section and tables of the full paper. revision: yes
- Referee: [Framework description] Framework and selection strategy: The update-saliency-motivated Gaussian selection is load-bearing for balancing the objectives, yet no analysis shows robustness to adaptive editors that could target non-selected Gaussians to bypass deterrence while preserving watermark recovery and fidelity. This directly affects the weakest assumption noted in the review.
Authors: The saliency-based selection is motivated by the observation that editing updates concentrate on a subset of Gaussians; our experiments validate the resulting trade-offs under standard (non-adaptive) editing pipelines from Instruct-NeRF2NeRF. We acknowledge that explicit robustness analysis against adaptive editors deliberately targeting non-selected Gaussians is not provided. We will add a dedicated limitations paragraph discussing this assumption and its implications for future work. revision: partial
- Referee: [Adversarial branch] Adversarial components: Latent-anchor separation, denoising-trajectory diversion, and cross-attention diversion are introduced as new elements without explicit reduction to prior parameters or a derivation showing they are necessary and non-redundant for the unified claim.
Authors: We will revise the framework section to include explicit derivations linking each diversion mechanism to prior diffusion and attention parameters, together with an ablation study quantifying their individual and joint contributions. This will demonstrate necessity and non-redundancy within the unified optimization. revision: yes
- Deferred to future work: comprehensive empirical analysis of robustness against adaptive editors that specifically target non-selected Gaussians.
Circularity Check
No circularity: novel components introduced without reduction to inputs or self-citations
Full rationale
The paper proposes a new unified framework combining watermarking over all Gaussians with adversarial objectives (latent-anchor separation, denoising-trajectory diversion, cross-attention diversion) and an update-saliency-motivated Gaussian selection strategy. No equations or derivations in the abstract reduce by construction to fitted parameters, prior self-citations, or renamed known results. The selection rule and diversion mechanisms are presented as original contributions that balance objectives without being forced by definition or external self-referential theorems. As a new optimization approach for 3DGS protection, the framework stands on its own and is evaluated against external benchmarks.
Axiom & Free-Parameter Ledger
free parameters (2)
- objective balancing weights
- update-saliency threshold
axioms (2)
- domain assumption: Instruction-driven editing pipelines such as Instruct-NeRF2NeRF follow predictable denoising and attention trajectories that can be diverted by the proposed objectives.
- domain assumption: Scene-wide watermark embedding is compatible with high-fidelity rendering when combined with adversarial updates.
invented entities (3)
- latent-anchor separation: no independent evidence
- denoising-trajectory diversion: no independent evidence
- cross-attention diversion: no independent evidence
Lean theorems connected to this paper
- IndisputableMonolith/Foundation/RealityFromDistinction.lean · reality_from_one_distinction · tagged unclear
The relation between the paper passage and the cited Recognition theorem is unclear. Linked passage: "adversarial branch combines latent-anchor separation, denoising-trajectory diversion, and cross-attention diversion"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 42(4), July 2023.
- [2] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision – ECCV 2020, pages 405–421. Springer International Publishing, 2020.
- [3] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph., 41(4), July 2022. doi: 10.1145/3528223.3530127.
- [4] Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, and Bo Dai. Scaffold-gs: Structured 3d gaussians for view-adaptive rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20654–20664, 2024.
- [5] Ayaan Haque, Matthew Tancik, Alexei A. Efros, Aleksander Holynski, and Angjoo Kanazawa. Instruct-nerf2nerf: Editing 3d scenes with instructions. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 19740–19750, October 2023.
- [6] Minghao Chen, Iro Laina, and Andrea Vedaldi. DGE: direct gaussian 3d editing by consistent multi-view editing. In ECCV (74), pages 74–92, 2024.
- [7] Yiwen Chen, Zilong Chen, Chi Zhang, Feng Wang, Xiaofeng Yang, Yikai Wang, Zhongang Cai, Lei Yang, Huaping Liu, and Guosheng Lin. GaussianEditor: Swift and controllable 3d editing with gaussian splatting. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 21476–21485, June 2024.
- [8] Jing Wu, Jia-Wang Bian, Xinghui Li, Guangrun Wang, Ian D. Reid, Philip Torr, and Victor Adrian Prisacariu. Gaussctrl: Multi-view consistent text-driven 3d gaussian splatting editing. In ECCV (14), pages 55–71, 2024.
- [9] Dong In Lee, Hyeongcheol Park, Jiyoung Seo, Eunbyung Park, Hyunje Park, Ha Dam Baek, Sangheon Shin, Sangmin Kim, and Sangpil Kim. Editsplat: Multi-view fusion and attention-guided optimization for view-consistent 3d scene editing with 3d gaussian splatting. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), pages 11135–11145, 2025.
- [10] Can Wang, Menglei Chai, Mingming He, Dongdong Chen, and Jing Liao. Clip-nerf: Text-and-image driven manipulation of neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3835–3844, June 2022.
- [11] Jingyu Zhuang, Chen Wang, Liang Lin, Lingjie Liu, and Guanbin Li. Dreameditor: Text-driven 3d scene editing with neural fields. In SIGGRAPH Asia 2023 Conference Papers. Association for Computing Machinery, 2023. doi: 10.1145/3610548.3618190.
- [12] Etai Sella, Gal Fiebelman, Peter Hedman, and Hadar Averbuch-Elor. Vox-e: Text-guided voxel editing of 3d objects. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 430–440, October 2023.
- [13] Jiren Zhu, Russell Kaplan, Justin Johnson, and Li Fei-Fei. Hidden: Hiding data with deep networks. In Computer Vision – ECCV 2018: 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part XV, pages 682–697. Springer-Verlag, 2018. ISBN 978-3-030-01266-3. doi: 10.1007/978-3-030-01267-0_40.
- [14] Pierre Fernandez, Guillaume Couairon, Hervé Jégou, Matthijs Douze, and Teddy Furon. The stable signature: Rooting watermarks in latent diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 22466–22477, October 2023.
- [15] Yuxin Wen, John Kirchenbauer, Jonas Geiping, and Tom Goldstein. Tree-rings watermarks: Invisible fingerprints for diffusion images. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 58047–58063. Curran Associates, Inc., 2023.
- [16] Huayang Huang, Yu Wu, and Qian Wang. Robin: Robust and invisible watermarks for diffusion models with adversarial optimization. In A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, editors, Advances in Neural Information Processing Systems, volume 37, pages 3937–3963. Curran Associates, Inc., 2024. doi: 10.52202/079017-0129.
- [17] Chenxin Li, Brandon Y. Feng, Zhiwen Fan, Panwang Pan, and Zhangyang Wang. Steganerf: Embedding invisible information within neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 441–453, October 2023.
- [18] Ziyuan Luo, Qing Guo, Ka Chun Cheung, Simon See, and Renjie Wan. Copyrnerf: Protecting the copyright of neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 22401–22411, October 2023.
- [19] Youngdong Jang, Dong In Lee, MinHyuk Jang, Jong Wook Kim, Feng Yang, and Sangpil Kim. Waterf: Robust watermarks in radiance fields for protection of copyrights. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12087–12097, 2024.
- [20] Hadi Salman, Alaa Khaddaj, Guillaume Leclerc, Andrew Ilyas, and Aleksander Mądry. Raising the cost of malicious ai-powered image editing. In Proceedings of the 40th International Conference on Machine Learning, 2023.
- [21] Chumeng Liang, Xiaoyu Wu, Yang Hua, Jiaru Zhang, Yiming Xue, Tao Song, Zhengui Xue, Ruhui Ma, and Haibing Guan. Adversarial example does good: preventing painting imitation from diffusion models via adversarial examples. In Proceedings of the 40th International Conference on Machine Learning, 2023.
- [22] Joonsung Jeon, Woo Jae Kim, Suhyeon Ha, Sooel Son, and Sung-Eui Yoon. Advpaint: Protecting images from inpainting manipulation via adversarial attention disruption. In The Thirteenth International Conference on Learning Representations, 2025.
- [23] Chumeng Liang and Xiaoyu Wu. Mist: Towards improved adversarial examples for diffusion models. arXiv preprint arXiv:2305.12683, 2023.
- [24] Shawn Shan, Jenna Cryan, Emily Wenger, Haitao Zheng, Rana Hanocka, and Ben Y. Zhao. Glaze: Protecting artists from style mimicry by text-to-image models. In 32nd USENIX Security Symposium (USENIX Security 23), pages 2187–2204, Anaheim, CA, August 2023. USENIX Association. ISBN 978-1-939133-37-3.
- [25] Lingzhuang Meng, Mingwen Shao, Yuanjian Qiao, and Xiang Lv. DEGauss: Defending against malicious 3d editing for gaussian splatting. In The Thirty-ninth Annual Conference on Neural Information Processing Systems, 2025.
- [26] Youngdong Jang, Hyunje Park, Feng Yang, Heeju Ko, Euijin Choo, and Sangpil Kim. 3d-gsw: 3d gaussian splatting for robust watermarking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5938–5948, June 2025.
- [27] Xiufeng Huang, Ruiqi Li, Yiu-ming Cheung, Ka Chun Cheung, Simon See, and Renjie Wan. Gaussianmarker: Uncertainty-aware copyright protection of 3d gaussian splatting. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
- [28] Zixuan Chen, Guangcong Wang, Jiahao Zhu, Jianhuang Lai, and Xiaohua Xie. Guardsplat: Efficient and robust watermarking for 3d gaussian splatting. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), pages 16325–16335, June 2025.
- [29] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. High-resolution image synthesis with latent diffusion models. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10674–10685, June 2022.
- [30] Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, and Andreas Geiger. Mip-splatting: Alias-free 3d gaussian splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 19447–19456, June 2024.
- [31] Antoine Guédon and Vincent Lepetit. Sugar: Surface-aligned gaussian splatting for efficient 3d mesh reconstruction and high-quality mesh rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5354–5363, June 2024.
- [32] Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, and Shenghua Gao. 2d gaussian splatting for geometrically accurate radiance fields. In ACM SIGGRAPH 2024 Conference Papers, SIGGRAPH '24. Association for Computing Machinery, 2024. ISBN 9798400705250. doi: 10.1145/3641519.3657428.
- [33] Zehao Yu, Torsten Sattler, and Andreas Geiger. Gaussian opacity fields: Efficient adaptive surface reconstruction in unbounded scenes. ACM Trans. Graph., 43(6), November 2024. ISSN 0730-0301. doi: 10.1145/3687937.
- [34] Jiaxiang Tang, Jiawei Ren, Hang Zhou, Ziwei Liu, and Gang Zeng. Dreamgaussian: Generative gaussian splatting for efficient 3d content creation. In B. Kim, Y. Yue, S. Chaudhuri, K. Fragkiadaki, M. Khan, and Y. Sun, editors, International Conference on Learning Representations, volume 2024, pages 33879–33896, 2024.
- [35] Taoran Yi, Jiemin Fang, Junjie Wang, Guanjun Wu, Lingxi Xie, Xiaopeng Zhang, Wenyu Liu, Qi Tian, and Xinggang Wang. Gaussiandreamer: Fast generation from text to 3d gaussians by bridging 2d and 3d diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6796–6807, June 2024.
- [36] Thanh Van Le, Hao Phung, Thuan Hoang Nguyen, Quan Dao, Ngoc N. Tran, and Anh Tran. Anti-dreambooth: Protecting users from personalized text-to-image synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 2116–2127, October 2023.
- [37] Tim Brooks, Aleksander Holynski, and Alexei A. Efros. InstructPix2Pix: Learning to follow image editing instructions. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 18392–18402, June 2023.
- [38] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 6840–6851. Curran Associates, Inc., 2020.
- [39] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. In NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications, 2021. URL https://openreview.net/forum?id=qw8AKxfYbI.
- [40] S. D. Lin and Chin-Feng Chen. A robust dct-based watermarking for copyright protection. IEEE Trans. on Consum. Electron., 46(3):415–421, August 2000. ISSN 0098-3063. doi: 10.1109/30.883387.
- [41] Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
- [42] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollar, and Ross Girshick. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 4015–4026, October 2023.
- [43] Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6038–6047, June 2023.
- [44] Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Prompt-to-prompt image editing with cross-attention control. In International Conference on Learning Representations (ICLR), 2023.
- [45] Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman. Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5470–5479, June 2022.
- [46] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, 2021.
- [47] Zhou Wang, A.C. Bovik, H.R. Sheikh, and E.P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004. doi: 10.1109/TIP.2003.819861.
- [48] Jaehwan Jeong, Sumin In, Sieun Kim, Hannie Shin, Jongheon Jeong, Sang Ho Yoon, Jaewook Chung, and Sangpil Kim. Faceshield: Defending facial image against deepfake threats. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 10364–10374, October 2025.