Towards Robust Content Watermarking Against Removal and Forgery Attacks
Pith reviewed 2026-05-10 18:02 UTC · model grok-4.3
The pith
A novel watermarking method for AI-generated images resists removal and forgery by dynamically adapting injection to prompt semantics and applying two-sided detection.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
We build a novel watermarking paradigm called Instance-Specific watermarking with Two-Sided detection (ISTS) to resist removal and forgery attacks. Specifically, we introduce a strategy that dynamically controls the injection time and watermarking patterns based on the semantics of users' prompts. Furthermore, we propose a new two-sided detection approach to enhance robustness in watermark detection. Experiments have demonstrated the superiority of our watermarking against removal and forgery attacks.
What carries the argument
The Instance-Specific watermarking with Two-Sided detection (ISTS) paradigm, which dynamically controls injection timing and patterns based on prompt semantics and pairs this with two-sided detection for verification.
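The mechanism identified here can be sketched in a few lines. Nothing below comes from the paper — the encoder, the clustering, the schedule rule, and every name (`semantic_cluster`, `injection_schedule`, `NUM_CLUSTERS`) are hypothetical stand-ins — but semantics-driven injection could plausibly work like this: map the prompt to a semantic cluster, then let the cluster index choose both the diffusion timestep at which the watermark is injected and the bit pattern that is injected.

```python
import hashlib
import random

NUM_CLUSTERS = 8     # hypothetical number of semantic clusters
MAX_TIMESTEP = 1000  # typical DDPM reverse-process step count


def semantic_cluster(prompt: str) -> int:
    """Stand-in for a real text-encoder + clustering step.

    A deployed system would embed the prompt (e.g. with a CLIP text
    encoder) and assign it to a learned cluster; here we hash the
    prompt so the sketch stays self-contained and deterministic.
    """
    digest = hashlib.sha256(prompt.encode("utf-8")).digest()
    return digest[0] % NUM_CLUSTERS


def injection_schedule(prompt: str) -> tuple[int, list[int]]:
    """Map a prompt to (injection timestep, watermark bit pattern)."""
    c = semantic_cluster(prompt)
    # Hypothetical rule: later clusters inject earlier in the reverse process.
    t_inject = MAX_TIMESTEP - (c + 1) * (MAX_TIMESTEP // (NUM_CLUSTERS + 1))
    # Each cluster gets its own fixed pseudo-random 64-bit pattern.
    rng = random.Random(c)
    pattern = [rng.randint(0, 1) for _ in range(64)]
    return t_inject, pattern


t, bits = injection_schedule("a watercolor painting of a fox")
assert 0 < t < MAX_TIMESTEP and len(bits) == 64
```

A real system would replace the hash with a text-encoder embedding and learned cluster centroids; the point is only that the same prompt deterministically fixes both when and what to inject, which is what makes the watermark instance-specific.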
If this is right
- Watermarks remain detectable after common removal and forgery attempts on generated images.
- Image quality stays comparable to unwatermarked outputs under the dynamic injection strategy.
- The method provides a unified defense against both removal and forgery in one framework.
- Detection works reliably across different text-to-image diffusion models without model-specific retraining.
Where Pith is reading between the lines
- The semantic adaptation might allow the same framework to handle prompts in other generative domains such as video or 3D content.
- If two-sided detection generalizes, it could reduce false positives in large-scale content verification systems.
- Integration with existing provenance tools might create end-to-end tracking from prompt to final image without extra overhead.
Load-bearing premise
Dynamically adjusting watermark injection time and patterns according to prompt semantics, combined with two-sided detection, will preserve image quality and deliver robust protection against attacks without creating new vulnerabilities or high computational costs.
What would settle it
A test applying standard removal or forgery attacks to ISTS-watermarked images where the detection success rate drops below existing methods or where standard image quality metrics show clear degradation compared to unwatermarked outputs.
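The paper's detection statistic is not given in the text above, so the following is one speculative reading of what a "two-sided" rule could mean under instance-specific watermarking: a genuine image must carry a mark that is both statistically present and consistent with the pattern its own semantics would have produced, while a strong match to some *other* instance's pattern signals a forgery. All names and thresholds here (`agreement_z`, `z_thresh=4.0`) are assumptions for illustration, not the paper's method.

```python
from math import sqrt

N = 64  # watermark length in bits (hypothetical)


def agreement_z(extracted: list[int], expected: list[int]) -> float:
    """z-score of bit agreement against the no-watermark null Binomial(N, 1/2)."""
    k = sum(int(a == b) for a, b in zip(extracted, expected))
    return (k - N / 2) / sqrt(N / 4)


def two_sided_decision(extracted, semantic_pattern, other_known_patterns,
                       z_thresh=4.0):
    """Hypothetical two-sided rule.

    Side 1 (removal): the mark must be statistically present for the
    pattern this image's semantics imply.
    Side 2 (forgery): a strong match to a *different* instance's
    pattern means the mark was likely lifted from another image.
    """
    if agreement_z(extracted, semantic_pattern) >= z_thresh:
        return "genuine watermark"
    if any(agreement_z(extracted, p) >= z_thresh for p in other_known_patterns):
        return "suspected forgery"
    return "no watermark"
```

On this reading, removal attacks push the genuine-side statistic below threshold, while forgery attacks produce a mark that fails the semantic-consistency side even when it would pass a naive one-sided presence test.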
Original abstract
Generated contents have raised serious concerns about copyright protection, image provenance, and credit attribution. A potential solution for these problems is watermarking. Recently, content watermarking for text-to-image diffusion models has been studied extensively for its effective detection utility and robustness. However, these watermarking techniques are vulnerable to potential adversarial attacks, such as removal attacks and forgery attacks. In this paper, we build a novel watermarking paradigm called Instance-Specific watermarking with Two-Sided detection (ISTS) to resist removal and forgery attacks. Specifically, we introduce a strategy that dynamically controls the injection time and watermarking patterns based on the semantics of users' prompts. Furthermore, we propose a new two-sided detection approach to enhance robustness in watermark detection. Experiments have demonstrated the superiority of our watermarking against removal and forgery attacks.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes Instance-Specific watermarking with Two-Sided detection (ISTS) for text-to-image diffusion models. It dynamically controls watermark injection timing and patterns according to prompt semantics and introduces a two-sided detection mechanism to improve robustness against removal and forgery attacks. The central claim is that experiments demonstrate the superiority of ISTS over prior watermarking techniques in resisting these attacks while preserving image quality.
Significance. If the reported experimental comparisons hold, ISTS would constitute a meaningful advance in robust content watermarking for generative models, directly addressing copyright, provenance, and attribution challenges. The semantics-driven dynamic injection and two-sided detection represent concrete technical innovations that could be adopted or extended by subsequent work.
Minor comments (3)
- [Abstract] The claim of experimental superiority is stated without quantitative metrics, baseline names, or attack descriptions; adding one or two key numbers (e.g., detection rates under removal attacks) would make the abstract self-contained.
- [Method] The precise definition of the two-sided detection statistic and the decision rule for declaring a watermark present or absent should be stated explicitly (ideally as a short equation or algorithm box) so that the robustness claims can be reproduced from the text alone.
- [Experiments] While comparisons are reported, the manuscript should include a table summarizing attack parameters (strength, number of queries, etc.) and statistical significance tests for the superiority claims.
Simulated Author's Rebuttal
We thank the referee for the positive assessment of our ISTS watermarking approach and the recommendation for minor revision. We are pleased that the semantics-driven dynamic injection and two-sided detection are viewed as meaningful technical contributions for addressing removal and forgery attacks in diffusion models.
Circularity Check
No significant circularity
Full rationale
The paper proposes the ISTS watermarking paradigm as a novel construction (dynamic, semantics-based injection timing and patterns plus two-sided detection) and supports its superiority claim solely via experimental comparisons against removal and forgery attacks. No equations, derivations, fitted parameters presented as predictions, or self-citation chains appear in the provided text; the central claims rest on the described method and external empirical results rather than reducing to self-definitional inputs or renamed known patterns. This is the expected non-circular outcome for a method-and-experiments paper without load-bearing mathematical reductions.