pith. machine review for the scientific record. sign in

arxiv: 2605.01479 · v1 · submitted 2026-05-02 · 💻 cs.CV

Recognition: unknown

CSGuard: Toward Forgery-Resistant Watermarking in Diffusion Models via Compressed Sensing Constraint

Chen Tang, Hui Jin, Jiewei Lai, Lan Zhang, Pengcheng Sun, Yunhao Wang, Zhaopeng Zhang

Pith reviewed 2026-05-09 14:40 UTC · model grok-4.3

classification 💻 cs.CV
keywords watermarkingdiffusion modelsforgery resistancecompressed sensinglatent spacedigital forensicsimage attribution
0
0 comments X

The pith

A secret matrix tied to compressed sensing makes diffusion model watermarks forgery-resistant by blocking inversion-regeneration attacks.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Existing latent watermarking for diffusion models lets attackers invert a watermarked image and regenerate it with any prompt to create false attribution on new content. CSGuard adds a compressed sensing constraint that requires the watermarked latent to obey a linear measurement defined by a secret matrix. Only the matrix owner can embed or verify correctly because regeneration without the matrix violates the constraint. The approach keeps image quality and legitimate detection rates intact while cutting forgery success sharply. If the binding holds, it supplies a practical way to attribute AI-generated images reliably for IP protection and forensics.

Core claim

CSGuard is the first forgery-resistant watermarking schema that leverages compressed sensing to bind the watermarked image generation and verification to a secret matrix. This ensures that only users possessing the secret matrix can correctly embed or verify the image watermark, prevents the illegal users from forgery without compromising generation quality and watermark integrity.

What carries the argument

The compressed sensing constraint, a linear measurement equation that the watermarked latent representation must satisfy using the secret matrix, which makes successful verification impossible unless the matrix is known during both embedding and checking.

If this is right

  • Forgery attack success rate falls from 100 percent to 28.12 percent.
  • All benign watermarked images continue to be detected correctly.
  • Generated image quality and original watermark strength remain unchanged.
  • Embedding and verification are possible only for holders of the secret matrix.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same matrix-binding idea could be tested on other latent generative models such as GANs or autoregressive image generators.
  • Periodic rotation of the secret matrix would limit damage if one instance is ever compromised.
  • Real-time platforms could adopt the check to flag unattributable or falsely attributed images before they spread.

Load-bearing premise

An attacker cannot recover or approximate the secret matrix from watermarked images or queries, so the constraint stays binding when the image is inverted and regenerated.

What would settle it

An attacker recovers a usable approximation of the secret matrix from several watermarked images and then forges new content that passes verification at a rate well above 28 percent.

Figures

Figures reproduced from arXiv: 2605.01479 by Chen Tang, Hui Jin, Jiewei Lai, Lan Zhang, Pengcheng Sun, Yunhao Wang, Zhaopeng Zhang.

Figure 1
Figure 1. Figure 1: Overview of CSGuard. By enforcing CS consistency on intermediate latents, CSGuard view at source ↗
Figure 2
Figure 2. Figure 2: Watermark performance under varying image distortions of different intensities. view at source ↗
Figure 3
Figure 3. Figure 3: Ablation study on the impact of observa view at source ↗
Figure 4
Figure 4. Figure 4: Watermark performance under varying CS recovery ratios and projection ratios. Top: Projection ratio fixed at 0.4, CS ratio varied. Bottom: CS ratio fixed at 0.8, projection ratio varied. Analysis on Projection Ratio. We an￾alyze the effect of the projection ratio (proj_ratio), which governs the Tproj , with results shown in view at source ↗
Figure 5
Figure 5. Figure 5: Comparative visualization of watermarking results. For each prompt (bottom), the left view at source ↗
Figure 6
Figure 6. Figure 6: TPR/ASR for benign and forged images under varying false positive rate (FPR) thresholds. view at source ↗
Figure 7
Figure 7. Figure 7: TPR/ASR for benign and forged images under varying matries. view at source ↗
read the original abstract

Latent-based diffusion model watermarking embeds watermarks into generated images' latent space to enable content attribution, offering a training-free solution for intellectual property protection and digital forensics. However, these methods exhibit a critical vulnerability to the forgery attack, attackers can extract the watermark by inverting the watermarked image and re-generating it with an arbitrary prompt, thereby enabling false attribution on malicious content. In this paper, we propose the CSGuard, the first forgery-resistant watermarking schema that leverages compressed sensing to bind the watermarked image generation and verification to a secret matrix. This ensures that only users possessing the secret matrix can correctly embed or verify the image watermark, prevents the illegal users from forgery without compromising generation quality and watermark integrity. Experimental results demonstrate that CSGuard achieves strong forgery resistance, reduces the attack success rate from 100.0\% to 28.12\%, and achieve 100\% detection rate on benign watermarked images without compromising watermarking effectiveness.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper proposes CSGuard, a training-free watermarking scheme for latent diffusion models that embeds watermarks in latent space while adding a compressed-sensing constraint tied to a secret sensing matrix. This binding is intended to prevent forgery attacks in which an adversary inverts a watermarked image and regenerates it under an arbitrary prompt. The central empirical claim is that the scheme reduces attack success rate from 100% to 28.12% while achieving 100% detection on benign watermarked images and preserving generation quality.

Significance. If the forgery-resistance property holds under realistic key-recovery attacks, the work would address a documented vulnerability in existing latent-space watermarking methods and provide a concrete mechanism for content attribution in generative AI. The compressed-sensing formulation offers a principled way to tie embedding and verification to a secret without retraining the diffusion model.

major comments (3)
  1. [§3] §3 (method): The forgery-resistance argument rests on the assumption that an attacker cannot recover or sufficiently approximate the secret matrix A from watermarked outputs or multiple queries. No security reduction, key-recovery analysis, or bound on the number of samples needed for matrix estimation (e.g., via least-squares on observed latents) is provided, leaving the central claim dependent on an unverified assumption.
  2. [§4] §4 (experiments): The reported attack-success rate of 28.12% and 100% benign detection rate are presented without experimental protocol details such as dataset sizes, number of trials, attack prompt distributions, or statistical confidence intervals. This absence prevents independent verification of the quantitative claims.
  3. [§4.2] §4.2 (attack evaluation): No experiments evaluate the scheme against matrix-estimation attacks (e.g., compressed-sensing recovery or linear regression on multiple watermarked latents). Such tests are load-bearing for the claim that the compressed-sensing constraint remains binding after inversion-plus-regeneration.
minor comments (2)
  1. [§3] Notation for the compressed-sensing constraint (presumably ||y - A x||) should be stated explicitly with dimensions of A and the precise embedding/verification equations.
  2. [Abstract] The abstract uses 'schema' where 'scheme' is the conventional term; this should be corrected for clarity.

Simulated Author's Rebuttal

3 responses · 1 unresolved

We thank the referee for the constructive and detailed comments on our manuscript. We address each major comment point by point below and indicate the revisions we intend to make.

read point-by-point responses
  1. Referee: [§3] §3 (method): The forgery-resistance argument rests on the assumption that an attacker cannot recover or sufficiently approximate the secret matrix A from watermarked outputs or multiple queries. No security reduction, key-recovery analysis, or bound on the number of samples needed for matrix estimation (e.g., via least-squares on observed latents) is provided, leaving the central claim dependent on an unverified assumption.

    Authors: We agree that the current manuscript lacks a formal security reduction or explicit key-recovery analysis. The CSGuard approach binds the watermark via the compressed-sensing constraint tied to the secret matrix A, so that verification and correct embedding require knowledge of A. In the revised version we will add a dedicated discussion subsection in §3 that references standard compressed-sensing recovery bounds (e.g., the number of measurements required for stable recovery of sparse signals) and explains why accurate estimation of A from a modest number of watermarked latents is computationally prohibitive in the high-dimensional latent space of diffusion models. We will also explicitly state the assumption and its limitations. revision: partial

  2. Referee: [§4] §4 (experiments): The reported attack-success rate of 28.12% and 100% benign detection rate are presented without experimental protocol details such as dataset sizes, number of trials, attack prompt distributions, or statistical confidence intervals. This absence prevents independent verification of the quantitative claims.

    Authors: We acknowledge the omission of detailed experimental protocols. In the revised manuscript we will expand §4 to report the exact dataset sizes and sources, the number of independent trials performed for each metric, the distribution and examples of attack prompts used, and statistical confidence intervals (or standard deviations) for the 28.12% attack-success rate and 100% benign-detection rate. revision: yes

  3. Referee: [§4.2] §4.2 (attack evaluation): No experiments evaluate the scheme against matrix-estimation attacks (e.g., compressed-sensing recovery or linear regression on multiple watermarked latents). Such tests are load-bearing for the claim that the compressed-sensing constraint remains binding after inversion-plus-regeneration.

    Authors: We agree that evaluating resilience to matrix-estimation attacks is important for substantiating the forgery-resistance claim. The current experiments focus on the practical inversion-plus-regeneration attack. In the revision we will add new experiments in §4.2 that simulate an attacker collecting multiple watermarked latents and attempting to recover or approximate A via least-squares regression or standard compressed-sensing recovery algorithms; we will then measure the resulting forgery success rate when the approximated matrix is used for regeneration. revision: yes

standing simulated objections not resolved
  • A complete cryptographic security reduction proving resistance to all possible adaptive key-recovery attacks; such a reduction would require substantial additional theoretical development beyond the scope of the present empirical study.

Circularity Check

0 steps flagged

No significant circularity; central construction is externally grounded and empirically evaluated.

full rationale

The paper introduces CSGuard by binding watermark embedding/verification to a secret sensing matrix A via a compressed-sensing constraint in the latent space of diffusion models. This construction is presented as a design choice that makes forgery (inversion + regeneration) fail without A, with experimental results showing attack success dropping to 28.12% while preserving 100% benign detection. No equations, derivations, or self-citations reduce the claimed resistance to a fitted parameter defined by the result itself, nor do they rename a known pattern or smuggle an ansatz. The secret-matrix premise is treated as an independent input (standard in keyed watermarking), and the forgery-resistance claim rests on empirical measurement rather than a tautological reduction. The derivation chain is therefore self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 1 invented entities

The central claim rests on the unstated premise that the secret matrix cannot be recovered by an attacker who only sees watermarked outputs and that the compressed-sensing step survives the inversion attack; no free parameters or new physical entities are introduced beyond the matrix itself.

axioms (1)
  • domain assumption Latent inversion of a watermarked diffusion image followed by re-generation with an arbitrary prompt is a feasible and effective forgery attack.
    Invoked in the problem statement to motivate the need for forgery resistance.
invented entities (1)
  • Secret matrix used inside the compressed-sensing constraint no independent evidence
    purpose: To bind watermark embedding and verification so that only holders of the matrix succeed
    Introduced as the core mechanism; no independent evidence or external validation supplied in the abstract.

pith-pipeline@v0.9.0 · 5481 in / 1263 out tokens · 31599 ms · 2026-05-09T14:40:09.200656+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

51 extracted references · 28 canonical work pages · 1 internal anchor

  1. [1]

    Arabi, B

    K. Arabi, B. Feuer, R. T. Witter, C. Hegde, and N. Cohen. Hidden in the noise: Two-stage robust watermarking for images. In The Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore, April 24-28, 2025. OpenReview.net, 2025. URL https://openreview.net/forum?id=ll2nz6qwRG

  2. [2]

    Asnani, J

    V . Asnani, J. P. Collomosse, T. Bui, X. Liu, and S. Agarwal. Promark: Proactive diffusion watermarking for causal attribution. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, Seattle, W A,USA, June 16-22, 2024, pages 10802–10811. IEEE,

  3. [3]

    T-VSL: text-guided visual sound source localization in mixtures

    doi: 10.1109/CVPR52733.2024.01027. URL https://doi.org/10.1109/CVPR5273 3.2024.01027

  4. [4]

    B. Chen, Z. Zhang, W. Li, C. Zhao, J. Yu, S. Zhao, J. Chen, and J. Zhang. Invertible diffusion models for compressed sensing. IEEE Trans. Pattern Anal. Mach. Intell., 47(5):3992–4006,

  5. [5]

    URL https://doi.org/10.1109/TPAMI.2025 .3538896

    doi: 10.1109/TPAMI.2025.3538896. URL https://doi.org/10.1109/TPAMI.2025 .3538896

  6. [6]

    J. Chen, J. Yu, C. Ge, L. Yao, E. Xie, Z. Wang, J. T. Kwok, P. Luo, H. Lu, and Z. Li. Pixart-α: Fast training of diffusion transformer for photorealistic text-to-image synthesis. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11,

  7. [7]

    URLhttps://openreview.net/forum?id=eAKmQPe3m1

    OpenReview.net, 2024. URLhttps://openreview.net/forum?id=eAKmQPe3m1

  8. [8]

    H. Ci, P. Yang, Y . Song, and M. Z. Shou. Ringid: Rethinking tree-ring watermarking for enhanced multi-key identification. In A. Leonardis, E. Ricci, S. Roth, O. Russakovsky, T. Sattler, and G. Varol, editors,Computer Vision - ECCV 2024 - 18th European Conference, Milan, Italy, September 29-October 4, 2024, Proceedings, Part XXVIII, volume 15086 of Lectur...

  9. [9]

    H. Ci, Y . Song, P. Yang, J. Xie, and M. Z. Shou. Wmadapter: Adding watermark control to latent diffusion models. In Forty-second International Conference on Machine Learning, ICML 2025, Vancouver,BC, Canada, July 13-19, 2025. OpenReview.net, 2025. URL https: //openreview.net/forum?id=xXYGBmpMAj

  10. [10]

    J. Deng, C. Lin, Z. Zhao, S. Liu, Z. Peng, Q. Wang, and C. Shen. A survey of defenses against ai- generated visual media: Detection, disruption, and authentication. ACM Comput. Surv., 58(5): 123:1–123:35, 2026. doi: 10.1145/3770916. URLhttps://doi.org/10.1145/3770916

  11. [11]

    Z. Dong, C. Shuai, Z. Ba, P. Cheng, Z. Qin, Q. Wang, and K. Ren. Wmcopier: Forging invisible image watermarks on arbitrary images. arXiv preprint arXiv:2503.22330, 2025

  12. [12]

    D. L. Donoho. Compressed sensing. IEEE Trans. Inf. Theory, 52(4):1289–1306, 2006. doi: 10.1109/TIT.2006.871582. URLhttps://doi.org/10.1109/TIT.2006.871582

  13. [13]

    H. Fang, K. Chen, Y . Qiu, Z. Ma, W. Zhang, and E. Chang. DERO: diffusion-model-erasure robust watermarking. In J. Cai, M. S. Kankanhalli, B. Prabhakaran, S. Boll, R. Subramanian, L. Zheng, V . K. Singh, P. César, L. Xie, and D. Xu, editors, Proceedings of the 32nd ACM International Conference on Multimedia, MM 2024, Melbourne, VIC, Australia, 28 October ...

  14. [14]

    H. Fang, K. Chen, Z. Yang, B. Cui, W. Zhang, and E. Chang. Cosda: Enhancing the robustness of inversion-based generative image watermarking framework. In T. Walsh, J. Shah, and Z. Kolter, editors, AAAI-25, Sponsored by the Association for the Advancement of Artificial Intelligence, February 25 - March 4, 2025, Philadelphia, PA, USA, pages 2888–2896. AAAI ...

  15. [15]

    J. Fei, Y . Dai, Z. Xia, F. Huang, and J. Zhou. Omnimark: Efficient and scalable latent diffusion model fingerprinting. In T. Walsh, J. Shah, and Z. Kolter, editors, AAAI-25, Sponsored by the Association for the Advancement of Artificial Intelligence, February 25 - March 4, 2025, Philadelphia, PA, USA, pages 16550–16558. AAAI Press, 2025. doi: 10.1609/AAA...

  16. [16]

    W. Feng, W. Zhou, J. He, J. Zhang, T. Wei, G. Li, T. Zhang, W. Zhang, and N. Yu. Aqualora: Toward white-box protection for customized stable diffusion models via watermark lora. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024. URL https://openreview.net/forum?id=8xKGZs nV2a

  17. [17]

    A low- shot object counting network with iterative prototype adaptation, in: ICCV, Paris, France, October 1-6, 2023, IEEE

    P. Fernandez, G. Couairon, H. Jégou, M. Douze, and T. Furon. The stable signature: Rooting watermarks in latent diffusion models. In IEEE/CVF International Conference on Computer Vision, ICCV 2023, Paris, France, October 1-6, 2023, pages 22409–22420. IEEE, 2023. doi: 10.1109/ICCV51070.2023.02053. URL https://doi.org/10.1109/ICCV51070.2023.0 2053

  18. [18]

    T-VSL: text-guided visual sound source localization in mixtures

    T. Garber and T. Tirer. Image restoration by denoising diffusion models with iteratively pre- conditioned guidance. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, Seattle, W A,USA, June 16-22, 2024, pages 25245–25254. IEEE, 2024. doi: 10.1109/CVPR52733.2024.02385. URL https://doi.org/10.1109/CVPR52733.2024.0 2385

  19. [19]

    S. Gunn, X. Zhao, and D. Song. An undetectable watermark for generative image models. In The Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore, April 24-28, 2025. OpenReview.net, 2025. URL https://openreview.net/forum?id=jl hBFm7T2J

  20. [20]

    C. Hang, Z. Ma, H. Chen, X. Fang, V . Xie, F. Fang, G. Zhang, and H. Wang. Exploring fixed point in image editing: Theoretical support and convergence optimization. In A. Globersons, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. M. Tomczak, and C. Zhang, editors,Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information ...

  21. [21]

    URL http://papers.nips.cc/paper_files/paper/2024/hash/23c32cb7ac3 97f612b7c16aaa2bf0340-Abstract-Conference.html

  22. [22]

    J. Ho, A. Jain, and P. Abbeel. Denoising diffusion probabilistic models. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips. cc/paper...

  23. [23]

    T. Hu, F. Chen, H. Wang, J. Li, W. Wang, J. Sun, and Z. Li. Complexity matters: Rethinking the latent space for generative modeling. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orlean...

  24. [24]

    C. Kim, K. Min, M. Patel, S. Cheng, and Y . Yang. WOUAF: weight modulation for user attribution and fingerprinting in text-to-image diffusion models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, Seattle, W A,USA, June 16-22, 2024, pages 8974–8983. IEEE, 2024. doi: 10.1109/CVPR52733.2024.00857. URL https: //doi.org/10.1109/...

  25. [25]

    J. Lai, L. Zhang, C. Tang, and P. Sun. Efficient and accurate image provenance analysis: A scalable pipeline for large-scale images. CoRR, abs/2506.23707, 2025. doi: 10.48550/ARXIV .2506.23707. URLhttps://doi.org/10.48550/arXiv.2506.23707

  26. [26]

    J. Lai, L. Zhang, C. Tang, P. Sun, X. Wang, and Y . Wang. Untraceable deepfakes via traceable fingerprint elimination. CoRR, abs/2508.03067, 2025. doi: 10.48550/ARXIV.2508.03067. URLhttps://doi.org/10.48550/arXiv.2508.03067

  27. [27]

    K. Li, Z. Huang, X. Hou, and C. Hong. Gaussmarker: Robust dual-domain watermark for diffusion models. In Forty-second International Conference on Machine Learning, ICML 2025, Vancouver,BC, Canada, July 13-19, 2025. OpenReview.net, 2025. URL https://openrevi ew.net/forum?id=m7Mx14cxv8. 11

  28. [28]

    Q. Liu, Y . Zhang, Z. Ba, C. Shuai, P. Cheng, T. Zheng, and Z. Wang. Attack-resistant watermark- ing for AIGC image forensics via diffusion-based semantic deflection. CoRR, abs/2601.06639,

  29. [29]

    URL https://doi.org/10.48550/arXiv.260 1.06639

    doi: 10.48550/ARXIV.2601.06639. URL https://doi.org/10.48550/arXiv.260 1.06639

  30. [30]

    W. Luo, Z. Shen, Y . Yao, F. Ding, G. Zhu, and W. Meng. Tracemark-ldm: Authenticatable water- marking for latent diffusion models via binary-guided rearrangement. CoRR, abs/2503.23332,

  31. [31]

    URL https://doi.org/10.48550/arXiv.250 3.23332

    doi: 10.48550/ARXIV.2503.23332. URL https://doi.org/10.48550/arXiv.250 3.23332

  32. [32]

    Patel, and Shao-Yuan Lo

    A. Müller, D. Lukovnikov, J. Thietke, A. Fischer, and E. Quiring. Black-box forgery attacks on semantic watermarks for diffusion models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2025, Nashville, TN, USA, June 11-15, 2025, pages 20937–20946. Computer Vision Foundation / IEEE, 2025. doi: 10.1109/CVPR52734.2025.01950. URL https...

  33. [33]

    W. Nie, B. Guo, Y . Huang, C. Xiao, A. Vahdat, and A. Anandkumar. Diffusion models for adversarial purification. In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvári, G. Niu, and S. Sabato, editors, International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, Proceedings of Machine Learning Research, pages 16805– 1682...

  34. [34]

    Podell, Z

    D. Podell, Z. English, K. Lacey, A. Blattmann, T. Dockhorn, J. Müller, J. Penna, and R. Rombach. SDXL: improving latent diffusion models for high-resolution image synthesis. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11,

  35. [35]

    URLhttps://openreview.net/forum?id=di52zR8xgf

    OpenReview.net, 2024. URLhttps://openreview.net/forum?id=di52zR8xgf

  36. [36]

    Radford, J

    A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, G. Krueger, and I. Sutskever. Learning transferable visual models from natural language supervision. In M. Meila and T. Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event...

  37. [37]

    Shadows can be

    R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer. High-resolution image synthesis with latent diffusion models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 10674– 10685. IEEE, 2022. doi: 10.1109/CVPR52688.2022.01042. URL https://doi.org/10.110 9/CVPR52688.2022.01042

  38. [38]

    Transferable black-box one-shot forging of watermarks via image preference models.CoRR, abs/2510.20468, 2025

    T. Soucek, S. Rebuffi, P. Fernandez, N. Jovanovic, H. Elsahar, V . Lacatusu, T. Tran, and A. Mourachko. Transferable black-box one-shot forging of watermarks via image preference models. CoRR, abs/2510.20468, 2025. doi: 10.48550/ARXIV.2510.20468. URL https: //doi.org/10.48550/arXiv.2510.20468

  39. [39]

    Y . Wang, Y . Hu, J. Yu, and J. Zhang. GAN prior based null-space learning for consistent super-resolution. In B. Williams, Y . Chen, and J. Neville, editors, Thirty-Seventh AAAI Conference on Artificial Intelligence, AAAI 2023, Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence, IAAI 2023, Thirteenth Symposium on Educational Ad...

  40. [40]

    Y . Wang, J. Yu, and J. Zhang. Zero-shot image restoration using denoising diffusion null-space model. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. URL https://openreview.net/f orum?id=mRieQgMtNTQ

  41. [41]

    Z. Wang, J. Guo, J. Zhu, Y . Li, H. Huang, M. Chen, and Z. Tu. Sleepermark: Towards robust watermark against fine-tuning text-to-image diffusion models. In IEEE/CVF Conference 12 on Computer Vision and Pattern Recognition, CVPR 2025, Nashville, TN, USA, June 11-15, 2025, pages 8213–8224. Computer Vision Foundation / IEEE, 2025. doi: 10.1109/CVPR52734. 202...

  42. [42]

    Y . Wen, J. Kirchenbauer, J. Geiping, and T. Goldstein. Tree-rings watermarks: Invisible finger- prints for diffusion images. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors,Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, U...

  43. [43]

    Xiong, C

    C. Xiong, C. Qin, G. Feng, and X. Zhang. Flexible and secure watermarking for latent diffusion model. In A. El-Saddik, T. Mei, R. Cucchiara, M. Bertini, D. P. T. Vallejo, P. K. Atrey, and M. S. Hossain, editors, Proceedings of the 31st ACM International Conference on Multimedia, MM 2023, Ottawa, ON, Canada, 29 October 2023- 3 November 2023, pages 1668–1676. ACM,

  44. [44]

    URL https://doi.org/10.1145/3581783.3612 448

    doi: 10.1145/3581783.3612448. URL https://doi.org/10.1145/3581783.3612 448

  45. [45]

    L. Yang, Z. Zhang, Y . Song, S. Hong, R. Xu, Y . Zhao, W. Zhang, B. Cui, and M. Yang. Diffusion models: A comprehensive survey of methods and applications. ACM Comput. Surv., 56(4): 105:1–105:39, 2024. doi: 10.1145/3626235. URLhttps://doi.org/10.1145/3626235

  46. [46]

    Z. Yang, K. Zeng, K. Chen, H. Fang, W. Zhang, and N. Yu. Gaussian shading: Provable performance-lossless image watermarking for diffusion models. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, Seattle, W A,USA, June 16-22, 2024, pages 12162–12171. IEEE, 2024. doi: 10.1109/CVPR52733.2024.01156. URL https://doi.org/10.1109/CVP...

  47. [47]

    N. Yu, V . Skripniuk, S. Abdelnabi, and M. Fritz. Artificial fingerprinting for generative models: Rooting deepfake attribution in training data. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, pages 14428– 14437. IEEE, 2021. doi: 10.1109/ICCV48922.2021.01418. URL https://doi.org/10.110 9...

  48. [48]

    X. Zhao, K. Zhang, Z. Su, S. Vasan, I. Grishchenko, C. Kruegel, G. Vigna, Y . Wang, and L. Li. Invisible image watermarks are provably removable using generative AI. In A. Globersons, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. M. Tomczak, and C. Zhang, editors,Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information...

  49. [49]

    URL http://papers.nips.cc/paper_files/paper/2024/hash/10272bfd037 1ef960ec557ed6c866058-Abstract-Conference.html

  50. [50]

    D. Zhu, Y . Li, B. Wu, J. Zhou, Z. Wang, and S. Lyu. Hiding faces in plain sight: Defending deepfakes by disrupting face detection. IEEE Trans. Dependable Secur. Comput., 22(6):7010– 7024, 2025. doi: 10.1109/TDSC.2025.3592230. URL https://doi.org/10.1109/TDSC.2 025.3592230. A Appendix This appendix provides supplementary materials that support the main cl...

  51. [51]

    and verification (Algorithm 2) are also provided. • Appendix A.3 (Security Analysis of the Secret Matrix): We formally analyze the security of the secret projection matrix A, including key generation guarantees, space complexity, and resistance against brute-force attacks. •Appendix A.4 (Details of Experiments): – A.4.1: Comprehensive experimental setting...