pith. machine review for the scientific record. sign in

arxiv: 2604.11720 · v1 · submitted 2026-04-13 · 💻 cs.CV · cs.AI· cs.CR

Recognition: unknown

On the Robustness of Watermarking for Autoregressive Image Generation

Authors on Pith no claims yet

Pith reviewed 2026-05-10 16:09 UTC · model grok-4.3

classification 💻 cs.CV cs.AIcs.CR
keywords autoregressive image generationwatermarkingrobustnessremoval attacksforgery attackssynthetic content detectionwatermark mimicrydataset filtering
0
0 comments X

The pith

Watermarking schemes for autoregressive image generators fail against removal and forgery attacks that require only a single reference image.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper tests watermarking methods built for autoregressive image generators, which embed a signal at generation time so that a detector can later identify synthetic outputs. It shows these signals can be stripped away by regeneration, adversarial optimization, or frequency injection, and can also be copied onto real photographs to produce false positives. A reader would care because the techniques were meant to stop synthetic images from entering training sets and to help spot misinformation, yet the attacks succeed without the original model or secret keys. If the results hold, current watermarking cannot be trusted for either filtering or attribution in practice.

Core claim

Existing watermarking schemes for autoregressive image generation do not reliably support synthetic content detection for dataset filtering because removal and forgery attacks succeed with access to only one watermarked reference image and the detector; the schemes therefore enable Watermark Mimicry, in which authentic images are altered to imitate a generator's signal and block their own inclusion in future training data.

What carries the argument

The watermark embedding step inside the autoregressive generation process together with its matching detector, which the paper attacks by regenerating tokens, optimizing perturbations, or injecting frequencies to erase or copy the signal.

If this is right

  • Watermarked images can have their signals removed, allowing synthetic content to pass undetected into training datasets.
  • Real images can be edited to carry a false watermark, causing detectors to reject them and shrink available training data.
  • Dataset filtering pipelines that rely on these watermarks cannot guarantee exclusion of generated images.
  • Attribution of outputs to specific generators becomes unreliable once forgery is possible.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Developers may need to layer watermarking with statistical or fingerprinting checks that do not rely on a single embeddable signal.
  • The same regeneration and frequency attacks could be tested on non-autoregressive generators to see whether the weaknesses are architecture-specific.
  • Future embedding methods might need to tie the watermark more tightly to image content that survives regeneration steps.

Load-bearing premise

The attacks continue to work when an adversary has access only to the detector and one watermarked example, without the generator parameters or the embedding secrets.

What would settle it

A controlled test in which the three new attacks are applied to many different AR generators and the detector still correctly rejects all forged real images while accepting all genuine watermarked images would show the schemes are more robust than claimed.

Figures

Figures reproduced from arXiv: 2604.11720 by Andreas M\"uller, Anubhav Jain, Asja Fischer, Denis Lukovnikov, Jonathan Petit, Minh Pham, Niv Cohen, Shingo Kodama.

Figure 1
Figure 1. Figure 1: Watermark Mimicry subverts synthetic content filtering [PITH_FULL_IMAGE:figures/full_fig_p002_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: Token-based semantic watermarking. During generation (left), green to￾kens (e.g., T1) are boosted. During verification (right), a statistical test is performed based on the fraction of present green tokens to determine the presence of a watermark. 10 11 0 1 1 1 Generate Static red/green n-gram split 0 Green Set 01, 10 Red Set 00, 11 Encode Retrieve residuals and get bits 0 1 0 1 Verify Count green n-grams … view at source ↗
Figure 3
Figure 3. Figure 3: Watermarking for bitwise autoregressive models [PITH_FULL_IMAGE:figures/full_fig_p004_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: BitMark Forgery via Frequency Injection. [PITH_FULL_IMAGE:figures/full_fig_p008_4.png] view at source ↗
Figure 5
Figure 5. Figure 5: LatentOpt-Removal (top) and LatentOpt-Forgery (bottom) results [PITH_FULL_IMAGE:figures/full_fig_p011_5.png] view at source ↗
Figure 6
Figure 6. Figure 6: Qualitative Eaxmples for BitMark deployed with Infinity-2B for removal at￾tacks (left) and forgery attacks (right). LatentOpt-R and LatentOpt-F attacks are capped by a budget of ∥∆x∥ ∞ = 8 255 . Black-box settings and the VQ-Regen attack use LlamaGen’s VQ-VAE and Infinity-2B’s VQ-VAE as proxy models, respectively. 0 10 20 30 40 50 60 70 80 Z-Score Watermarked Radioactive ( -2B, 50%) Radioactive (SD2.1, 50%… view at source ↗
Figure 1
Figure 1. Figure 1: Visual examples of perturbations and geometric transformations. VQ-Regen Algorithm and Effect of Different VQ-VAEs and Substi￾tution Ranks. Algorithm 3 provides a formal description of the VQ-Regen attack. Examples of VQ-Regen applied to watermarked images (generated by BitMark) are provided in [PITH_FULL_IMAGE:figures/full_fig_p023_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: Visual examples of the VQ-Regen attack. The original watermarked image is generated using BitMark deployed with ∞-2B. Algorithm 3 Vector-Quantized Regeneration Attack (VQ-Regen) Require: Image x; encoder E; decoder D; codebook C ∈ R |V|×d ; substitution rank k with 1 ≤ k ≤ |V| 1: z ← E(x) ▷ z ∈ R d×h×w 2: Initialize t ′ ∈ Vh×w 3: for i ← 1 to h do 4: for j ← 1 to w do 5: u ← z:,i,j ∈ R d 6: s ← Argsort( [… view at source ↗
Figure 3
Figure 3. Figure 3: BitMark Forgery via Frequency Injection Settings. We take an authentic cover image, and then compute its FFT. We inject spectral components with magnitude α and a channel-wise random phase along the diagonals and spaced 32 pixels apart in both axes, directly overwriting the original coefficients. Finally, we apply inverse FFT to reconstruct the attacked image. Settings A and B are limited to the first four… view at source ↗
Figure 4
Figure 4. Figure 4: Total difference ∆ between green (G = {01, 10}) and red ((R = {00, 11})) bi￾grams counts across different scales for 100 real cover images (blue) and 100 frequency injection forgery attack instances (setting A, orange) as verified by the BitMark water￾marking schemes deployed with ∞-2B (left). At the finest generation scale (scale 12), the attack introduces a strong surplus of green bigrams G = 01, 10, cau… view at source ↗
Figure 5
Figure 5. Figure 5: Visual examples of VQ-Regen and diffusion regeneration-based attacks [PITH_FULL_IMAGE:figures/full_fig_p028_5.png] view at source ↗
Figure 6
Figure 6. Figure 6: Visual examples of the LatentOpt-Removal attack (c = 8 255 at step 300). Authentic Cover Image Frequency Injection Setting A Frequency Injection Setting B Frequency Injection Setting C LatentOpt LatentOpt LlamaGen VAE LatentOpt Anole VAE Taming VAE RAR-XL VAE ∞-2B VAE (Vd=2 16 ) ∞-2B VAE (Vd=2 24 ) ∞-2B VAE (Vd=2 32 ) ∞-2B VAE (Vd=2 64 ) [PITH_FULL_IMAGE:figures/full_fig_p029_6.png] view at source ↗
Figure 7
Figure 7. Figure 7: Visual examples of the Frequency Injection Forgery and LatentOpt￾Forgery (c = 8 255 at step 300) attacks on BitMark [PITH_FULL_IMAGE:figures/full_fig_p029_7.png] view at source ↗
Figure 8
Figure 8. Figure 8: LatentOpt attacks for perturbation budgets c ∈ { 2 255 , 4 255 , 8 255 , 16 255 , 32 255 } [PITH_FULL_IMAGE:figures/full_fig_p030_8.png] view at source ↗
read the original abstract

The proliferation of autoregressive (AR) image generators demands reliable detection and attribution of their outputs to mitigate misinformation, and to filter synthetic images from training data to prevent model collapse. To address this need, watermarking techniques, specifically designed for AR models, embed a subtle signal at generation time, enabling downstream verification through a corresponding watermark detector. In this work, we study these schemes and demonstrate their vulnerability to both watermark removal and forgery attacks. We assess existing attacks and further introduce three new attacks: (i) a vector-quantized regeneration removal attack, (ii) adversarial optimization-based attack, and (iii) a frequency injection attack. Our evaluation reveals that removal and forgery attacks can be effective with access to a single watermarked reference image and without access to original model parameters or watermarking secrets. Our findings indicate that existing watermarking schemes for AR image generation do not reliably support synthetic content detection for dataset filtering. Moreover, they enable Watermark Mimicry, whereby authentic images can be manipulated to imitate a generator's watermark and trigger false detection to prevent their inclusion in future model training.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 3 minor

Summary. The paper evaluates the robustness of existing watermarking schemes for autoregressive (AR) image generators against removal and forgery attacks. It introduces three new attacks—vector-quantized (VQ) regeneration, adversarial optimization, and frequency injection—and shows that these can succeed using only a single watermarked reference image without access to generator parameters or watermark secrets. The central claims are that current AR watermarking does not reliably support synthetic content detection for dataset filtering and that the schemes enable 'Watermark Mimicry' attacks on authentic images.

Significance. If the empirical results hold under clearly specified threat models, the work is significant for AI safety and content authentication research. It provides concrete evidence of practical vulnerabilities in AR-specific watermarking, which could guide more robust designs to prevent training data contamination and misinformation. The introduction of multiple attack vectors with minimal access requirements is a constructive contribution, though its impact depends on clarifying the detector access assumptions.

major comments (2)
  1. [§4.2] §4.2 (Adversarial Optimization-based Attack): The attack description does not specify whether white-box (gradient) access to the watermark detector is required. Since the method relies on optimization to craft perturbations, this is load-bearing for the abstract's claim that attacks succeed 'without access to original model parameters or watermarking secrets'; if white-box detector access is implicitly assumed, the results do not fully support the broad conclusion that schemes 'do not reliably support synthetic content detection' in realistic black-box deployments. The relative success rates of the VQ regeneration and frequency injection attacks (which may be black-box) should be reported separately to isolate contributions.
  2. [§5] §5 (Evaluation): The experiments do not include an ablation on the number of reference watermarked images or a clear statement of the detector access model across all three attacks. This weakens the claim that attacks are effective 'with access to a single watermarked reference image,' as the success may depend on unstated assumptions about query access or gradient availability.
minor comments (3)
  1. [Figure 2] Figure 2: The attack pipeline diagram would benefit from explicit labels distinguishing white-box vs. black-box components and the role of the single reference image.
  2. [§3] §3 (Related Work): A brief comparison table of prior AR watermarking schemes (e.g., their embedding mechanisms and claimed robustness) would improve clarity and context for the new attacks.
  3. [Abstract] Abstract: The phrase 'do not reliably support' is strong; qualify it with the evaluated schemes and threat models to avoid overgeneralization.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. The comments highlight important aspects of threat model specification and experimental clarity that we will address in revision. Below we respond point-by-point to the major comments.

read point-by-point responses
  1. Referee: [§4.2] §4.2 (Adversarial Optimization-based Attack): The attack description does not specify whether white-box (gradient) access to the watermark detector is required. Since the method relies on optimization to craft perturbations, this is load-bearing for the abstract's claim that attacks succeed 'without access to original model parameters or watermarking secrets'; if white-box detector access is implicitly assumed, the results do not fully support the broad conclusion that schemes 'do not reliably support synthetic content detection' in realistic black-box deployments. The relative success rates of the VQ regeneration and frequency injection attacks (which may be black-box) should be reported separately to isolate contributions.

    Authors: We agree that the adversarial optimization attack requires white-box access to the detector for gradient-based optimization, which was not explicitly stated. We will revise §4.2 to specify the access model for each attack. We will also report success rates for the VQ regeneration and frequency injection attacks separately (both of which operate with black-box query access to the detector and a single reference image). This distinction will be reflected in the abstract and conclusion to avoid overgeneralizing the white-box result while preserving the finding that black-box attacks already undermine reliable detection. revision: yes

  2. Referee: [§5] §5 (Evaluation): The experiments do not include an ablation on the number of reference watermarked images or a clear statement of the detector access model across all three attacks. This weakens the claim that attacks are effective 'with access to a single watermarked reference image,' as the success may depend on unstated assumptions about query access or gradient availability.

    Authors: We acknowledge the need for explicit access-model statements and an ablation. In the revision we will add a table in §5 detailing detector access assumptions (black-box query vs. white-box gradient) for all three attacks. We will also include an ablation varying the number of reference images (1, 5, 10) showing that both removal and forgery success rates remain high with a single image and improve only modestly with additional references. These additions will directly support the single-image claim under clearly stated conditions. revision: yes

Circularity Check

0 steps flagged

No circularity: empirical attack evaluation without self-referential derivations

full rationale

The paper reports experimental results from removal and forgery attacks on existing AR watermarking schemes, introducing three new attacks evaluated on reference images. No mathematical derivations, fitted parameters renamed as predictions, or load-bearing self-citations appear in the abstract or described structure. Claims rest on direct empirical outcomes rather than any chain that reduces to its own inputs by construction, satisfying the self-contained criterion.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

This is an empirical security evaluation of existing watermarking methods; no new mathematical constructs, fitted parameters, or postulated entities are introduced.

pith-pipeline@v0.9.0 · 5515 in / 1086 out tokens · 69050 ms · 2026-05-10T16:09:55.976985+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

47 extracted references · 10 canonical work pages · 3 internal anchors

  1. [1]

    In: ICLR (2024)

    Alemohammad,S.,Casco-Rodriguez,J.,Luzi,L.,Humayun,A.I.,Babaei,H.,LeJe- une, D., Siahkoohi, A., Baraniuk, R.: Self-consuming generative models go MAD. In: ICLR (2024)

  2. [2]

    In: ICML (2024)

    An, B., Ding, M., Rabbani, T., Agrawal, A., Xu, Y., Deng, C., Zhu, S., Mohamed, A., Wen, Y., Goldstein, T., Huang, F.: WAVES: benchmarking the robustness of image watermarks. In: ICML (2024)

  3. [3]

    In: Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., Scarlett, J

    Chang,H.,Zhang,H.,Barber,J.,Maschinot,A.,Lezama,J.,Jiang,L.,Yang,M.H., Murphy, K.P., Freeman, W.T., Rubinstein, M., Li, Y., Krishnan, D.: Muse: Text- to-image generation via masked generative transformers. In: Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., Scarlett, J. (eds.) ICML. Proceedings of Machine Learning Research, vol. 202, pp. 4...

  4. [4]

    In: CVPR

    Chang, H., Zhang, H., Jiang, L., Liu, C., Freeman, W.T.: Maskgit: Masked gener- ative image transformer. In: CVPR. pp. 11315–11325 (June 2022)

  5. [5]

    arXiv:2405.06135 (2024)

    Chern, E., Su, J., Ma, Y., Liu, P.: ANOLE: An open, autoregressive, native large multimodal models for interleaved image-text generation. arXiv:2405.06135 (2024)

  6. [6]

    In: ECCV (2024)

    Ci, H., Yang, P., Song, Y., Shou, M.Z.: RingID: Rethinking tree-ring watermarking for enhanced multi-key identification. In: ECCV (2024)

  7. [7]

    In: CVPR (2021)

    Esser, P., Rombach, R., Ommer, B.: Taming transformers for high-resolution image synthesis. In: CVPR (2021)

  8. [8]

    In: ICLR (2025)

    Fan, L., Li, T., Qin, S., Li, Y., Sun, C., Rubinstein, M., Sun, D., He, K., Tian, Y.: Fluid: Scaling autoregressive text-to-image generative models with continuous tokens. In: ICLR (2025)

  9. [9]

    In: CVPR (2023)

    Fernandez, P., Couairon, G., Jégou, H., Douze, M., Furon, T.: The stable signature: Rooting watermarks in latent diffusion models. In: CVPR (2023)

  10. [10]

    arXiv:2509.15208 (2025)

    Fernandez, P., Souček, T., Jovanović, N., Elsahar, H., Rebuffi, S.A., Lacatusu, V., Tran, T., Mourachko, A.: Geometric image synchronization with deep watermark- ing. arXiv:2509.15208 (2025)

  11. [11]

    In: ICLR (2025)

    Gunn, S., Zhao, X., Song, D.: An undetectable watermark for generative image models. In: ICLR (2025)

  12. [12]

    In: ICLR (2026)

    Han, C., Li, G., Wu, J., Sun, Q., Cai, Y., Peng, Y., Ge, Z., Zhou, D., Tang, H., Zhou, H., Liu, K., Xia, S.T., Jiao, B., Jiang, D., Zhang, X., Zhu, Y.: Nextstep-1: Toward autoregressive image generation with continuous tokens at scale. In: ICLR (2026)

  13. [13]

    In: CVPR

    Han, J., Liu, J., Jiang, Y., Yan, B., Zhang, Y., Yuan, Z., Peng, B., Liu, X.: Infinity: Scaling bitwise autoregressive modeling for high-resolution image synthesis. In: CVPR. pp. 15733–15744 (June 2025) 16 A. Müller et al

  14. [14]

    In: CVPR (June 2024)

    Hong, S., Lee, K., Jeon, S.Y., Bae, H., Chun, S.Y.: On exact inversion of dpm- solvers. In: CVPR (June 2024)

  15. [15]

    Memon, and Julian Togelius

    Jain, A., Kobayashi, Y., Murata, N., Takida, Y., Shibuya, T., Mitsufuji, Y., Cohen, N., Memon, N., Togelius, J.: Forging and removing latent-noise diffusion water- marks using a single image. arXiv:2504.20111 (2025)

  16. [16]

    In: NeurIPS (2025)

    Jovanović, N., Labiad, I., Soucek, T., Vechev, M., Fernandez, P.: Watermarking autoregressive image generation. In: NeurIPS (2025)

  17. [17]

    In: IEEE Symposium on Security and Privacy (SP)

    Kassis, A., Hengartner, U.: Unmarker: a universal attack on defensive image wa- termarking. In: IEEE Symposium on Security and Privacy (SP). pp. 2602–2620. IEEE (2025)

  18. [18]

    In: NeurIPS (2025)

    Kerner, L., Meintz, M., Zhao, B., Boenisch, F., Dziedzic, A.: Bitmark for infinity: Watermarking bitwise autoregressive image generative models. In: NeurIPS (2025)

  19. [19]

    In: ICML (2023)

    Kirchenbauer, J., Geiping, J., Wen, Y., Katz, J., Miers, I., Goldstein, T.: A water- mark for large language models. In: ICML (2023)

  20. [20]

    In: ICLR (2024)

    Kirchenbauer, J., Geiping, J., Wen, Y., Shu, M., Saifullah, K., Kong, K., Fernando, K., Saha, A., Goldblum, M., Goldstein, T.: On the reliability of watermarks for large language models. In: ICLR (2024)

  21. [21]

    In: CVPR

    Kumbong, H., Liu, X., Lin, T.Y., Liu, M.Y., Liu, X., Liu, Z., Fu, D.Y., Re, C., Romero, D.W.: Hmar: Efficient hierarchical masked auto-regressive image genera- tion. In: CVPR. pp. 2535–2544 (June 2025)

  22. [22]

    In: NeurIPS (2024)

    Li, T., Tian, Y., Li, H., Deng, M., He, K.: Autoregressive image generation without vector quantization. In: NeurIPS (2024)

  23. [23]

    In: ECCV (2014)

    Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: ECCV (2014)

  24. [24]

    In: ICLR (2025)

    Liu, Y., Song, Y., Ci, H., Zhang, Y., Wang, H., Shou, M.Z., Bu, Y.: Image wa- termarks are removable using controllable regeneration from clean noise. In: ICLR (2025)

  25. [25]

    In: ICLR (2024)

    Lukas,N.,Diaa,A.,Fenaux,L.,Kerschbaum,F.:Leveragingoptimizationforadap- tive attacks on image watermarks. In: ICLR (2024)

  26. [26]

    In: CVPR (2026)

    Lukovnikov, D., Müller, A., Quiring, E., Fischer, A.: Clustermark: Towards robust watermarking for autoregressive image generators with visual token clustering. In: CVPR (2026)

  27. [27]

    In: CVPR (2025)

    Müller, A., Lukovnikov, D., Thietke, J., Fischer, A., Quiring, E.: Black-box forgery attacks on semantic watermarks for diffusion models. In: CVPR (2025)

  28. [28]

    In: CVPR

    Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: CVPR. pp. 10684–10695 (June 2022)

  29. [29]

    S., Berg, A

    Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet large scale visual recognition challenge. IJCV115(3), 211–252 (2015).https://doi. org/10.1007/s11263-015-0816-y

  30. [30]

    In: CVPR (2026)

    Shamshad, F., Lukas, N., Nandakumar, K.: Raven: Erasing invisible watermarks via novel view synthesis. In: CVPR (2026)

  31. [31]

    Nature (2024) https://doi.org/10.1038/s41586-024-07566-y

    Shumailov, I., Shumaylov, Z., Zhao, Y., Papernot, N., Anderson, R., Gal, Y.: Ai models collapse when trained on recursively generated data. Nature631(8022), 755–759 (2024).https://doi.org/10.1038/s41586-024-07566-y

  32. [32]

    In: ICLR (2021)

    Song, J., Meng, C., Ermon, S.: Denoising diffusion implicit models. In: ICLR (2021)

  33. [33]

    Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation

    Sun, P., Jiang, Y., Chen, S., Zhang, S., Peng, B., Luo, P., Yuan, Z.: Autoregres- sive model beats diffusion: Llama for scalable image generation. arXiv:2406.06525 (2024) On the Robustness of Watermarking for Autoregressive Image Generation 17

  34. [34]

    In: ICLR (2025)

    Tang, H., Wu, Y., Yang, S., Xie, E., Chen, J., Chen, J., Zhang, Z., Cai, H., Lu, Y., Han, S.: HART: Efficient visual generation with hybrid autoregressive transformer. In: ICLR (2025)

  35. [35]

    Chameleon: Mixed-Modal Early-Fusion Foundation Models

    Team, C.: Chameleon: Mixed-modal early-fusion foundation models. arXiv:2405.09818 (2025)

  36. [36]

    In: NeurIPS (2024)

    Tian, K., Jiang, Y., Yuan, Z., PENG, B., Wang, L.: Visual autoregressive modeling: Scalable image generation via next-scale prediction. In: NeurIPS (2024)

  37. [37]

    arXiv:2505.14673 (2025)

    Tong, Y., Pan, Z., Yang, S., Zhou, K.: Training-free watermarking for autoregres- sive image generation. arXiv:2505.14673 (2025)

  38. [38]

    Emu3: Next-Token Prediction is All You Need

    Wang, X., Zhang, X., Luo, Z., Sun, Q., Cui, Y., Wang, J., Zhang, F., Wang, Y., Li, Z., Yu, Q., Zhao, Y., Ao, Y., Min, X., Li, T., Wu, B., Zhao, B., Zhang, B., Wang, L., Liu, G., He, Z., Yang, X., Liu, J., Lin, Y., Huang, T., Wang, Z.: Emu3: Next-token prediction is all you need. arXiv:2409.18869 (2024)

  39. [39]

    In: ICCV (2025)

    Wang, Y., Lin, Z., Teng, Y., Zhu, Y., Ren, S., Feng, J., Liu, X.: Bridging continuous and discrete tokens for autoregressive visual generation. In: ICCV (2025)

  40. [40]

    In: NeurIPS (2023)

    Wen, Y., Kirchenbauer, J., Geiping, J., Goldstein, T.: Tree-Ring watermarks: In- visible fingerprints for diffusion images. In: NeurIPS (2023)

  41. [41]

    arXiv:2506.11371 (2025)

    Wu, Y., Cui, X., Chen, R., Milis, G., Huang, H.: A watermark for auto-regressive image generation models. arXiv:2506.11371 (2025)

  42. [42]

    (eds.) NeurIPS (2024)

    Yang, P., Ci, H., Song, Y., Shou, M.Z.: Can simple averaging defeat modern water- marks? In: Globerson, A., Mackey, L., Belgrave, D., Fan, A., Paquet, U., Tomczak, J., Zhang, C. (eds.) NeurIPS (2024)

  43. [43]

    In: CVPR (2024)

    Yang, Z., Zeng, K., Chen, K., Fang, H., Zhang, W., Yu, N.: Gaussian Shading: Provable performance-lossless image watermarking for diffusion models. In: CVPR (2024)

  44. [44]

    In: ICCV (October 2025)

    Yu, Q., He, J., Deng, X., Shen, X., Chen, L.C.: Randomized autoregressive visual generation. In: ICCV (October 2025)

  45. [45]

    In: CVPR (2018)

    Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: CVPR (2018)

  46. [46]

    NeurIPS (2024)

    Zhao, X., Zhang, K., Su, Z., Vasan, S., Grishchenko, I., Kruegel, C., Vigna, G., Wang, Y.X., Li, L.: Invisible image watermarks are provably removable using gen- erative ai. NeurIPS (2024)

  47. [47]

    WiYmvSloX2sUb/pvo+hUyD6xShs=

    Zhao, X., Zhang, K., Su, Z., Vasan, S., Grishchenko, I., Kruegel, C., Vigna, G., Wang, Y.X., Li, L.: Invisible image watermarks are provably removable using gen- erative AI. In: NeurIPS (2024) Supplementary Material for On the Robustness of Watermarking for Autoregressive Image Generation A Full Experimental Settings We provide full details on our experim...