pith. machine review for the scientific record.

arxiv: 2605.04305 · v1 · submitted 2026-05-05 · 💻 cs.CL · cs.AI · cs.CR · cs.CY


SWAN: Semantic Watermarking with Abstract Meaning Representation


Pith reviewed 2026-05-08 16:56 UTC · model grok-4.3

classification 💻 cs.CL · cs.AI · cs.CR · cs.CY
keywords semantic watermarking · abstract meaning representation · paraphrase robustness · text provenance · LLM watermarking · AMR graphs · AI text detection

The pith

Watermarks encoded in sentence meaning graphs remain detectable after rephrasing.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

SWAN places the watermark signature inside the abstract meaning representation of a sentence rather than adjusting word choices during generation. Because the signature attaches to the underlying meaning, any rewrite that preserves intent will still contain the detectable pattern. Injection happens through prompts that guide a language model to follow a chosen meaning template, while detection parses the output with a standard tool and applies a basic statistical check. The design matters for tracking the source of generated text when people routinely reword sentences without altering their core sense, an everyday occurrence that breaks most existing watermark schemes.
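The injection side described above — prompting toward a meaning template and resampling until the parse matches — can be sketched as a rejection-sampling loop. This is a minimal sketch, not the paper's code; `generate_sentence` and `parse_amr` are hypothetical stand-ins for the guided LLM call and the off-the-shelf AMR parser:

```python
def watermark_sentence(prompt, template, generate_sentence, parse_amr, max_tries=32):
    """Rejection-sample until the generated sentence parses to the secret template.

    generate_sentence(prompt, template) -> str   # LLM guided by the AMR template
    parse_amr(sentence) -> hashable graph        # off-the-shelf AMR parser
    """
    for _ in range(max_tries):
        sentence = generate_sentence(prompt, template)
        if parse_amr(sentence) == template:
            return sentence
    return None  # give up; caller falls back to unwatermarked text
```

Figure 3 of the paper reports the distribution of such rejection-sampling trials; the `max_tries` cap here is an assumed safeguard, not a value from the paper.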

Core claim

SWAN embeds watermark signatures into the semantic structure of a sentence using Abstract Meaning Representation. Watermark injection is achieved by prompting an LLM to generate sentences guided by a selected AMR template while maintaining contextual coherence; detection uses an off-the-shelf AMR parser followed by a simple one-proportion z-test. The method matches state-of-the-art detection performance on unaltered watermarked text while increasing detection AUC by up to 13.9 percentage points under paraphrasing on the RealNews benchmark.
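The one-proportion z-test in the detection step reduces to a few lines. The sketch below assumes a base match rate `p0` (the chance a non-watermarked sentence happens to parse to the template) and a decision threshold; both values here are illustrative, not figures from the paper:

```python
import math

def watermark_z(n_matches, n_sentences, p0):
    """One-proportion z-test: is the observed template-match rate
    significantly above the base rate p0 of non-watermarked text?"""
    p_hat = n_matches / n_sentences
    se = math.sqrt(p0 * (1 - p0) / n_sentences)
    return (p_hat - p0) / se

# e.g. 18 of 40 parsed sentences match the template, assumed base rate 5%:
z = watermark_z(18, 40, 0.05)
# flag as watermarked when z exceeds a one-sided threshold such as 1.645 (alpha = 0.05)
```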

What carries the argument

Abstract Meaning Representation (AMR) graphs, which encode sentence semantics as a structured graph; the watermark is carried by the choice of template graph so that any meaning-preserving paraphrase produces the same graph and thus the same detectable signature.
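To make the load-bearing idea concrete: two paraphrases that share one meaning should reduce to the same graph, so a comparison by graph equality cannot tell them apart. The triples below are hand-constructed for illustration, not actual parser output:

```python
# AMR graphs sketched as frozensets of (source, relation, target) triples.
# "The dog chased the cat." and "The cat was chased by the dog." differ
# in surface form but share the predicate-argument structure below.
amr_active = frozenset({
    ("c", "instance", "chase-01"),
    ("c", "ARG0", "d"), ("d", "instance", "dog"),
    ("c", "ARG1", "t"), ("t", "instance", "cat"),
})
amr_passive = frozenset({
    ("c", "instance", "chase-01"),
    ("c", "ARG0", "d"), ("d", "instance", "dog"),
    ("c", "ARG1", "t"), ("t", "instance", "cat"),
})
assert amr_active == amr_passive  # same graph, same watermark signature
```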

If this is right

  • Detection performance on original text equals leading token-selection watermark methods.
  • Robustness to paraphrasing rises by as much as 13.9 AUC points on news-domain text.
  • Both embedding and verification require no model training and work with ordinary language models and parsers.
  • The watermark lives at the semantic level, so stylistic edits that keep meaning leave it intact.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Similar watermarking could be attempted with any semantic parser shown to stay consistent across rephrasings.
  • The approach may prove useful for provenance checks in news or academic writing, where rewording for clarity is routine.
  • Experiments comparing AMR stability across different parsers and domains would clarify how widely the method applies.
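Such a stability experiment reduces to an agreement rate over paraphrase pairs. In this sketch, `parse_amr` again stands in for whichever parser and domain are under test:

```python
def amr_agreement_rate(pairs, parse_amr):
    """Fraction of (sentence, paraphrase) pairs whose AMR parses match exactly.

    A rate near 1.0 supports the load-bearing premise; a low rate predicts
    missed watermark detections under paraphrase.
    """
    matches = sum(1 for a, b in pairs if parse_amr(a) == parse_amr(b))
    return matches / len(pairs)
```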

Load-bearing premise

The AMR parser returns the same graph for any two sentences that express the same meaning.

What would settle it

A collection of meaning-preserving paraphrases of watermarked sentences for which the AMR parser returns inconsistent graphs, causing the z-test to miss the watermark at the reported rates.

Figures

Figures reproduced from arXiv: 2605.04305 by Anil Ramakrishna, Aram Galstyan, Charith Peris, Christos Christodoulopoulos, Gourab Dey, Kai-Wei Chang, Ninareh Mehrabi, Rahul Gupta, Weitong Ruan, Ziping Ye.

Figure 1
Figure 1. A visual depiction of the AMR structure, clearly illustrating the semantic relationships captured by the graph representation. view at source ↗
Figure 2
Figure 2. Overview of our proposed framework. In watermark injection, the LLM repeatedly samples a sentence until its parsed AMR matches a secret template drawn from the bank; in watermark detection, each sentence of a candidate paragraph is parsed, AMR matches are counted, and a z-test is applied. Because the watermark lives in the AMR graph, any paraphrase that preserves meaning leaves the signal intact. view at source ↗
Figure 3
Figure 3. Distribution of rejection sampling trials per … view at source ↗
Original abstract

We introduce SWAN (Semantic Watermarking with Abstract Meaning Representation), a novel framework that embeds watermark signatures into the semantic structure of a sentence using Abstract Meaning Representation (AMR). In contrast to existing watermarking methods, which typically encode signatures by adjusting token selection preferences during text generation, SWAN embeds the signature directly in the sentence's semantic representation. As the signature is encoded at the semantic structure level, any paraphrase that preserves meaning automatically preserves the signature. SWAN is training-free: watermark injection is achieved by prompting an LLM to generate sentences guided by a selected AMR template while maintaining contextual coherence, and detection uses an off-the-shelf AMR parser followed by a simple one-proportion z-test. Empirical evaluation on the RealNews benchmark shows SWAN matches state-of-the-art detection performance on unaltered watermarked text, while significantly improving robustness against paraphrasing, increasing detection AUC by up to 13.9 percentage points compared to prior methods. These results demonstrate that SWAN's approach of anchoring watermarks in AMR semantic structures provides a simple, effective, and prompt-based method for robust text provenance verification under paraphrasing, opening new avenues for semantic-level watermarking research.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper introduces SWAN, a training-free semantic watermarking method that selects an AMR template, prompts an LLM to realize it in contextually coherent text, and detects the watermark by parsing the output (or its paraphrase) with an off-the-shelf AMR parser followed by a one-proportion z-test on template matches. On RealNews, SWAN matches prior SOTA detection AUC on unaltered watermarked text while reporting gains of up to 13.9 AUC points under paraphrasing.

Significance. If the central robustness result holds after addressing parser invariance, the work demonstrates a simple prompt-based route to semantic-level watermarking that is invariant to meaning-preserving edits by construction. This is a clear advance over token-level methods for provenance tasks where paraphrasing is expected, and the training-free nature plus use of existing AMR tools lowers the barrier to adoption.

major comments (2)
  1. [§4 and §4.2] §4 (Experimental Evaluation) and §4.2 (Paraphrasing Robustness): The reported 13.9 AUC gain under paraphrasing is load-bearing for the central claim, yet the manuscript contains no parser-agreement statistics (e.g., graph-edit distance or exact-match rate) between original and paraphrased watermarked sentences on the RealNews set, nor any ablation that isolates AMR-parser invariance from the watermarking effect itself. Without this, it is impossible to determine whether the robustness improvement arises from semantic anchoring or from the particular parser-paraphraser interaction.
  2. [§3.2] §3.2 (Detection Procedure): The z-test is performed on the proportion of recovered AMR graphs that match the chosen template, but the manuscript does not specify how the null distribution is constructed, what constitutes a “match,” or how multiple candidate templates are handled when the parser returns a graph that could align with more than one template; these choices directly affect the reported AUC numbers.
minor comments (2)
  1. [Abstract and §4] The abstract and §4 omit the exact AMR parser version, the paraphraser model, and the full set of baselines used for the “state-of-the-art” comparison; adding these details would improve reproducibility.
  2. [Results table] Table 1 (or equivalent results table) reports AUC values but does not include standard deviations across random seeds or template choices, making it difficult to assess whether the 13.9-point margin is statistically reliable.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback. The comments highlight important areas for improving the clarity and rigor of our experimental claims and detection procedure. We address each major comment below and will make the corresponding revisions to the manuscript.

Point-by-point responses
  1. Referee: [§4 and §4.2] §4 (Experimental Evaluation) and §4.2 (Paraphrasing Robustness): The reported 13.9 AUC gain under paraphrasing is load-bearing for the central claim, yet the manuscript contains no parser-agreement statistics (e.g., graph-edit distance or exact-match rate) between original and paraphrased watermarked sentences on the RealNews set, nor any ablation that isolates AMR-parser invariance from the watermarking effect itself. Without this, it is impossible to determine whether the robustness improvement arises from semantic anchoring or from the particular parser-paraphraser interaction.

    Authors: We agree that additional parser-agreement statistics and an isolating ablation would strengthen the central robustness claim. In the revised manuscript we will add these analyses: (i) quantitative agreement metrics (graph-edit distance and exact-match rate) between AMR parses of the original watermarked sentences and their paraphrases on the RealNews set, and (ii) an ablation that reports detection AUC on paraphrased text when the parser is given the correct generation template versus randomly chosen templates. These additions will allow readers to separate the contribution of semantic anchoring from any parser-paraphraser interaction effects. revision: yes

  2. Referee: [§3.2] §3.2 (Detection Procedure): The z-test is performed on the proportion of recovered AMR graphs that match the chosen template, but the manuscript does not specify how the null distribution is constructed, what constitutes a “match,” or how multiple candidate templates are handled when the parser returns a graph that could align with more than one template; these choices directly affect the reported AUC numbers.

    Authors: We acknowledge that the manuscript omitted the precise operational details of the z-test. In the revised §3.2 we will explicitly state: the null distribution is the empirical distribution of template-match proportions observed on a large held-out corpus of non-watermarked text; a match is defined by subgraph isomorphism with a maximum graph-edit distance threshold (we will report the exact threshold and similarity function); and when a parsed graph is compatible with multiple templates we assign it to the template with the highest similarity score (breaking ties by the template used at generation time if known). These clarifications will make the AUC numbers fully reproducible. revision: yes
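The match definition sketched in this response — similarity with a maximum graph-edit-distance threshold — could be operationalized as below. The triple representation and the naive distance are illustrative simplifications, not the similarity function the revision will specify; a real metric would also search over variable alignments (cf. Smatch):

```python
def amr_edit_distance(g1, g2):
    """Naive edit distance between two AMR graphs, each given as a set of
    (source, relation, target) triples over shared variable names: count
    the triples present in one graph but not the other."""
    return len(g1 ^ g2)

def amr_match(g1, g2, max_ged=1):
    """Template match: graphs within max_ged edit operations of each other."""
    return amr_edit_distance(g1, g2) <= max_ged

template = {
    ("c", "instance", "chase-01"),
    ("c", "ARG0", "d"), ("d", "instance", "dog"),
    ("c", "ARG1", "t"), ("t", "instance", "cat"),
}
# A parse that adds one modifier still matches at threshold 1;
# a parse that drifts by two triples does not.
near = template | {("c", "time", "y")}
far = template | {("c", "time", "y"), ("c", "location", "z")}
```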

Circularity Check

0 steps flagged

No circularity: empirical prompt-based method with external benchmarks

Full rationale

The paper introduces an empirical watermarking framework that selects AMR templates, prompts an LLM for realization, and detects via off-the-shelf AMR parsing plus z-test. All performance claims (SOTA matching on clean text, +13.9 AUC under paraphrasing) rest on reported benchmark results on RealNews rather than any derivation, fitted parameter, or self-citation chain. No equations, ansatzes, or uniqueness theorems appear; the central robustness argument is an external empirical observation about semantic invariance, not a reduction to the method's own inputs. The approach is therefore self-contained against external benchmarks and receives the default non-circularity finding.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

Relies on LLM prompting fidelity and AMR parser accuracy.

free parameters (1)
  • AMR template
    Selected for coherence, details unspecified.
axioms (1)
  • domain assumption AMR graphs capture meaning equivalence under paraphrase
    Central to robustness claim.

pith-pipeline@v0.9.0 · 8772 in / 924 out tokens · 108459 ms · 2026-05-08T16:56:21.178107+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

61 extracted references · 25 canonical work pages · 1 internal anchor
