Recognition: 2 theorem links
· Lean TheoremIgnore Previous Prompt: Attack Techniques For Language Models
Pith reviewed 2026-05-11 13:53 UTC · model grok-4.3
The pith
Simple handcrafted inputs can misalign GPT-3 via goal hijacking and prompt leaking.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
GPT-3 can be easily misaligned by simple handcrafted inputs using PromptInject, a prosaic alignment framework for mask-based iterative adversarial prompt composition, enabling goal hijacking and prompt leaking that exploit the model's stochastic nature and create long-tail risks even for low-aptitude attackers.
What carries the argument
PromptInject, a framework for mask-based iterative adversarial prompt composition that constructs prompts to override or extract from the target model.
If this is right
- Production deployments of GPT-3 face long-tail risks from basic attacks.
- Low-aptitude but ill-intentioned users can override intended model behavior.
- Prompt leaking can expose system instructions that are meant to stay hidden.
- Stochastic responses make such exploits hard to anticipate or block in advance.
Where Pith is reading between the lines
- Other large language models likely share similar prompt-level vulnerabilities.
- Automated or iterative versions of PromptInject could scale the attacks further.
- Deployed systems may need input sanitization or output monitoring to limit exposure.
Load-bearing premise
Handcrafted adversarial prompts will reliably succeed against production GPT-3 instances without triggering safety filters or detection mechanisms.
What would settle it
Repeated tests in which every handcrafted prompt is blocked by GPT-3's safety mechanisms and produces neither hijacking nor leaking.
read the original abstract
Transformer-based large language models (LLMs) provide a powerful foundation for natural language tasks in large-scale customer-facing applications. However, studies that explore their vulnerabilities emerging from malicious user interaction are scarce. By proposing PromptInject, a prosaic alignment framework for mask-based iterative adversarial prompt composition, we examine how GPT-3, the most widely deployed language model in production, can be easily misaligned by simple handcrafted inputs. In particular, we investigate two types of attacks -- goal hijacking and prompt leaking -- and demonstrate that even low-aptitude, but sufficiently ill-intentioned agents, can easily exploit GPT-3's stochastic nature, creating long-tail risks. The code for PromptInject is available at https://github.com/agencyenterprise/PromptInject.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes PromptInject, a mask-based iterative framework for adversarial prompt composition, and uses it to demonstrate two classes of attacks on GPT-3: goal hijacking and prompt leaking. Through handcrafted examples in Sections 4 and 5, the authors argue that even low-aptitude but ill-intentioned users can easily misalign the model by exploiting its stochastic sampling, thereby creating long-tail security risks. The associated code is released on GitHub.
Significance. If the attacks can be shown to succeed at non-negligible rates under realistic conditions, the work would be significant for LLM safety research, as it identifies a practical attack surface in widely deployed models and supplies reusable tooling. The open-source release is a clear strength that enables reproducibility and extension by others.
major comments (2)
- [Sections 4 and 5] Sections 4 and 5: The paper presents only single illustrative prompt examples that succeed; no success fractions, number of trials, temperature sweeps, or failure-mode statistics are reported. Without these data the transition from 'possible' to 'easily' and 'long-tail risks' (Abstract) cannot be evaluated.
- [Section 3] Section 3 and Abstract: The PromptInject framework is introduced as an iterative, mask-based method, yet the reported attacks appear to be static handcrafted strings. The manuscript must clarify whether the framework was actually used to produce the examples or whether the demonstrations rely on manual construction alone.
minor comments (1)
- [Abstract] Abstract: the phrase 'prosaic alignment framework' is introduced without a concise definition or contrast to existing alignment techniques.
Simulated Author's Rebuttal
We thank the referee for their constructive comments. We address each major point below and will revise the manuscript to incorporate clarifications and additional supporting data where appropriate.
read point-by-point responses
-
Referee: [Sections 4 and 5] Sections 4 and 5: The paper presents only single illustrative prompt examples that succeed; no success fractions, number of trials, temperature sweeps, or failure-mode statistics are reported. Without these data the transition from 'possible' to 'easily' and 'long-tail risks' (Abstract) cannot be evaluated.
Authors: We acknowledge that Sections 4 and 5 rely on single illustrative examples to demonstrate the attacks. The manuscript's primary aim is to show that such misalignments are feasible with simple prompts by exploiting GPT-3's stochastic sampling, rather than providing a full statistical characterization. To better support the claims of 'easily' and 'long-tail risks' in the Abstract, we will add quantitative results in the revision, including success rates over multiple trials, temperature sweeps, and failure-mode statistics. revision: yes
-
Referee: [Section 3] Section 3 and Abstract: The PromptInject framework is introduced as an iterative, mask-based method, yet the reported attacks appear to be static handcrafted strings. The manuscript must clarify whether the framework was actually used to produce the examples or whether the demonstrations rely on manual construction alone.
Authors: PromptInject is presented as a mask-based iterative framework for composing adversarial prompts in a systematic manner. The specific examples in Sections 4 and 5 were manually handcrafted to provide clear, reproducible illustrations of goal hijacking and prompt leaking. We will revise the text in Section 3 and the Abstract to explicitly distinguish the framework's general capability from the handcrafted demonstrations used for exposition, and we will include a brief example of applying the framework to generate one such attack. revision: yes
Circularity Check
Empirical attack demonstration with no derivation chain or fitted predictions
full rationale
The paper proposes the PromptInject framework and presents qualitative demonstrations of goal hijacking and prompt leaking attacks on GPT-3 via handcrafted prompts. It contains no equations, no parameter fitting, no predictions derived from inputs, and no load-bearing self-citations or uniqueness theorems. The central claims rest on illustrative examples in Sections 4-5 rather than any reduction of outputs to inputs by construction. This is a standard empirical security study whose reasoning is self-contained and does not exhibit any of the enumerated circularity patterns.
Axiom & Free-Parameter Ledger
Forward citations
Cited by 53 Pith papers
-
Comment and Control: Hijacking Agentic Workflows via Context-Grounded Evolution
JAW uses hybrid program analysis to evolve inputs that hijack agentic workflows, successfully compromising 4714 GitHub workflows and eight n8n templates to enable actions like credential exfiltration.
-
OTora: A Unified Red Teaming Framework for Reasoning-Level Denial-of-Service in LLM Agents
OTora provides the first unified framework for reasoning-level denial-of-service attacks on LLM agents, achieving up to 10x more reasoning tokens and order-of-magnitude latency increases while preserving task accuracy...
-
Ghost in the Agent: Redefining Information Flow Tracking for LLM Agents
NeuroTaint is the first taint tracking framework for LLM agents that uses offline auditing of semantic, causal, and persistent context to detect flows from untrusted sources to privileged sinks.
-
Your Agent Is Mine: Measuring Malicious Intermediary Attacks on the LLM Supply Chain
Malicious LLM API routers actively perform payload injection and secret exfiltration, with 9 of 428 tested routers showing malicious behavior and further poisoning risks from leaked credentials.
-
AgentDojo: A Dynamic Environment to Evaluate Prompt Injection Attacks and Defenses for LLM Agents
AgentDojo introduces an extensible evaluation framework populated with realistic agent tasks and security test cases to measure prompt injection robustness in tool-using LLM agents.
-
IPI-proxy: An Intercepting Proxy for Red-Teaming Web-Browsing AI Agents Against Indirect Prompt Injection
IPI-proxy is a toolkit using an intercepting proxy to inject indirect prompt injection attacks into live web pages for testing AI browsing agents against hidden instructions.
-
ContextualJailbreak: Evolutionary Red-Teaming via Simulated Conversational Priming
ContextualJailbreak uses evolutionary search over simulated primed dialogues with novel mutations to reach 90-100% attack success on open LLMs and transfers to some closed frontier models at 15-90% rates.
-
Perturbation Dose Responses in Recursive LLM Loops: Raw Switching, Stochastic Floors, and Persistent Escape under Append, Replace, and Dialog Updates
In 30-step recursive LLM loops, append-mode persistent escape from source basins reaches 50% near 400 tokens under full history but plateaus below 50% under tail-clip memory policy, while replace-mode switching largel...
-
When Alignment Isn't Enough: Response-Path Attacks on LLM Agents
A malicious relay can strategically rewrite aligned LLM outputs in BYOK agent architectures to achieve up to 99.1% attack success on benchmarks like AgentDojo and ASB.
-
Needle-in-RAG: Prompt-Conditioned Character-Level Traceback of Poisoned Spans in Retrieved Evidence
RAGCharacter localizes poisoned character spans in RAG evidence via prompt-conditioned counterfactual masking and achieves the best accuracy-over-attribution trade-off across tested attacks and models.
-
AgentVisor: Defending LLM Agents Against Prompt Injection via Semantic Virtualization
AgentVisor cuts prompt injection success rate to 0.65% in LLM agents with only 1.45% utility loss via semantic privilege separation and one-shot self-correction.
-
Cross-Session Threats in AI Agents: Benchmark, Evaluation, and Algorithms
Introduces CSTM-Bench with 26 cross-session attack taxonomies, demonstrates recall loss in session-bound and full-log detectors, and proposes a bounded-memory coreset reader with the CSTM metric balancing detection an...
-
Hijacking Large Audio-Language Models via Context-Agnostic and Imperceptible Auditory Prompt Injection
AudioHijack generates imperceptible adversarial audio via gradient estimation, attention supervision, and reverberation blending to hijack 13 LALMs with 79-96% success on unseen contexts and real commercial agents.
-
Eliciting Latent Predictions from Transformers with the Tuned Lens
Training per-layer affine probes on frozen transformers yields more reliable latent predictions than the logit lens and enables detection of malicious inputs from prediction trajectories.
-
Sleeper Channels and Provenance Gates: Persistent Prompt Injection in Always-on Autonomous AI Agents
Sleeper channels enable persistent prompt injection in always-on AI agents via persistence substrate and firing separation, countered by provenance gates using action digests and owner attestations with a soundness theorem.
-
Leveraging RAG for Training-Free Alignment of LLMs
RAG-Pref is a training-free RAG-based alignment technique that conditions LLMs on contrastive preference samples during inference, yielding over 3.7x average improvement in agentic attack refusals when combined with o...
-
PAAC: Privacy-Aware Agentic Device-Cloud Collaboration
PAAC aligns planner-executor decomposition with the device-cloud boundary via typed placeholders and on-device sanitization, delivering 15-36% higher accuracy and 2-6x lower leakage than prior device-cloud baselines o...
-
ClawGuard: Out-of-Band Detection of LLM Agent Workflow Hijacking via EM Side Channel
ClawGuard detects LLM agent workflow hijacking by capturing and classifying electromagnetic emanations from hardware with 0.9945 AUC, 100% true-positive rate, and 1.16% false-positive rate on a 7.82 TB RF dataset.
-
LoopTrap: Termination Poisoning Attacks on LLM Agents
LoopTrap is an automated red-teaming framework that crafts termination-poisoning prompts to amplify LLM agent steps by 3.57x on average (up to 25x) across 8 agents.
-
Paraphrase-Induced Output-Mode Collapse: When LLMs Break Character Under Semantically Equivalent Inputs
LLMs show systematic output-mode collapse on closed-form prompts, with only ~22% of semantically equivalent variants preserving the requested bare-label format across five models and four tasks.
-
Paraphrase-Induced Output-Mode Collapse: When LLMs Break Character Under Semantically Equivalent Inputs
LLMs exhibit prompt-variant output-mode collapse, preserving requested bare-label formats in only about 22% of semantically equivalent prompt variants across tested models and tasks.
-
ARGUS: Defending LLM Agents Against Context-Aware Prompt Injection
ARGUS defends LLM agents from context-aware prompt injections by tracking information provenance and verifying decisions against trustworthy evidence, reducing attack success to 3.8% while retaining 87.5% task utility.
-
LocalAlign: Enabling Generalizable Prompt Injection Defense via Generation of Near-Target Adversarial Examples for Alignment Training
LocalAlign generates near-target adversarial examples via prompting and applies margin-aware alignment training to enforce tighter boundaries against prompt injection attacks.
-
A Sentence Relation-Based Approach to Sanitizing Malicious Instructions
SONAR constructs a relational graph from entailment and contradiction scores to prune injected malicious sentences from LLM prompts while preserving context, achieving near-zero attack success rates.
-
FlashRT: Towards Computationally and Memory Efficient Red-Teaming for Prompt Injection and Knowledge Corruption
FlashRT delivers 2x-7x speedup and 2x-4x GPU memory reduction for prompt injection and knowledge corruption attacks on long-context LLMs versus nanoGCG.
-
Evaluation of Prompt Injection Defenses in Large Language Models
Output filtering implemented in application code is the only defense that survived an adaptive prompt-injection attacker across 15,000 attacks; model-based defenses all broke.
-
When AI reviews science: Can we trust the referee?
AI peer review systems are vulnerable to prompt injections, prestige biases, assertion strength effects, and contextual poisoning, as demonstrated by a new attack taxonomy and causal experiments on real conference sub...
-
Structural Quality Gaps in Practitioner AI Governance Prompts: An Empirical Study Using a Five-Principle Evaluation Framework
A new five-principle framework applied to 34 practitioner AI governance prompts finds 37% lack key structural elements such as data classification and rubrics.
-
SafetyALFRED: Evaluating Safety-Conscious Planning of Multimodal Large Language Models
SafetyALFRED shows multimodal LLMs recognize kitchen hazards accurately in QA tests but achieve low success rates when required to mitigate those hazards through embodied planning.
-
How Adversarial Environments Mislead Agentic AI?
Adversarial compromise of tool outputs misleads agentic AI via breadth and depth attacks, revealing that epistemic and navigational robustness are distinct and often trade off against each other.
-
Towards Understanding the Robustness of Sparse Autoencoders
Integrating pretrained sparse autoencoders into LLM residual streams reduces jailbreak success rates by up to 5x across multiple models and attacks.
-
ClawGuard: A Runtime Security Framework for Tool-Augmented LLM Agents Against Indirect Prompt Injection
ClawGuard enforces user-derived access constraints at tool-call boundaries to block indirect prompt injection in tool-augmented LLM agents across web, MCP, and skill injection channels.
-
ClawGuard: A Runtime Security Framework for Tool-Augmented LLM Agents Against Indirect Prompt Injection
ClawGuard enforces deterministic, user-derived access constraints at tool boundaries to block indirect prompt injection without changing the underlying LLM.
-
BadSkill: Backdoor Attacks on Agent Skills via Model-in-Skill Poisoning
BadSkill poisons embedded models in agent skills to achieve up to 99.5% attack success rate on triggered tasks with only 3% poison rate while preserving normal behavior on non-trigger inputs.
-
When Safety Geometry Collapses: Fine-Tuning Vulnerabilities in Agentic Guard Models
Benign fine-tuning collapses safety geometry in guard models like Granite Guardian, dropping refusal to 0%, but Fisher-Weighted Safety Subspace Regularization restores it to 75% while improving robustness.
-
Phase-Associative Memory: Sequence Modeling in Complex Hilbert Space
PAM, a complex-valued associative memory model, exhibits steeper power-law scaling in loss and perplexity than a matched real-valued baseline when trained on WikiText-103 from 5M to 100M parameters.
-
Baseline Defenses for Adversarial Attacks Against Aligned Language Models
Baseline defenses including perplexity-based detection, input preprocessing, and adversarial training offer partial robustness to text adversarial attacks on LLMs, with challenges arising from weak discrete optimizers.
-
Engineering Robustness into Personal Agents with the AI Workflow Store
AI agents should shift from on-the-fly plan synthesis to invoking pre-engineered, tested, and reusable workflows stored in an AI Workflow Store to gain reliability and security.
-
When Agents Handle Secrets: A Survey of Confidential Computing for Agentic AI
A survey providing a taxonomy of TEE platforms, an agent-centric threat model, and open challenges for applying confidential computing to secure agentic AI systems.
-
TRUST: A Framework for Decentralized AI Service v.0.1
TRUST is a decentralized AI auditing framework that decomposes reasoning into HDAGs, maps agent interactions via the DAAN protocol to CIGs, and uses stake-weighted multi-tier consensus to achieve 72.4% accuracy while ...
-
FSFM: A Biologically-Inspired Framework for Selective Forgetting of Agent Memory
FSFM is a biologically-inspired selective forgetting framework for LLM agents that claims to boost access efficiency by 8.49%, content quality by 29.2% signal-to-noise, and eliminate security risks entirely through a ...
-
enclawed: A Configurable, Sector-Neutral Hardening Framework for Single-User AI Assistant Gateways
enclawed is a sector-neutral hardening framework for AI gateways providing signed modules, audit trails, peer attestation, and a 356-case test suite for regulated deployments.
-
RefineRAG: Word-Level Poisoning Attacks via Retriever-Guided Text Refinement
RefineRAG achieves 90% attack success on NQ by generating toxic seeds then optimizing them via retriever-in-the-loop word refinement, outperforming prior methods on effectiveness and naturalness.
-
SALLIE: Safeguarding Against Latent Language & Image Exploits
SALLIE detects jailbreaks in text and vision-language models by extracting residual stream activations, scoring maliciousness per layer with k-NN, and ensembling predictions, outperforming baselines on multiple datasets.
-
Engineering Robustness into Personal Agents with the AI Workflow Store
AI agents require pre-engineered reusable workflows stored in a central repository rather than generating plans on the fly to achieve production-grade reliability and security.
-
MIPIAD: Multilingual Indirect Prompt Injection Attack Defense with Qwen -- TF-IDF Hybrid and Meta-Ensemble Learning
MIPIAD reports a hybrid Qwen-TF-IDF ensemble defense that reaches F1 0.9205 and reduces the English-Bangla performance gap on a 1.43-million-sample synthetic benchmark derived from BIPIA templates.
-
When Agents Handle Secrets: A Survey of Confidential Computing for Agentic AI
A structured survey of confidential computing for agentic AI that catalogs TEE platforms, agent-specific threats, transferable defenses, and remaining gaps in end-to-end frameworks.
-
Making AI-Assisted Grant Evaluation Auditable without Exposing the Model
A TEE-based remote attestation system creates signed evaluation bundles that link input hashes, model measurements, and outputs to make AI grant reviews verifiable without revealing proprietary components.
-
enclawed: A Configurable, Sector-Neutral Hardening Framework for Single-User AI Assistant Gateways
enclawed is a two-flavor hardening framework for OpenClaw AI gateways that supplies attestable trust, strict allowlists, FIPS crypto assertion, DLP signals, and a 204-case test suite for regulated-industry deployments.
-
Fully Homomorphic Encryption on Llama 3 model for privacy preserving LLM inference
A modified Llama 3 model using fully homomorphic encryption achieves up to 98% text generation accuracy and 80 tokens per second at 237 ms latency on an i9 CPU.
-
Beyond Static Sandboxing: Learned Capability Governance for Autonomous AI Agents
Aethelgard is a learned governance system that scopes AI agent capabilities to the minimum needed for each task type using PPO policy training on audit logs.
-
AI Trust OS -- A Continuous Governance Framework for Autonomous AI Observability and Zero-Trust Compliance in Enterprise Environments
AI Trust OS is a proposed always-on operating layer that discovers undocumented AI systems via telemetry and produces continuous zero-trust compliance artifacts for regulations including ISO 42001, EU AI Act, SOC 2, G...
-
LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods
A survey that organizes LLMs-as-judges research into functionality, methodology, applications, meta-evaluation, and limitations.
Reference graph
Works this paper leans on
-
[1]
Persistent anti-muslim bias in large language models
Abubakar Abid, Maheen Farooqi, and James Zou. Persistent anti-muslim bias in large language models. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society , pages 298–306, 2021
work page 2021
-
[2]
Evaluating the susceptibility of pre-trained language models via handcrafted adversarial examples,
Hezekiah J Branch, Jonathan Rodriguez Cefalu, Jeremy McHugh, Leyla Hujer, Aditya Bahl, Daniel del Castillo Iglesias, Ron Heichman, and Ramesh Darwishi. Evaluating the suscepti- bility of pre-trained language models via handcrafted adversarial examples. arXiv preprint arXiv:2209.02128, 2022
-
[3]
Language models are few-shot learners
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems , 33:1877–1901, 2020
work page 1901
-
[4]
Extracting training data from large language models
Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-V oss, Kather- ine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. Extracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security 21) , pages 2633–2650, 2021
work page 2021
-
[5]
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018
work page internal anchor Pith review Pith/arXiv arXiv 2018
-
[6]
Ismael Garrido-Muñoz, Arturo Montejo-Ráez, Fernando Martínez-Santiago, and L Alfonso Ureña-López. A survey on bias in deep nlp. Applied Sciences, 11(7):3184, 2021
work page 2021
-
[7]
Realtoxicityprompts: Evaluating neural toxic degeneration in language models
Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A Smith. REALTOX- ICITYPROMPTS: Evaluating neural toxic degeneration in language models. arXiv preprint arXiv:2009.11462, 2020. 6
-
[8]
Riley Goodside. Exploiting GPT-3 prompts with malicious inputs that order the model to ignore its previous directions., Sep 2022. URL https://web.archive.org/web/ 20220919192024/https://twitter.com/goodside/status/1569128808308957185
-
[9]
X-risk analysis for ai research
Dan Hendrycks and Mantas Mazeika. X-risk analysis for ai research. arXiv preprint arXiv:2206.05862, 2022
-
[10]
Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. Risks from learned optimization in advanced machine learning systems. arXiv preprint arXiv:1906.01820, 2019
-
[11]
Aligning language models to follow instructions, Jan
Ryan Lowe and Jan Leike. Aligning language models to follow instructions, Jan
-
[13]
A holistic approach to undesired content detection.arXiv preprint arXiv:2208.03274, 2022
Todor Markov, Chong Zhang, Sandhini Agarwal, Tyna Eloundou, Teddy Lee, Steven Adler, Angela Jiang, and Lilian Weng. A holistic approach to undesired content detection in the real world. arXiv preprint arXiv:2208.03274, 2022
-
[14]
The radicalization risks of GPT-3 and advanced neural language models
Kris McGuffie and Alex Newhouse. The radicalization risks of GPT-3 and advanced neural language models. arXiv preprint arXiv:2009.06807, 2020
-
[15]
Textattack: A framework for adversarial attacks, data augmentation, and adversarial training in nlp
John Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. Textattack: A framework for adversarial attacks, data augmentation, and adversarial training in nlp. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 119–126, 2020
work page 2020
-
[17]
OpenAI. OpenAI API - examples, 2022. URL https://web.archive.org/web/ 20220928211844/https://beta.openai.com/examples/
work page 2022
-
[19]
OpenAI. Models - OpenAI API, 2022. URL http://archive.today/2022.10. 28-122238/https://beta.openai.com/docs/models/gpt-3
work page 2022
-
[20]
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155, 2022
work page internal anchor Pith review Pith/arXiv arXiv 2022
-
[21]
Agent-based model characterization using natural language processing
Jose J Padilla, David Shuttleworth, and Kevin O’Brien. Agent-based model characterization using natural language processing. In 2019 Winter Simulation Conference (WSC) , pages 560–571. IEEE, 2019
work page 2019
-
[22]
GrIPS: Gradient-free, edit-based instruction search for prompting large language models, 2023
Archiki Prasad, Peter Hase, Xiang Zhou, and Mohit Bansal. Grips: Gradient-free, edit-based instruction search for prompting large language models. arXiv preprint arXiv:2203.07281 , 2022
-
[23]
Exploring the limits of transfer learning with a unified text-to-text transformer
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1–67, 2020
work page 2020
- [24]
-
[25]
Yoshija Walter. A case report on the "A.I. locked-in problem": social concerns with modern NLP. arXiv preprint arXiv:2209.12687, 2022. 7
-
[26]
GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model
Ben Wang and Aran Komatsuzaki. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/mesh-transformer-jax, May 2021
work page 2021
-
[27]
Ethical and social risks of harm from Language Models
Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, et al. Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359, 2021
work page internal anchor Pith review arXiv 2021
-
[29]
I missed this one: Someone did get a prompt leak attack to work against the bot, Sep 2022
Simon Willison. I missed this one: Someone did get a prompt leak attack to work against the bot, Sep 2022. URL https://web.archive.org/web/20220924105826/https://twitter. com/simonw/status/1570933190289924096
-
[30]
Identifying adversarial attacks on text classifiers
Zhouhang Xie, Jonathan Brophy, Adam Noack, Wencong You, Kalyani Asthana, Carter Perkins, Sabrina Reis, Sameer Singh, and Daniel Lowd. Identifying adversarial attacks on text classifiers. arXiv preprint arXiv:2201.08555, 2022
-
[31]
OpenAttack: An Open-source Textual Adversarial Attack Toolkit
Guoyang Zeng, Fanchao Qi, Qianrui Zhou, Tingji Zhang, Bairu Hou, Yuan Zang, Zhiyuan Liu, and Maosong Sun. OpenAttack: An Open-source Textual Adversarial Attack Toolkit. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations...
-
[32]
OPT: Open Pre-trained Transformer Language Models
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022. 8 Appendices A X-Risk Analysis We use the same x-risk analysis template as introduced by Hendrycks and Mazeika [9]. Indivi...
work page internal anchor Pith review Pith/arXiv arXiv 2022
discussion (0)
Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.