pith. machine review for the scientific record.

arXiv: 2512.16280 · v3 · submitted 2025-12-18 · 💻 cs.CR · cs.AI · cs.CY

Recognition: unknown

Love, Lies, and Language Models: Investigating AI's Role in Romance-Baiting Scams

Authors on Pith: no claims yet
classification: 💻 cs.CR · cs.AI · cs.CY
keywords: scam · scams · automation · romance-baiting · emotional · filters · human labor
Original abstract

Romance-baiting scams have become a major source of financial and emotional harm worldwide. These operations are run by organized crime syndicates that traffic thousands of people into forced labor, requiring them to build emotional intimacy with victims over weeks of text conversations before pressuring them into fraudulent cryptocurrency investments. Because the scams are inherently text-based, they raise urgent questions about the role of Large Language Models (LLMs) in both current and future automation. We investigate this intersection by interviewing 145 insiders and 5 scam victims, running a blinded long-term conversation study comparing LLM scam agents to human operators, and evaluating commercial safety filters. Our findings show that LLMs are already widely deployed within scam organizations, with 87% of scam labor consisting of systematized conversational tasks readily susceptible to automation. In a week-long study, an LLM agent not only elicited greater trust from study participants (p=0.007) but also achieved higher compliance with requests than human operators (46% vs. 18%). Meanwhile, popular safety filters detected 0.0% of romance-baiting dialogues. Together, these results suggest that romance-baiting scams may be amenable to full-scale LLM automation, while existing defenses remain inadequate to prevent their expansion.

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Synthetic Trust Attacks: Modeling How Generative AI Manipulates Human Decisions in Social Engineering Fraud

    cs.CR · 2026-04 · unverdicted · novelty 6.0

    The paper proposes Synthetic Trust Attacks (STAs) as a formal threat model with an eight-stage attack chain (STAM) that shifts defense focus from detecting synthetic media to protecting human decision processes in soc...