Recognition: unknown
DecodingTrust-Agent Platform (DTap): A Controllable and Interactive Red-Teaming Platform for AI Agents
Pith reviewed 2026-05-08 16:07 UTC · model grok-4.3
The pith
DTap supplies the first set of controllable simulations and an autonomous attacker to expose vulnerabilities in AI agents.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
We introduce the DecodingTrust-Agent Platform (DTap), the first controllable and interactive red-teaming platform for AI agents, spanning 14 real-world domains and over 50 simulation environments that replicate widely used systems such as Google Workspace, Paypal, and Slack. To scale the risk assessment of agents in DTap, we further propose DTap-Red, the first autonomous red-teaming agent that systematically explores diverse injection vectors (e.g., prompt, tool, skill, environment, combinations) and autonomously discovers effective attack strategies tailored to varying malicious goals. Using DTap-Red, we curate DTap-Bench, a large-scale red-teaming dataset comprising high-quality instances.
What carries the argument
DTap is the collection of controllable simulation environments replicating real tools and workflows; DTap-Red is the autonomous agent that systematically searches across prompt, tool, skill, and environment injection vectors to locate successful attacks.
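The division of labor described here lends itself to a concrete sketch. Below is a minimal Python illustration, under assumed interfaces, of how an autonomous attacker could sweep single and combined injection vectors against a simulated environment until a malicious goal is verified; the environment, agent, and payload-generator names are hypothetical stand-ins, not DTap's actual API.

```python
import itertools
from dataclasses import dataclass

INJECTION_VECTORS = ["prompt", "tool", "skill", "environment"]

@dataclass
class AttackAttempt:
    vectors: tuple   # one or more injection vectors combined
    payload: str     # adversarial content planted at those vectors
    succeeded: bool  # did the agent carry out the malicious goal?

def red_team(env, agent, malicious_goal, generate_payload, budget=60):
    """Hypothetical DTap-Red-style search loop.
    `env`, `agent`, and `generate_payload` are assumed interfaces:
    env.reset(), env.inject(vectors, payload), env.goal_achieved(goal, trace),
    and agent.run(env, user_task) are stand-ins, not DTap's real API."""
    attempts = []
    # Try single vectors first, then pairwise combinations, so simpler
    # attack strategies are explored before compound ones.
    candidates = (
        [(v,) for v in INJECTION_VECTORS]
        + list(itertools.combinations(INJECTION_VECTORS, 2))
    )
    per_candidate = max(budget // len(candidates), 1)
    for vectors in candidates:
        for _ in range(per_candidate):
            payload = generate_payload(malicious_goal, vectors)
            env.reset()
            env.inject(vectors, payload)                   # plant the attack
            trace = agent.run(env, user_task="benign cover task")
            ok = env.goal_achieved(malicious_goal, trace)  # verifiable outcome
            attempts.append(AttackAttempt(vectors, payload, ok))
            if ok:
                return attempts  # stop at the first working strategy
    return attempts
```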
If this is right
- Evaluations across backbone models and domains reveal systematic vulnerability patterns in current AI agents.
- The platform supports testing against varied security policies and risk categories using a single controllable setup.
- DTap-Bench supplies paired attack instances and automatic validators that can serve as a reusable resource for defense research (one plausible shape for such an instance is sketched just after this list).
- Attack strategies found by DTap-Red give concrete examples that can guide the design of more robust next-generation agents.
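For the benchmark bullet above, one plausible shape for an instance is a record that bundles the attack specification with a programmatic judge over the simulated environment's final state. The sketch below is illustrative only, assuming such a schema; the field names and judge signature are not drawn from the paper.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class BenchInstance:
    domain: str                      # e.g. "email", "payments"
    malicious_goal: str              # what the attacker wants the agent to do
    injection_vector: str            # "prompt" | "tool" | "skill" | "environment"
    payload: str                     # adversarial content to plant
    judge: Callable[[dict], bool]    # verifiable check over the final env state

def leaked_api_key(final_state: dict) -> bool:
    # Example judge: the attack succeeded if any outgoing message
    # contains the secret stored in the environment.
    secret = final_state.get("secrets", {}).get("api_key", "")
    return any(secret and secret in msg["body"]
               for msg in final_state.get("sent_messages", []))

instance = BenchInstance(
    domain="workspace-email",
    malicious_goal="exfiltrate the stored API key via email",
    injection_vector="environment",
    payload="<hidden instruction embedded in a shared document>",
    judge=leaked_api_key,
)
```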
Where Pith is reading between the lines
- Similar simulation-plus-autonomous-attacker designs could be adapted to evaluate other interactive AI systems such as multi-agent workflows or tool-augmented chat interfaces.
- Iterative use of DTap-Red discoveries inside agent training loops might reduce the gap between simulated and real-world robustness without requiring manual red-teaming.
- If the platform's environment library grows, it could become a shared testbed that enables consistent cross-model comparisons of agent security.
Load-bearing premise
The simulated environments replicate the dynamic tool-using and user-interaction behavior of real systems closely enough that attacks discovered inside them will succeed against the same agents when connected to actual external services.
What would settle it
A direct comparison test in which an attack strategy produced by DTap-Red succeeds inside the simulation but fails to manipulate the identical agent when the simulation is replaced by live connections to Google Workspace or Slack.
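That settling experiment can be phrased as a paired replay: run the same DTap-Red strategy against the simulation and against a live connector, then measure how often the outcomes diverge. The harness below is a hedged sketch; the environment, agent, strategy, and judge interfaces are assumptions rather than anything described in the paper.

```python
def transfer_check(agent, strategy, sim_env, live_env, judge, n_trials=20):
    """Run the same attack strategy in simulation and against a live backend,
    and report how often success transfers (all interfaces hypothetical)."""
    sim_hits, live_hits, divergent = 0, 0, 0
    for _ in range(n_trials):
        results = {}
        for name, env in (("sim", sim_env), ("live", live_env)):
            env.reset()
            env.inject(strategy.vectors, strategy.payload)   # plant the attack
            agent.run(env, user_task=strategy.cover_task)    # let the agent act
            results[name] = judge(env.final_state())         # did the goal land?
        sim_hits += results["sim"]
        live_hits += results["live"]
        divergent += results["sim"] != results["live"]
    return {
        "sim_success_rate": sim_hits / n_trials,
        "live_success_rate": live_hits / n_trials,
        "divergence_rate": divergent / n_trials,  # high value suggests a fidelity gap
    }
```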
original abstract
AI agents are increasingly deployed across diverse domains to automate complex workflows through long-horizon and high-stakes action executions. Due to their high capability and flexibility, such agents raise significant security and safety concerns. A growing number of real-world incidents have shown that adversaries can easily manipulate agents into performing harmful actions, such as leaking API keys, deleting user data, or initiating unauthorized transactions. Evaluating agent security is inherently challenging, as agents operate in dynamic, untrusted environments involving external tools, heterogeneous data sources, and frequent user interactions. However, realistic, controllable, and reproducible environments for large-scale risk assessment remain largely underexplored. To address this gap, we introduce the DecodingTrust-Agent Platform (DTap), the first controllable and interactive red-teaming platform for AI agents, spanning 14 real-world domains and over 50 simulation environments that replicate widely used systems such as Google Workspace, Paypal, and Slack. To scale the risk assessment of agents in DTap, we further propose DTap-Red, the first autonomous red-teaming agent that systematically explores diverse injection vectors (e.g., prompt, tool, skill, environment, combinations) and autonomously discovers effective attack strategies tailored to varying malicious goals. Using DTap-Red, we curate DTap-Bench, a large-scale red-teaming dataset comprising high-quality instances across domains, each paired with a verifiable judge to automatically validate attack outcomes. Through DTap, we conduct large-scale evaluations of popular AI agents built on various backbone models, spanning security policies, risk categories, and attack strategies, revealing systematic vulnerability patterns and providing valuable insights for developing secure next-generation agents.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces the DecodingTrust-Agent Platform (DTap), the first controllable and interactive red-teaming platform for AI agents spanning 14 real-world domains and over 50 simulation environments that replicate systems such as Google Workspace, PayPal, and Slack. It proposes DTap-Red, an autonomous red-teaming agent that explores diverse injection vectors (prompt, tool, skill, environment, and combinations) and discovers effective attack strategies, curates the DTap-Bench dataset with high-quality instances paired with verifiable automatic judges, and conducts large-scale evaluations of popular AI agents to reveal systematic vulnerability patterns across security policies, risk categories, and attack strategies.
Significance. If the simulation environments accurately replicate real-world agent dynamics and the evaluations are supported by rigorous quantitative validation, this platform and benchmark would represent a meaningful advance in AI agent security research by enabling scalable, reproducible red-teaming in dynamic, tool-using settings. The autonomous DTap-Red component and the curated dataset with judges could help standardize assessment practices and support development of more secure agents.
major comments (2)
- Abstract: the claim of conducting large-scale evaluations that reveal systematic vulnerability patterns is central to the paper's empirical contribution, yet the abstract (and provided summary) supplies no quantitative results such as attack success rates, error analysis, or validation of the automatic judges. This omission is load-bearing for assessing the strength and reliability of the reported findings.
- Abstract (DTap environment descriptions): the platform's value for real-world red-teaming rests on the assertion that the 50+ simulation environments faithfully replicate dynamic tool use, state changes, and user interactions in systems such as Google Workspace and Slack. No fidelity metrics, API parity tests, or transfer experiments demonstrating that attacks discovered in simulation generalize to actual deployments are mentioned, which directly affects the generalizability of DTap-Red's attack strategies.
minor comments (1)
- Abstract: the abstract is dense and lists multiple contributions in a single paragraph; splitting key claims or adding a brief limitations sentence could improve clarity for readers.
Simulated Author's Rebuttal
We thank the referee for the thoughtful and constructive comments. We address each major comment point by point below, indicating where revisions will be made to strengthen the manuscript.
point-by-point responses
- Referee: Abstract: the claim of conducting large-scale evaluations that reveal systematic vulnerability patterns is central to the paper's empirical contribution, yet the abstract (and provided summary) supplies no quantitative results such as attack success rates, error analysis, or validation of the automatic judges. This omission is load-bearing for assessing the strength and reliability of the reported findings.
Authors: We agree that the abstract would benefit from including key quantitative highlights to better convey the empirical strength of the work. The full manuscript reports detailed results in the experiments section, including attack success rates across models and attack vectors as well as validation metrics for the automatic judges. In the revised version, we will update the abstract to concisely summarize representative quantitative findings from the large-scale evaluations while keeping the focus on the platform's contributions (the shape of such summary statistics is sketched after these responses). revision: yes
- Referee: Abstract (DTap environment descriptions): the platform's value for real-world red-teaming rests on the assertion that the 50+ simulation environments faithfully replicate dynamic tool use, state changes, and user interactions in systems such as Google Workspace and Slack. No fidelity metrics, API parity tests, or transfer experiments demonstrating that attacks discovered in simulation generalize to actual deployments are mentioned, which directly affects the generalizability of DTap-Red's attack strategies.
Authors: We acknowledge that explicit fidelity metrics and transfer experiments would further support claims about real-world applicability. The current manuscript details the environment construction based on real API specifications and state management but does not include dedicated fidelity quantification or generalization tests. In the revision, we will add a dedicated subsection on environment design and validation (including API parity where implemented) and explicitly discuss the limitations regarding direct transfer to production deployments. revision: partial
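Both responses point at measurements that reduce to simple aggregates. The sketch below shows, in generic Python over assumed record fields, how the attack success rates and judge-agreement figures promised in the first response and the endpoint-level API parity promised in the second might be tabulated; none of the names reflect the paper's actual code or schemas.

```python
from collections import defaultdict

def attack_success_rates(records):
    """Per-(model, injection vector) attack success rate.
    `records` is assumed to be an iterable of dicts with
    'model', 'vector', and boolean 'success' keys."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        key = (r["model"], r["vector"])
        totals[key] += 1
        hits[key] += bool(r["success"])
    return {key: hits[key] / totals[key] for key in totals}

def judge_agreement(judge_labels, human_labels):
    """Fraction of instances where the automatic judge matches a human label."""
    assert len(judge_labels) == len(human_labels) and judge_labels
    return sum(j == h for j, h in zip(judge_labels, human_labels)) / len(judge_labels)

def api_parity_report(sim_tools, real_spec):
    """Schema-level parity between simulated tools and a real API spec.
    Both arguments are assumed to map endpoint name -> set of parameter names."""
    missing = set(real_spec) - set(sim_tools)
    extra = set(sim_tools) - set(real_spec)
    mismatched = {
        name for name in sim_tools.keys() & real_spec.keys()
        if sim_tools[name] != real_spec[name]
    }
    return {
        "endpoint_coverage": (len(real_spec) - len(missing)) / max(len(real_spec), 1),
        "missing_endpoints": missing,
        "extra_endpoints": extra,
        "parameter_mismatches": mismatched,
    }
```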
Circularity Check
No circularity: platform, agent, and dataset presented as original constructions
full rationale
The paper introduces DTap as a newly built platform spanning 14 domains and 50+ simulation environments, DTap-Red as an autonomous red-teaming agent, and DTap-Bench as a curated dataset with judges. These are described as constructed artifacts for evaluation, with no equations, fitted parameters, predictions, or derivation steps that reduce by construction to prior inputs, self-citations, or ansatzes. Large-scale evaluations of agents are performed directly on the built system, rendering the work self-contained without load-bearing circular reductions.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: Simulation environments can accurately replicate the dynamic interactions, tool usage, and user behaviors of real-world systems such as Google Workspace and Slack.
invented entities (1)
- DTap-Red: no independent evidence