pith. machine review for the scientific record.

arxiv: 2605.13069 · v2 · submitted 2026-05-13 · 💻 cs.CY

Recognition: no theorem link

Not All Anquan Is the Same: A Terminological Proposal for Chinese Computer Science and Engineering

Authors on Pith no claims yet

Pith reviewed 2026-05-15 06:04 UTC · model grok-4.3

classification 💻 cs.CY
keywords safety and security terminology · Chinese computer science · risk analysis · standards interpretation · AI governance · assurance argumentation · functional safety · cybersecurity

The pith

Chinese technical discourse should use 'anbao' for security and reserve 'anquan' for safety to avoid conceptual compression.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

In Chinese computer science and engineering the single word 'anquan' has long translated both safety and security. This creates ongoing compression when standards, risk assessments, and research arguments need to separate non-adversarial harm from adversarial threats. The paper proposes that scholarly and engineering writing adopt 'anbao' for security while keeping 'anquan' mainly for safety, leaving existing legal and standards titles unchanged. The distinction is presented as necessary for clearer interdisciplinary work, more precise risk analysis, and arguments that can be examined in areas such as functional safety, automotive cybersecurity, and AI governance. A staged dual-track practice is outlined for moving to the new usage.

Core claim

The paper claims that the overloaded use of 'anquan' for both safety and security produces persistent conceptual compression that affects standards interpretation, risk analysis, and the examinability of scientific arguments in Chinese technical discourse. Surveying boundaries in international and Chinese standards and tracing effects on functional safety, SOTIF, information security, and AI governance, it shows why precise terminology matters for assurance and co-assurance work. The central proposal is therefore to translate security as 'anbao' and reserve 'anquan' for safety in scholarly and engineering writing, while retaining established legal titles, and to implement this through a staged, dual-track writing practice.

What carries the argument

The proposed terminological split assigns 'anquan' to safety and 'anbao' to security, separating the two concepts so that risk communication and assurance arguments can be stated without overload.
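The split reads as a checkable writing rule: adversarial-threat context calls for 'anbao', non-adversarial-harm context for 'anquan'. A minimal sketch of such a check in Python, with hypothetical cue-word lists that are illustrative rather than drawn from the paper:

```python
# Hypothetical cue lists (illustrative only, not the paper's):
# adversarial-context terms suggest "security", hence 'anbao';
# non-adversarial harm terms suggest "safety", hence 'anquan'.
SECURITY_CUES = {"vulnerability", "attack", "authentication",
                 "access control", "threat model", "exploit"}
SAFETY_CUES = {"hazard", "failure", "harm", "functional safety", "sotif"}

def suggest_term(sentence: str) -> str:
    """Suggest 'anbao' (security) or 'anquan' (safety) for a draft
    sentence, based on which cue words it contains."""
    text = sentence.lower()
    sec = sum(cue in text for cue in SECURITY_CUES)
    saf = sum(cue in text for cue in SAFETY_CUES)
    if sec > saf:
        return "anbao"        # adversarial threat context -> security
    if saf > sec:
        return "anquan"       # non-adversarial harm context -> safety
    return "anquan-anbao"     # mixed or unclear: name both explicitly
```

A real checker would need Chinese-text segmentation and far richer cue lists; the point is only that the proposed split turns a translation habit into a rule a reviewer or linter can apply.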

If this is right

  • Standards interpretation in Chinese contexts would align more closely with the distinct safety and security boundaries used in international documents.
  • Risk analysis could separate non-adversarial harm from adversarial threats without repeated clarification.
  • Arguments in AI assurance and safety-security co-assurance would become easier to examine and challenge.
  • Interdisciplinary collaboration in engineering projects would face fewer translation-induced misunderstandings.
  • A staged dual-track writing practice would allow gradual adoption without disrupting existing legal titles.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same distinction might reduce translation friction when Chinese policy documents reference global AI safety frameworks.
  • Standards bodies could later embed the split directly into new Chinese national standards rather than relying on individual writing choices.
  • Other technical fields in Chinese that currently merge safety and security terms might adopt parallel splits once the computer-science precedent is established.

Load-bearing premise

The current single-term usage of 'anquan' creates persistent conceptual compression that materially affects standards interpretation, risk analysis, and the examinability of scientific arguments in Chinese technical discourse.

What would settle it

Compare a matched set of Chinese technical papers and standards written before and after consistent adoption of 'anbao' for security; a measurable drop in documented misreadings of safety versus security requirements or in reviewer requests for clarification would support the claim, while unchanged rates of ambiguity would challenge it.
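The comparison described above reduces to testing whether a rate of documented misreadings or clarification requests drops after adoption. A minimal sketch with made-up counts (the function name and numbers are illustrative, not from the paper), using only the Python standard library:

```python
from math import erf, sqrt

def two_proportion_ztest(k1: int, n1: int, k2: int, n2: int):
    """Two-sided z-test for a difference in proportions, e.g. papers
    flagged for safety/security ambiguity before (k1 of n1) and after
    (k2 of n2) consistent use of 'anbao' for security."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Normal-approximation two-sided p-value via the error function.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical matched corpora: 40 of 100 ambiguous before, 15 of 100 after.
z, p = two_proportion_ztest(40, 100, 15, 100)
# A large z with small p would support the claim; z near 0 would challenge it.
```

This is only the statistical skeleton; the hard part of the proposed test is building matched corpora and a reliable annotation of what counts as a misreading.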

read the original abstract

In Chinese computer science and engineering, safety and security have long been translated by the same word, "anquan". This convention is concise in ordinary communication, but it creates persistent conceptual compression in standards interpretation, interdisciplinary collaboration, risk analysis and academic writing. When researchers need to discuss both whether a system is free from intolerable non-adversarial harm and whether it can resist adversarial threats, the single word "anquan" often cannot carry the distinction. This article argues that, while established legal and standards titles should be retained, scholarly and engineering writing should translate security as "anbao", and reserve "anquan" mainly for safety. This is not a cosmetic translation preference, but a proposal for terminological governance in scientific cognition, engineering risk communication and assurance argumentation. The article first surveys the conceptual boundary between safety and security in international and Chinese standards, and analyzes how the current translation overload affects functional safety, SOTIF, information security, cybersecurity, automotive cybersecurity and AI governance. It then uses recent work on AI assurance, safety-security co-assurance and security-informed safety to show why precise terminology is fundamental to scientific arguments that can be examined, challenged and communicated. Finally, it proposes a staged, dual-track writing practice for Chinese technical discourse.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes distinguishing safety and security in Chinese computer science and engineering by retaining 'anquan' primarily for safety while introducing 'anbao' for security. It argues that the current single-term usage creates persistent conceptual compression in standards interpretation, interdisciplinary collaboration, risk analysis, and assurance argumentation. The paper surveys conceptual boundaries in international and Chinese standards, analyzes impacts on functional safety, SOTIF, cybersecurity, automotive cybersecurity, and AI governance, draws on recent work on AI assurance and safety-security co-assurance to underscore the need for precise terminology, and advocates a staged dual-track writing practice while preserving established legal and standards titles.

Significance. If the core assumption holds, the proposal could strengthen the examinability of scientific arguments and risk communication in Chinese technical discourse, especially in domains requiring safety-security co-assurance. The manuscript earns credit for grounding its distinctions in external standards and prior literature rather than ad-hoc invention, providing a clear framework for terminological governance.

major comments (2)
  1. [Survey of standards and analysis of effects] The central claim that single-term usage of 'anquan' produces material conceptual compression affecting standards interpretation and engineering practice lacks concrete instances. No specific clause from GB/T standards, ISO equivalents, or other documents is cited where the term leads to divergent expert readings or downstream errors (see the survey and analysis sections).
  2. [Effects on functional safety, SOTIF, and AI governance] The qualitative assessment of impacts on functional safety, SOTIF, and AI governance remains unquantified; the paper does not supply case studies or documented examples demonstrating that context and surrounding technical language fail to recover the intended distinction (see the sections on effects and AI assurance).
minor comments (2)
  1. [Proposal for staged writing practice] Clarify the exact scope of the proposed dual-track practice (e.g., which document types fall under scholarly vs. legal retention) to avoid ambiguity in implementation guidance.
  2. Ensure all Chinese terms are accompanied by pinyin and English glosses on first use for accessibility to non-Chinese readers.

Simulated Author's Rebuttal

2 responses · 1 unresolved

We thank the referee for the constructive comments on our manuscript. We address each major comment below, clarifying our approach while agreeing to strengthen specific aspects of the evidence in revision.

read point-by-point responses
  1. Referee: The central claim that single-term usage of 'anquan' produces material conceptual compression affecting standards interpretation and engineering practice lacks concrete instances. No specific clause from GB/T standards, ISO equivalents, or other documents is cited where the term leads to divergent expert readings or downstream errors (see the survey and analysis sections).

    Authors: We acknowledge that citing explicit clauses would make the compression claim more tangible. The current survey section delineates boundaries across GB/T 22239, ISO 26262, and ISO/SAE 21434 but stops short of quoting specific ambiguous passages. In the revised manuscript we will insert direct quotations from GB/T 22239-2019 clause 6.2 and ISO 26262-1:2018 clause 3.1.1, showing how 'anquan' is applied to both non-adversarial and adversarial harms within the same normative text, thereby illustrating the interpretive risk. revision: yes

  2. Referee: The qualitative assessment of impacts on functional safety, SOTIF, and AI governance remains unquantified; the paper does not supply case studies or documented examples demonstrating that context and surrounding technical language fail to recover the intended distinction (see the sections on effects and AI assurance).

    Authors: The manuscript is a terminological proposal grounded in conceptual analysis and co-assurance literature rather than an empirical study; therefore it does not claim to quantify error rates. We will add two short illustrative vignettes drawn from automotive cybersecurity standards language to show how surrounding context can still leave the safety-security boundary under-specified. We cannot, however, supply publicly documented real-world failures whose root cause is provably terminological alone. revision: partial

standing simulated objections not resolved
  • Supplying documented, publicly attributable case studies in which terminological overlap of 'anquan' is shown to be the direct cause of engineering errors or miscommunication, as no such isolated instances appear in the open literature.

Circularity Check

0 steps flagged

No circularity; terminological proposal grounded in external standards and literature

full rationale

The paper advances a recommendation for distinguishing 'anquan' (safety) from 'anbao' (security) in Chinese technical writing. It surveys conceptual boundaries in international and Chinese standards (e.g., functional safety, SOTIF, cybersecurity), references prior work on AI assurance and safety-security co-assurance, and proposes a dual-track writing practice. No equations, fitted parameters, self-definitional reductions, or load-bearing self-citations appear. The central claim is a normative proposal for clarity in risk communication, not a result derived from its own inputs by construction. External standards distinctions and cited literature provide independent grounding, so the argument does not reduce to renaming or self-reference.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The proposal rests on the domain assumption that terminological precision directly supports examinable scientific arguments and better risk communication; no free parameters or invented entities are introduced.

axioms (1)
  • domain assumption Precise terminology is fundamental to scientific arguments that can be examined, challenged and communicated.
    Invoked when linking terminology to AI assurance and security-informed safety.

pith-pipeline@v0.9.0 · 5518 in / 1063 out tokens · 41138 ms · 2026-05-15T06:04:01.689909+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

64 extracted references · 64 canonical work pages

  1. [1]

1. At first use of safety or security, keep the English term in parentheses, e.g. anquanxing (safety) and anbaoxing (security). 2. Do not rename official titles; e.g. still cite GB/T 22239 under its official title, and explain that this article treats its field as xinxi anbao / wangluo anbao (information/network security). 3. Avoid using 'anquan' to denote security; if a paper in fact discusses security, prefer anbaoxing.

  2. [2]

    China-Peru bilateral security liaison mechanism

When discussing safety and security together, use the paired phrase anquan-anbao. In the next paper, the next talk, and the next glossary, write safety as anquanxing and security as anbaoxing. English version: Not All anquan Is the Same: A Terminological Proposal for Chinese Computer Science and Engineering

  3. [3]

For example, anquanxing (safety) and anbaoxing (security)

At first use of safety or security, keep the English term in parentheses. For example, anquanxing (safety) and anbaoxing (security)

  4. [4]

For example, still cite GB/T 22239 under its official title, and then explain that this article treats its field as xinxi anbao / wangluo anbao

Do not rename official titles of laws, standards or institutions. For example, still cite GB/T 22239 under its official title, and then explain that this article treats its field as xinxi anbao / wangluo anbao (information security / network security)

  5. [5]

If the paper is about vulnerabilities, attacks, authentication, access control or threat modeling, prefer anbaoxing

In titles, abstracts and keywords, avoid using anquan alone to mean security. If the paper is about vulnerabilities, attacks, authentication, access control or threat modeling, prefer anbaoxing (security)

  6. [6]

When discussing both safety and security, write anquanxing and anbaoxing, or use a paired phrase such as anquan-anbao

  7. [7]

Popular communication may continue to say wangluo anquan; academic prose should be precise where the distinction matters

Be tolerant of established public usage. Popular communication may continue to say wangluo anquan (cybersecurity); academic prose should be precise where the distinction matters

  8. [8]

Later authors who adopt the anquanxing/anbaoxing distinction

Later authors who adopt the anquanxing/anbaoxing distinction may cite this article in their first terminology note, thereby avoiding repeated translation arguments and helping the usage accumulate as a searchable and reusable Chinese scholarly corpus.

  9. [9]

Terminological reform needs searchable usage, not only a manifesto

Accumulate examples in bilingual glossaries, dataset labels, course syllabi and review comments. Terminological reform needs searchable usage, not only a manifesto. Conclusion: Safety and security are both important. Both can injure people, damage property, undermine trust and disrupt society. But their engineering logics differ. One begins with haz...

  10. [10]

    Prime Minister launches new AI Safety Institute

Prime Minister’s Office, 10 Downing Street and Department for Science, Innovation and Technology. Prime Minister launches new AI Safety Institute. Nov. 2, 2023. url: https://www.gov.uk/government/news/prime-minister-launches-new-ai-safety-institute (visited on 05/08/2026)

  11. [11]

    AI Safety Institute: overview

Department for Science, Innovation and Technology and AI Safety Institute. AI Safety Institute: overview. Nov. 2, 2023. url: https://www.gov.uk/government/publications/ai-safety-institute-overview (visited on 05/08/2026)

  12. [12]

    Tackling AI security risks to unleash growth and deliver Plan for Change

    Department for Science, Innovation and Technology and AI Security Institute. Tackling AI security risks to unleash growth and deliver Plan for Change . Feb. 14,

  13. [13]

url: https://www.gov.uk/government/news/tackling-ai-security-risks-to-unleash-growth-and-deliver-plan-for-change

url: https://www.gov.uk/government/news/tackling-ai-security-risks-to-unleash-growth-and-deliver-plan-for-change (visited on 05/08/2026). [4] Ministry of Foreign Affairs of the People's Republic of China. May 14, 2022. url: https://www.mfa.gov.cn/zwbd_673032/gzhd_673042/202205/t20220514_10686108.shtml (visited on 05/12/2026)

  14. [14]

    Tractatus Logico-Philosophicus

Ludwig Wittgenstein. Tractatus Logico-Philosophicus. Trans. by C. K. Ogden. London: Kegan Paul, Trench, Trubner & Co., 1922. url: https://www.wittgensteinproject.org/w/index.php/Tractatus_Logico-Philosophicus_(English) (visited on 05/08/2026)

  15. [15]

Language, Thought, and Reality: Selected Writings of Benjamin Lee Whorf

Benjamin Lee Whorf. Language, Thought, and Reality: Selected Writings of Benjamin Lee Whorf. Ed. by John B. Carroll. Cambridge, MA: MIT Press, 1956

  16. [16]

    John A. Lucy. Sapir-Whorf Hypothesis . Routledge Encyclopedia of Philosophy

  17. [17]

doi: 10.4324/9780415249126-U051-1

doi: 10.4324/9780415249126-U051-1. url: https://www.rep.routledge.com/articles/thematic/sapir-whorf-hypothesis/v-1 (visited on 05/08/2026)

  18. [18]

Sorting Things Out: Classification and Its Consequences

    Geoffrey C. Bowker and Susan Leigh Star. Sorting Things Out: Classification and Its Consequences. Cambridge, MA: MIT Press, 1999

  19. [19]

    Basic Concepts and Taxonomy of Dependable and Secure Computing

Algirdas Avizienis et al. “Basic Concepts and Taxonomy of Dependable and Secure Computing”. In: IEEE Transactions on Dependable and Secure Computing 1.1 (2004), pp. 11–33. doi: 10.1109/TDSC.2004.2

  20. [20]

ISO/IEC Guide 51:2014 Safety aspects – Guidelines for their inclusion in standards

International Organization for Standardization and International Electrotechnical Commission. ISO/IEC Guide 51:2014 Safety aspects – Guidelines for their inclusion in standards. Apr. 2014. url: https://www.iso.org/standard/53940.html (visited on 05/08/2026)

  21. [21]

On the Change of the Definition of “Safety” in the ISO/IEC Guide 51

Atsuo Kishimoto and Yusuke Hirai. “On the Change of the Definition of “Safety” in the ISO/IEC Guide 51”. In: Japanese Journal of Risk Analysis 24.4 (2015), pp. 239–242. doi: 10.11447/sraj.24.239. url: https://www.jstage.jst.go.jp/article/sraj/24/4/24_239/_article (visited on 05/08/2026)

  22. [22]

Nancy G. Leveson. Engineering a Safer World: Systems Thinking Applied to Safety. Cambridge, MA: MIT Press, 2012. doi: 10.7551/mitpress/8179.001.0001. url: https://direct.mit.edu/books/oa-monograph/2908/Engineering-a-Safer-WorldSystems-Thinking-Applied (visited on 05/08/2026)

  23. [23]

    IEC 61508-1:2010 Functional safety of electrical/electronic/programmable electronic safety-related systems – Part 1: General requirements

International Electrotechnical Commission. IEC 61508-1:2010 Functional safety of electrical/electronic/programmable electronic safety-related systems – Part 1: General requirements. Apr. 30, 2010. url: https://webstore.iec.ch/en/publication/5515 (visited on 05/08/2026)

  24. [24]

    IEC 61508-3:2010 Functional safety of electrical/electronic/programmable electronic safety-related systems – Part 3: Software requirements

International Electrotechnical Commission. IEC 61508-3:2010 Functional safety of electrical/electronic/programmable electronic safety-related systems – Part 3: Software requirements. Apr. 30, 2010. url: https://webstore.iec.ch/en/publication/5517 (visited on 05/08/2026). [15] GB/T 20438.1–2017, Part 1: General requirements. Dec. 29, 2017. url: https :...

  25. [25]

    ISO 26262-1:2018 Road vehicles – Functional safety – Part 1: Vocabulary

International Organization for Standardization. ISO 26262-1:2018 Road vehicles – Functional safety – Part 1: Vocabulary. Dec. 2018. url: https://www.iso.org/standard/68383.html (visited on 05/08/2026). [17] GB/T 34590.1–2022, Part 1: Vocabulary. Dec. 30, 2022. url: https://www.biaozhun.org/html/270142.html (visited on 05/08/2026)

  26. [26]

    ISO 21448:2022 Road vehicles – Safety of the intended functionality

International Organization for Standardization. ISO 21448:2022 Road vehicles – Safety of the intended functionality. June 2022. url: https://www.iso.org/standard/77490.html (visited on 05/08/2026). [19] GB/T 43267–2023. Nov. 27, 2023. url: https://www.spc.org.cn/online/32c38d08cd61efff0817ae3da57a1db4.html (visited on 05/08/2026)

  27. [27]

    ISO/IEC 27000:2018 Information technology – Security techniques – Information security management systems – Overview and vocabulary

    International Organization for Standardization and International Electrotechnical Commission. ISO/IEC 27000:2018 Information technology – Security techniques – Information security management systems – Overview and vocabulary . Feb. 2018. url: https://www.iso.org/standard/73906.html (visited on 05/08/2026)

  28. [28]

    ISO/IEC 27001:2022 Information security, cybersecurity and privacy protection – Information security management systems – Requirements

    International Organization for Standardization and International Electrotechnical Commission. ISO/IEC 27001:2022 Information security, cybersecurity and privacy protection – Information security management systems – Requirements . Oct. 2022. url: https://www.iso.org/standard/82875.html (visited on 05/08/2026)

  29. [29]

The Protection of Information in Computer Systems

    Jerome H. Saltzer and Michael D. Schroeder. “The Protection of Information in Computer Systems” . In: Proceedings of the IEEE 63.9 (1975), pp. 1278–1308. doi: 10.1109/PROC.1975.9939

  30. [30]

    Security Engineering: A Guide to Building Dependable Distributed Systems

Ross Anderson. Security Engineering: A Guide to Building Dependable Distributed Systems. 3rd ed. Wiley, 2020. doi: 10.1002/9781119644682. url: https://www.cl.cam.ac.uk/~rja14/book.html (visited on 05/08/2026)

  31. [31]

    Computer Security: Art and Science

Matt Bishop. Computer Security: Art and Science. 2nd ed. Addison-Wesley, 2018. url: https://www.pearson.com/en-us/subject-catalog/p/computer-security-art-and-science/P200000000134 (visited on 05/08/2026)

  32. [32]

    The NIST Cybersecurity Framework (CSF) 2.0

Cherilyn Pascoe, Stephen Quinn, and Karen Scarfone. The NIST Cybersecurity Framework (CSF) 2.0. NIST Cybersecurity White Paper NIST CSWP 29. National Institute of Standards and Technology, Feb. 26, 2024. doi: 10.6028/NIST.CSWP.29. url: https://www.nist.gov/publications/nist-cybersecurity-framework-csf-20 (visited on 05/08/2026)

  33. [34]

Developing Cyber-Resilient Systems: A Systems Security Engineering Approach

Ronald S. Ross et al. Developing Cyber-Resilient Systems: A Systems Security Engineering Approach. Special Publication NIST SP 800-160 Vol. 2 Rev. 1. National Institute of Standards and Technology, Dec. 2021. doi: 10.6028/NIST.SP.800-160v2r1. url: https://csrc.nist.gov/pubs/sp/800/160/v2/r1/final (visited on 05/08/2026)

  34. [35]

    Eliciting Security Requirements with Misuse Cases

Guttorm Sindre and Andreas L. Opdahl. “Eliciting Security Requirements with Misuse Cases”. In: Requirements Engineering 10.1 (2005), pp. 34–44. doi: 10.1007/s00766-004-0194-4

  35. [36]

    Attack Trees

Bruce Schneier. “Attack Trees”. In: Dr. Dobb’s Journal 24.12 (1999), pp. 21–29. url: https://www.schneier.com/academic/archives/1999/12/attack_trees.html (visited on 05/08/2026)

  36. [37]

    ISO/SAE 21434:2021 Road vehicles – Cybersecurity engineering

International Organization for Standardization and SAE International. ISO/SAE 21434:2021 Road vehicles – Cybersecurity engineering. Aug. 2021. doi: 10.4271/ISO/SAE21434. url: https://www.iso.org/standard/70918.html (visited on 05/08/2026)

  37. [38]

UN Regulation No. 155 – Cyber security and cyber security management system

United Nations Economic Commission for Europe. UN Regulation No. 155 – Cyber security and cyber security management system. 2021. url: https://unece.org/transport/documents/2021/03/standards/un-regulation-no-155-cyber-security-and-cyber-security (visited on 05/08/2026)

  38. [39]

UN Regulation No. 156 – Software update and software update management system

United Nations Economic Commission for Europe. UN Regulation No. 156 – Software update and software update management system. 2021. url: https://unece.org/transport/documents/2021/03/standards/un-regulation-no-156-software-update-and-software-update (visited on 05/08/2026)

  39. [40]

    ISO 24089:2023 Road vehicles – Software update engineering

International Organization for Standardization. ISO 24089:2023 Road vehicles – Software update engineering. Feb. 2023. url: https://www.iso.org/standard/77796.html (visited on 05/08/2026)

  40. [41]

IEC TS 62443-1-1:2009 Industrial communication networks – Network and system security – Part 1-1: Terminology, concepts and models

International Electrotechnical Commission. IEC TS 62443-1-1:2009 Industrial communication networks – Network and system security – Part 1-1: Terminology, concepts and models. July 30, 2009. url: https://webstore.iec.ch/en/publication/7029 (visited on 05/08/2026)

  41. [42]

IEC 62443-2-1:2024 Security for industrial automation and control systems – Part 2-1: Security program requirements for IACS asset owners

International Electrotechnical Commission. IEC 62443-2-1:2024 Security for industrial automation and control systems – Part 2-1: Security program requirements for IACS asset owners. 2024. url: https://webstore.iec.ch/en/publication/62883 (visited on 05/08/2026). [36] GB/T 25069–2022, information security terminology. Mar. 9, 2022. url: https://openstd.samr.gov.cn/bzgk/s...

  42. [43]

Where AI Assurance Might Go Wrong: Initial Lessons from Engineering of Critical Systems

Robin Bloomfield and John Rushby. “Where AI Assurance Might Go Wrong: Initial Lessons from Engineering of Critical Systems”. In: Proceedings of UK AI Safety Institute Conference on Frontier AI Safety Frameworks (FAISC 24). Berkeley, CA, Nov. 2024. doi: 10.48550/arXiv.2502.03467. url: https://www.csl.sri.com/users/rushby/papers/faisc24.pdf (visited o...

  43. [44]

An Assurance Framework for Independent Co-assurance of Safety and Security

Nikita Johnson and Tim Kelly. “An Assurance Framework for Independent Co-assurance of Safety and Security”. In: Journal of System Safety 54.3 (2018), pp. 32–

  44. [45]

    url: https://jsystemsafety.com/index

doi: 10.56094/jss.v54i3.62. url: https://jsystemsafety.com/index.php/jss/article/view/62 (visited on 05/08/2026)

  45. [46]

    Security-Informed Safety: If It’s Not Secure, It’s Not Safe

    Robin Bloomfield, Kateryna Netkachova, and Robert Stroud. “Security-Informed Safety: If It’s Not Secure, It’s Not Safe” . In: Software Engineering for Resilient Systems. Vol. 8166. Lecture Notes in Computer Science. Springer, 2013, pp. 17–

  46. [47]

    url: https://openaccess.city.ac

doi: 10.1007/978-3-642-40894-6_2. url: https://openaccess.city.ac.uk/3097/ (visited on 05/08/2026)

  47. [48]

    An Integrated Approach to Safety and Security Based on Systems Theory

    William Young and Nancy G. Leveson. “An Integrated Approach to Safety and Security Based on Systems Theory” . In: Communications of the ACM 57.2 (2014), pp. 31–35. doi: 10.1145/2556938

  48. [49]

    FMVEA for Safety and Security Analysis of Intelligent and Cooperative Vehicles

    Christoph Schmittner, Zhendong Ma, and Paul Smith. “FMVEA for Safety and Security Analysis of Intelligent and Cooperative Vehicles” . In: Computer Safety, Reliability, and Security . Vol. 8696. Lecture Notes in Computer Science. Springer, 2014, pp. 282–288. doi: 10.1007/978-3-319-10557-4_31

  49. [50]

    A Case Study of FMVEA and CHASSIS as Safety and Security Co-analysis Method for Automotive Cyber-physical Systems

    Christoph Schmittner et al. “A Case Study of FMVEA and CHASSIS as Safety and Security Co-analysis Method for Automotive Cyber-physical Systems” . In: Proceedings of the 1st ACM Workshop on Cyber-Physical System Security . ACM, 2015, pp. 69–80. doi: 10.1145/2732198.2732204

  50. [51]

    Safety Assurance of Machine Learning for Autonomous Systems

    Colin Paterson et al. “Safety Assurance of Machine Learning for Autonomous Systems” . In: Reliability Engineering & System Safety 264 (2025), p. 111311. doi: 10.1016/j.ress.2025.111311

  51. [52]

    Reliability Assessment and Safety Arguments for Machine Learning Components in System Assurance

Yi Dong et al. “Reliability Assessment and Safety Arguments for Machine Learning Components in System Assurance”. In: ACM Transactions on Embedded Computing Systems 22.3, 48 (2023), pp. 1–48. doi: 10.1145/3570918

  52. [53]

Assurance of AI Systems from a Dependability Perspective

Robin Bloomfield and John Rushby. Assurance of AI Systems from a Dependability Perspective. CSL Technical Report SRI-CSL-2024-02R3. SRI International Computer Science Laboratory, June 3, 2025. doi: 10.48550/arXiv.2407.13948. url: https://www.csl.sri.com/~rushby/papers/aisafety24.pdf (visited on 05/08/2026)

  53. [54]

    AI Assurance Needs a Systems Engineering Approach

Robin Bloomfield and John Rushby. “AI Assurance Needs a Systems Engineering Approach”. In: ASSURE 2025, Proceedings of IEEE 36th International Symposium on Software Reliability Engineering Workshops (ISSREW). Sao Paulo, Brazil, Oct. 2025, pp. 157–158. doi: 10.1109/ISSREW67781.2025.00065. url: https://www.csl.sri.com/~rushby/papers/assure25.pdf (visited ...

  54. [55]

    Artificial Intelligence Risk Management Framework (AI RMF 1.0)

Elham Tabassi. Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST Trustworthy and Responsible AI NIST AI 100-1. National Institute of Standards and Technology, Jan. 26, 2023. doi: 10.6028/NIST.AI.100-1. url: https://www.nist.gov/publications/artificial-intelligence-risk-management-framework-ai-rmf-10 (visit...

  55. [56]

ISO/IEC 23894:2023 Information technology – Artificial intelligence – Guidance on risk management

International Organization for Standardization and International Electrotechnical Commission. ISO/IEC 23894:2023 Information technology – Artificial intelligence – Guidance on risk management. Feb. 2023. url: https://www.iso.org/standard/77304.html (visited on 05/08/2026)

  56. [57]

    ISO/IEC 42001:2023 Information technology – Artificial intelligence – Management system

International Organization for Standardization and International Electrotechnical Commission. ISO/IEC 42001:2023 Information technology – Artificial intelligence – Management system. Dec. 2023. url: https://www.iso.org/standard/81230.html (visited on 05/08/2026)

  57. [58]

    Models are Central to AI Assurance

Robin Bloomfield and John Rushby. “Models are Central to AI Assurance”. In: ASSURE 2024, Proceedings of IEEE 35th International Symposium on Software Reliability Engineering Workshops (ISSREW). Tsukuba, Japan, Oct. 2024, pp. 199–

  58. [59]

url: https://www.csl.sri.com/users/rushby/papers/assure24.pdf (visited on 05/08/2026)

doi: 10.1109/ISSREW63542.2024.00078. url: https://www.csl.sri.com/users/rushby/papers/assure24.pdf (visited on 05/08/2026)

  59. [60]

    Assurance 2.0: A Manifesto

Robin Bloomfield and John Rushby. “Assurance 2.0: A Manifesto”. In: Systems and Covid-19: Proceedings of the 29th Safety-Critical Systems Symposium (SSS’21). Ed. by Mike Parsons and Mark Nicholson. Safety-Critical Systems Club. York, UK, Feb. 2021, pp. 85–108. doi: 10.48550/arXiv.2004.10474. url: https://www.csl.sri.com/~rushby/pap...

  60. [61]

Confidence in Assurance 2.0 Cases

Robin Bloomfield and John Rushby. “Confidence in Assurance 2.0 Cases”. In: The Practice of Formal Methods: Essays in Honour of Cliff Jones, Part I. Ed. by Ana Cavalcanti and James Baxter. Vol. 14780. Lecture Notes in Computer Science. Springer, 2024, pp. 1–23. doi: 10.1007/978-3-031-66676-6_1. url: https://www.csl.sri....

  61. [62]

    Quantifying Confidence in Assurance 2.0 Arguments

Robin Bloomfield and John Rushby. Quantifying Confidence in Assurance 2.0 Arguments. 2026. doi: 10.48550/arXiv.2604.00034. arXiv: 2604.00034. url: https://arxiv.org/abs/2604.00034 (visited on 05/08/2026)

  62. [63]

    Assurance 2.0

John Rushby and Robin Bloomfield. Assurance 2.0. SRI International Computer Science Laboratory. url: https://www.csl.sri.com/users/rushby/assurance2.0 (visited on 05/08/2026)

  63. [64]

    Security Informed Safety

National Protective Security Authority. Security Informed Safety. url: https://www.npsa.gov.uk/security-best-practices/build-it-secure/security-informed-safety (visited on 05/08/2026)

  64. [65]

    From Antagonisms to Synergies: A Systematic Review of Safety-Security Interrelations

Verena Zimmermann et al. “From Antagonisms to Synergies: A Systematic Review of Safety-Security Interrelations”. In: International Journal of Critical Infrastructure Protection 51 (2025), p. 100808. doi: 10.1016/j.ijcip.2025.100808. url: https://www.sciencedirect.com/science/article/pii/S1874548225000691 (visited on 05/08/2026). [62] Nov. 7, 2016. u...