pith. machine review for the scientific record.

arxiv: 2605.07814 · v1 · submitted 2026-05-08 · 💻 cs.CR · cs.SE

Recognition: 2 theorem links · Lean Theorem

Can I Check What I Designed? Mapping Security Design DSLs to Code Analyzers

Authors on Pith: no claims yet

Pith reviewed 2026-05-11 02:48 UTC · model grok-4.3

classification 💻 cs.CR cs.SE
keywords: security DSLs · code analyzers · vulnerability checks · security design · abstraction gap · empirical mapping · SecLan model · expert validation

The pith

Security design languages connect to code analyzers through surprisingly few shared concepts.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper examines how security designs created with domain-specific languages relate to the security checks available in automated code analysis tools. Through a review of 66 such languages and 559 checks from 36 analyzers, it finds limited overlap in their security concepts. Because analyzer checks are often worded in terms of very general weaknesses, their links to design elements are ambiguous. Input from security experts confirms that navigating these connections is complex and challenging. The resulting model and analysis give a clearer picture of why secure design and secure code remain hard to align.

Core claim

The authors establish that design-level security DSLs and implementation-level code analyzers have few commonalities in their security concepts. Checks in analyzers tend to be specified using very general weakness descriptions, which in turn generate a large number of non-obvious potential relationships with design DSL elements. Even security experts find this mapping difficult to handle due to the resulting complexity. By building the SecLan model from shared concepts identified across 66 DSLs and 559 checks from 36 analyzers, and validating it with experts, the work supplies an empirical basis for understanding the abstraction gap between design and implementation.

What carries the argument

The SecLan model, which captures the security concepts common to both design DSLs and code analyzers and serves as the foundation for mapping and analyzing their relationships.
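The mapping idea the SecLan model enables can be sketched as a small concept-overlap computation. Everything below is a hypothetical illustration, not the paper's artifacts: element names, check names, and concept tags are invented. The point it shows is structural — design elements and analyzer checks relate only indirectly, through whatever shared concepts they carry, so broadly tagged checks fan out into many candidate links.

```python
# Hypothetical sketch of SecLan-style mapping. Design-DSL elements and
# analyzer checks are each annotated with shared security concepts; a
# "potential relationship" is any pair with at least one concept in common.
# All names below are invented for illustration.

# Design-DSL elements and the concepts they express.
dsl_elements = {
    "SecDFD.EncryptedFlow": {"confidentiality", "data-flow"},
    "SecDFD.AccessGuard": {"authorization"},
    "UMLsec.SecureLinks": {"confidentiality", "integrity"},
}

# Analyzer checks, tagged with the (often broad) concepts their
# weakness descriptions mention.
analyzer_checks = {
    "sonar:hardcoded-credentials": {"authentication", "confidentiality"},
    "flowdroid:taint-to-log": {"confidentiality", "data-flow"},
    "codeql:missing-permission-check": {"authorization"},
}

def potential_relationships(elements, checks):
    """Every (element, check) pair that shares at least one concept."""
    pairs = []
    for elem, econcepts in elements.items():
        for check, cconcepts in checks.items():
            shared = econcepts & cconcepts
            if shared:
                pairs.append((elem, check, sorted(shared)))
    return pairs

rels = potential_relationships(dsl_elements, analyzer_checks)
for elem, check, shared in rels:
    print(f"{elem} <-> {check} via {shared}")
```

Even in this toy setup, the broad "confidentiality" tag makes one design element match several checks; scaled to 66 DSLs and 559 checks, that fan-out is what produces the many non-obvious candidate links the paper reports.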

If this is right

  • Designers can better evaluate how implementation vulnerabilities impact their high-level security specifications using the identified mappings.
  • Analyzer developers may improve the specificity of their checks to reduce ambiguity in relating them to designs.
  • Researchers gain a starting point for developing automated tools that bridge the design-implementation security gap.
  • Practitioners receive guidance on which design elements are likely covered by existing code checks and which are not.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Integrated environments that allow checking code against design models directly could reduce reliance on manual mapping.
  • The general weakness descriptions may reflect a broader issue in how security standards are written, suggesting a need for more precise taxonomies.
  • Extending this analysis to emerging DSLs for cloud or AI systems could reveal if the gap persists in new domains.
  • The findings imply that security assurance requires coordinated advances in both design languages and analysis techniques rather than isolated improvements.

Load-bearing premise

The chosen collection of 66 DSLs and 36 analyzers, together with the assessments from the consulted security experts, represents the full range of security design and analysis practices.

What would settle it

Finding a substantially larger number of direct, obvious correspondences between specific design DSL concepts and precise analyzer checks in an expanded study would indicate that the commonalities are greater than reported.

Figures

Figures reproduced from arXiv: 2605.07814 by Frederik Reiche, Kevin Hermann, Robert Heinrich, Sophie Corallo, Sven Peldszus, Thorsten Berger.

Figure 1: Overview of potential relationships between security design and implementation
Figure 2: Excerpt of a design model with annotations from the SecDFD security DSL, showing the editing of a …
Figure 3: SonarQube check results on iTrust
Figure 4: Visualization of the methodology
Figure 5: Overview of the survey and interview questions for validation of the SecLan model and exploration of …
Figure 6: The SecLan model shows the common security concepts and system elements related to security DSLs
Figure 7: Example for an instance of the security model based on the CIA, STRIDE, and exemplary security …
Figure 8: System element types relevant in the context of security
Figure 9: Application of the SecLan model to SecDFD (associations with elements from Figs. …)
Figure 10: Application of the SecLan model to security analyzers
Figure 11: Relationship between SecDFD and FlowDroid
Figure 12: Screenshot showing an excerpt from the description of SecDFD and an explorable view of identified …
Figure 13: Number of paths between security aspects and checks with a given path length
Figure 14: Lengths of paths vs. number of paths
Figure 15: The length of the shortest path at which an element from the SecLan model appears on the path
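Figures 13 to 15 summarize how many aspect-to-check paths exist at each path length in the SecLan model. The kind of count behind such a histogram can be sketched on a small, made-up concept graph; all node names below are illustrative assumptions, not the paper's data.

```python
# Count simple paths from a security aspect to any analyzer check in a
# tiny, invented concept graph, bucketed by path length. This mirrors the
# shape of the analysis in Figs. 13-15, not the paper's actual graph.
from collections import defaultdict

edges = {
    "aspect:confidentiality": ["concept:data", "concept:channel"],
    "concept:data": ["check:taint-analysis"],
    "concept:channel": ["concept:crypto", "check:tls-misuse"],
    "concept:crypto": ["check:weak-cipher"],
}

def paths_by_length(graph, source, is_target, max_len=5):
    """Histogram of simple-path lengths from `source` to target nodes."""
    counts = defaultdict(int)

    def dfs(node, length, visited):
        if is_target(node):
            counts[length] += 1
            return
        if length == max_len:
            return
        for nxt in graph.get(node, []):
            if nxt not in visited:
                dfs(nxt, length + 1, visited | {nxt})

    dfs(source, 0, {source})
    return dict(counts)

hist = paths_by_length(edges, "aspect:confidentiality",
                       lambda n: n.startswith("check:"))
print(hist)  # two paths of length 2, one of length 3
```

Longer and more numerous paths correspond to weaker, harder-to-audit links between a design-level aspect and a concrete check, which is the reading the figures support.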
Original abstract

When assessing the potential impact of code-level vulnerabilities, e.g., discovered by automated analyzers, it is essential to consider them in the context of the system's security design. However, this is a challenging task due to the abstraction gap between security design, often specified using security DSLs, and implementation. As we will show, even security experts lack a complete understanding of this relationship. Intrigued by this gap (and the general disconnect between secure design and secure implementation) we present a study of 66 design-level security DSLs and 559 security checks from 36 code-level analyzers. We identify what concepts are common to both and capture them in the SecLan model, which has been validated by 22 security experts. Based on this, we investigate the relationship between DSLs and analyzers quantitatively and explore it qualitatively together with 9 security experts. We learn that there are few commonalities between design-level and implementation-level security; security checks are often described by overly general weaknesses, resulting in many non-obvious potential relationships between security DSLs and analyzers; and even security experts are overwhelmed by this complexity. We provide an empirical basis that helps practitioners and researchers better understand the gap and serves as a first step toward bridging it.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript reports on an empirical study involving 66 security design DSLs and 559 security checks from 36 code-level analyzers. It derives a SecLan model of common concepts validated by 22 security experts, then quantitatively maps the DSLs to analyzer checks and qualitatively explores the relationships with 9 experts. The key findings are that there are few commonalities between design-level and implementation-level security, that security checks are often described using overly general weaknesses leading to many non-obvious potential relationships, and that even security experts are overwhelmed by this complexity. The work positions itself as providing an empirical basis to understand the gap between secure design and secure implementation.

Significance. Should the central findings prove robust, this paper makes a meaningful contribution by systematically documenting the disconnect between security design languages and code analysis tools. The SecLan model provides a concrete artifact for future research, and the observation that general weakness descriptions create mapping ambiguities offers a clear target for improving analyzer precision or design DSL expressiveness. The expert involvement lends practical weight to the conclusions, potentially influencing how practitioners approach security from design to implementation.

major comments (2)
  1. [§3] §3 (Data Collection): The selection criteria, search protocol, and inclusion/exclusion rules for the 66 DSLs and 36 analyzers are not documented with sufficient detail or reproducibility (e.g., no explicit coverage of GitHub, academic venues, or commercial tools). This is load-bearing for the central claim of 'few commonalities' because the observed sparsity in mappings and 'many non-obvious potential relationships' could be an artifact of convenience sampling rather than representative of the field.
  2. [§5] §5 (Quantitative Mapping): The reported overlap statistics and classification of relationships as 'non-obvious' lack sensitivity analysis to alternative interpretations of 'commonality' or to different expert mappings; without this, the quantitative support for the headline findings remains vulnerable to post-hoc interpretation.
minor comments (2)
  1. [Abstract] Abstract: The forward reference 'as we will show' should be replaced with a direct statement of results to improve standalone readability.
  2. [SecLan model] SecLan model section: Provide a brief appendix or table listing the exact concepts extracted from the 66 DSLs to make the derivation more transparent.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the thoughtful and constructive comments on our manuscript. We address each of the major comments below and outline the revisions we will make to improve the clarity, reproducibility, and robustness of our study.

Point-by-point responses
  1. Referee: [§3] §3 (Data Collection): The selection criteria, search protocol, and inclusion/exclusion rules for the 66 DSLs and 36 analyzers are not documented with sufficient detail or reproducibility (e.g., no explicit coverage of GitHub, academic venues, or commercial tools). This is load-bearing for the central claim of 'few commonalities' because the observed sparsity in mappings and 'many non-obvious potential relationships' could be an artifact of convenience sampling rather than representative of the field.

    Authors: We agree that the documentation of our data collection process requires more detail to ensure reproducibility and to allow assessment of potential sampling biases. In the revised version of the manuscript, we will expand §3 to include a comprehensive description of the search protocol, including the specific sources searched (academic venues, GitHub, commercial tool repositories), the search terms used, and the explicit inclusion and exclusion criteria applied to select the 66 security design DSLs and 36 code analyzers. We will also provide a table or list summarizing the selected items and the rationale for inclusion where relevant. This revision will strengthen the foundation for our claims regarding the limited commonalities observed. revision: yes

  2. Referee: [§5] §5 (Quantitative Mapping): The reported overlap statistics and classification of relationships as 'non-obvious' lack sensitivity analysis to alternative interpretations of 'commonality' or to different expert mappings; without this, the quantitative support for the headline findings remains vulnerable to post-hoc interpretation.

    Authors: We acknowledge the value of sensitivity analysis for validating the quantitative results. In the revised manuscript, we will add a sensitivity analysis subsection in §5. This will include testing the overlap statistics under alternative interpretations of commonality (such as requiring exact concept matches versus semantic similarity) and examining how the classification of 'non-obvious' relationships varies with different groupings or subsets of the expert mappings from the 9 experts consulted. We will report the range of outcomes to demonstrate the robustness of our headline findings on the sparsity of mappings and the challenges posed by overly general weakness descriptions. revision: yes
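The promised sensitivity analysis could be approximated by recomputing the DSL/analyzer overlap under two notions of commonality: exact concept equality versus loose string similarity. A minimal sketch follows, assuming invented concept names and an arbitrary 0.6 similarity threshold, neither of which comes from the paper.

```python
# Compare concept overlap under strict equality vs. fuzzy string matching.
# Concept sets and the 0.6 threshold are illustrative assumptions only.
from difflib import SequenceMatcher

dsl_concepts = {"authorization", "data confidentiality", "integrity"}
check_concepts = {"access control", "confidentiality of data", "input validation"}

def overlap(a, b, matcher):
    """Fraction of the combined vocabulary with a cross-set match."""
    matched = {x for x in a if any(matcher(x, y) for y in b)}
    return len(matched) / len(a | b)

exact = overlap(dsl_concepts, check_concepts, lambda x, y: x == y)

def similar(x, y, threshold=0.6):
    # SequenceMatcher.ratio() is 2*M / (len(x) + len(y)) over matched chars.
    return SequenceMatcher(None, x, y).ratio() >= threshold

fuzzy = overlap(dsl_concepts, check_concepts, similar)
print(f"exact-match overlap: {exact:.2f}, fuzzy-match overlap: {fuzzy:.2f}")
```

Reporting the headline overlap statistics as a range across such matcher choices, rather than as a single number, is one concrete way the revised §5 could demonstrate robustness.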

Circularity Check

0 steps flagged

No circularity: empirical mapping of external DSLs and analyzers

full rationale

The paper performs a survey-based empirical study: it collects 66 existing security design DSLs and 559 checks from 36 code analyzers, extracts common concepts into the SecLan model, validates the model with 22 experts, and then performs quantitative mapping plus qualitative exploration with 9 experts. No derivation chain, equations, fitted parameters, or predictions are present that could reduce to the authors' own inputs or self-citations. The central claims rest on external artifacts and independent expert validation rather than self-referential definitions or load-bearing prior work by the same authors. Selection bias concerns affect generalizability but do not create circularity in the reported analysis.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 1 invented entity

The central claims rest on the representativeness of the chosen DSLs/analyzers and the reliability of expert judgments for validation and qualitative insights. SecLan is introduced as an organizing model rather than a predictive entity.

axioms (2)
  • domain assumption The 66 design-level security DSLs and 36 code-level analyzers (yielding 559 checks) are representative of the broader security design and analysis landscape.
    Findings on commonalities and complexity are generalized from this sample without explicit justification of coverage or sampling method.
  • domain assumption Validation and exploration by 22 and 9 security experts sufficiently captures the accuracy of the SecLan model and the nature of DSL-analyzer relationships.
    Expert input is used to validate the model and explore qualitative aspects; no independent verification of expert consensus is described.
invented entities (1)
  • SecLan model (no independent evidence)
    purpose: To capture and organize common security concepts shared between design DSLs and code analyzers.
    Serves as the central artifact for mapping and validation; validated by 22 experts but does not introduce falsifiable predictions beyond the study itself.

pith-pipeline@v0.9.0 · 5535 in / 1509 out tokens · 64079 ms · 2026-05-11T02:48:30.912050+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.


Reference graph

Works this paper leans on

219 extracted references · 219 canonical work pages
