pith. machine review for the scientific record.

arxiv: 2605.13100 · v1 · submitted 2026-05-13 · 💻 cs.CR · cs.SE

Recognition: no theorem link

Security Incentivization: An Empirical Study of How Micropayments Impact Code Security

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 19:02 UTC · model grok-4.3

classification 💻 cs.CR cs.SE
keywords security incentives · code security · static analysis · empirical study · issue density · student teams · beta regression · micropayments

The pith

Tying team bonuses to improvements in security scanner results reduces code issue density.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Security work often loses out because it does not produce visible short-term gains. This study tests whether direct incentives can shift that balance by rewarding teams for measurable progress on automated security checks. In a controlled experiment, 84 students in 14 teams worked on projects under two conditions that differed only in whether bonus points were linked to better scanner scores over successive sprints. The group receiving the incentives finished with reliably lower security issue density than the control group. The difference held after accounting for code volume and appeared stronger in back-end components than in front-end ones.

Core claim

Teams that received bonus points scaled to their relative reduction in security issue density across sprints produced code with significantly lower issue density overall. Beta regression gave a coefficient of β = -0.396 with p = 0.0342. The improvement was larger for back-end code than for front-end code, and the effect was not explained by simply writing more lines of code, since both groups' codebases grew at similar rates. The measurement relied on a repeatable pipeline that aggregated results from three static analysis tools and computed issue density per sprint.
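The reported effect comes from a logit-link beta regression. A minimal sketch of that machinery on synthetic data follows; the sample size, true coefficients, and precision parameter are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def fit_beta_regression(X, y):
    """Maximum-likelihood logit-link beta regression.

    Model: y_i ~ Beta(mu_i * phi, (1 - mu_i) * phi), with logit(mu_i) = X_i @ b.
    Returns the coefficient vector b and the precision phi.
    """
    k = X.shape[1]

    def negloglik(params):
        b, phi = params[:k], np.exp(params[k])  # log-parameterize so phi > 0
        mu = expit(X @ b)
        a, c = mu * phi, (1.0 - mu) * phi
        # Beta log-density: note gammaln(a + c) = gammaln(phi).
        ll = (gammaln(phi) - gammaln(a) - gammaln(c)
              + (a - 1.0) * np.log(y) + (c - 1.0) * np.log1p(-y))
        return -np.sum(ll)

    res = minimize(negloglik, np.zeros(k + 1), method="Nelder-Mead",
                   options={"maxiter": 10000, "xatol": 1e-8, "fatol": 1e-8})
    return res.x[:k], np.exp(res.x[k])

# Synthetic experiment: intercept + treatment dummy, true treatment effect -0.396.
rng = np.random.default_rng(7)
n = 1000
treated = np.repeat([0.0, 1.0], n)
X = np.column_stack([np.ones(2 * n), treated])
mu = expit(X @ np.array([-1.0, -0.396]))
y = rng.beta(mu * 60.0, (1.0 - mu) * 60.0)  # densities as proportions in (0, 1)

b_hat, phi_hat = fit_beta_regression(X, y)
```

With enough data the fitted treatment coefficient lands near the true -0.396; a p-value like the paper's would additionally require standard errors from the Hessian, which this sketch omits.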

What carries the argument

Security issue density, calculated as the number of static-analysis findings divided by lines of code, together with the relative improvement ratio between consecutive sprints that determines the incentive payout.
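In code, the two load-bearing quantities are simple. The payout rule at the end is an illustrative assumption, since the pith does not spell out the paper's exact bonus formula.

```python
def issue_density(findings: int, loc: int) -> float:
    """Security issue density: static-analysis findings per line of code."""
    return findings / loc if loc > 0 else 0.0

def improvement_ratio(prev_density: float, curr_density: float) -> float:
    """Relative reduction in density between consecutive sprints (positive = better)."""
    if prev_density <= 0.0:
        return 0.0
    return (prev_density - curr_density) / prev_density

def bonus_points(ratio: float, max_points: float = 5.0) -> float:
    """Hypothetical payout rule: bonus scales with improvement, floored at zero."""
    return max(0.0, min(1.0, ratio)) * max_points

# Sprint 1: 42 findings over 12,000 LOC; sprint 2: 30 findings over 15,000 LOC.
d1 = issue_density(42, 12_000)
d2 = issue_density(30, 15_000)
ratio = improvement_ratio(d1, d2)  # density fell even though the codebase grew
bonus = bonus_points(ratio)
```

Because the denominator is lines of code, a team cannot earn the bonus by inflating code volume alone; density only falls if findings shrink relative to size.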

If this is right

  • Security incentives can be automated and scaled using existing static-analysis tools without manual audits.
  • Incentive effects vary by code layer, so back-end and front-end components may need separate targets.
  • The gains are not an artifact of increased code volume, preserving the meaning of the density metric.
  • The same pipeline supports repeated measurement and reporting across many teams or projects.
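A repeatable reporting loop of that shape can be sketched as follows; the tool names match the paper, but the report structure and field names are assumptions, since each scanner emits its own output format in practice.

```python
from dataclasses import dataclass, field

@dataclass
class SprintScan:
    team: str
    sprint: int
    loc: int
    # Findings per tool, e.g. {"bearer": 12, "detekt": 7, "mobsfscan": 3}.
    findings: dict = field(default_factory=dict)

    @property
    def density(self) -> float:
        return sum(self.findings.values()) / self.loc if self.loc > 0 else 0.0

def density_report(scans):
    """Per-team, sprint-ordered issue densities, ready for scripted reporting."""
    report = {}
    for scan in sorted(scans, key=lambda s: (s.team, s.sprint)):
        report.setdefault(scan.team, []).append((scan.sprint, round(scan.density, 5)))
    return report

scans = [
    SprintScan("team-a", 1, 12_000, {"bearer": 30, "detekt": 9, "mobsfscan": 3}),
    SprintScan("team-a", 2, 15_000, {"bearer": 21, "detekt": 7, "mobsfscan": 2}),
]
report = density_report(scans)
```

Running this over every team each sprint yields the kind of scriptable, repeatable measurement the paper describes, with no manual audit step in the loop.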

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Professional teams might adopt similar structures if the metrics are adjusted for their tooling and risk profile.
  • Longer projects could test whether the security gains continue after the incentive period ends.
  • Pairing security density with other quality metrics could prevent teams from neglecting non-security concerns.

Load-bearing premise

That the chosen static analysis tools detect security problems that actually matter, and that student behavior under incentives resembles what professional teams would do.

What would settle it

A replication study with professional developers that applies the same incentive structure and finds no reduction in security issues or real-world vulnerabilities would show the result does not hold.

Figures

Figures reproduced from arXiv: 2605.13100 by Alexander Lercher, Christoph Wedenig, Fabian Oraze, Georg Sengstbratl, Johann Glock, Martin Pinzger, Rainer W. Alexandrowicz, Stefan Rass.

Figure 1. Security Incentivization Procedure.
Figure 2. Security improvement cycle with automated scanning and developer feedback.
Figure 3. Course synopsis provided to the security-incentivized group (SEC).
Figure 5. LOC by group, what, and sprint. Notes: B = back-end, F = front-end; CON = control.
Figure 6. Security issues by group, what, and sprint. Notes: B = back-end, F = front-end; CON = control.
Figure 7. Security issue density by group and sprint. Notes: B = back-end, F = front-end; CON = control.
Figure 8. ∆Q by group and layer (back-end, front-end).
Original abstract

Security often receives insufficient developer attention because it does not directly generate visible value, leading to underinvestment in practice. We evaluate a countermeasure by team-level incentives tied to measurable security improvements over time. Our semi-automated mechanism aggregates static analysis findings from Bearer, Detekt, and mobsfscan, computes security issue density, and rewards teams based on the relative improvement ratio across sprints, enabling repeatable, scriptable reporting at scale. In a controlled course experiment with 84 students across 14 teams, we compared a security-incentivized condition, in which bonus points were linked to security scanner results, against a control condition with an otherwise identical grading scheme. The treatment group achieved significantly lower security issue density overall (beta regression: $\beta = -0.396, p = 0.0342$), indicating improved measurable security under incentivization. After controlling for platform, we observed a marked front-end/back-end disparity, with back-ends showing fewer issues and higher improvement ratios under incentives, highlighting heterogeneous effects across stack layers. Notably, these gains were not the byproduct of inflated code volume, as lines of code increased similarly across groups over time. The measurement pipeline and toolchain proved feasible for scripting and automation, supporting scalable adoption in practice. Our results suggest that aligning rewards with automated security metrics can measurably improve code security and merit follow-up in professional contexts and longer development lifecycles.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper claims that team-level incentives based on reductions in security issue density (aggregated from Bearer, Detekt, and mobsfscan static analysis outputs) produce significantly lower issue density in a controlled student experiment (beta regression β = -0.396, p = 0.0342), with heterogeneous effects by stack layer and no confounding increase in code volume; the automated pipeline is presented as scalable for practice.

Significance. If the proxy validity holds, the result would demonstrate that aligning rewards with automated security metrics can drive measurable improvements, offering a practical, scriptable approach to address underinvestment in security with potential for professional adoption and longer-term studies.

major comments (2)
  1. [Results (beta regression) and Methods (issue density computation)] The central claim that incentives improve 'measurable security' rests on the beta regression result, but the manuscript provides no validation (e.g., manual review of flagged issues or mapping to CVEs) that reductions in scanner output reflect genuine risk reduction rather than avoidance of detectable patterns; this is load-bearing because the experiment directly incentivizes the proxy metric itself.
  2. [Experimental design and Results] The experiment uses a student sample (84 participants, 14 teams) without reported controls or discussion of how team differences, prior experience, or motivation might confound the treatment effect; this limits the strength of the causal inference for the reported β coefficient.
minor comments (1)
  1. [Methods] The exact formula for security issue density, any data exclusions, and full regression controls are not detailed in the provided abstract or summary; adding these explicitly would improve reproducibility.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback. We address each major comment below, clarifying the scope of our proxy-based study and proposing targeted revisions to the manuscript.

Point-by-point responses
  1. Referee: The central claim that incentives improve 'measurable security' rests on the beta regression result, but the manuscript provides no validation (e.g., manual review of flagged issues or mapping to CVEs) that reductions in scanner output reflect genuine risk reduction rather than avoidance of detectable patterns; this is load-bearing because the experiment directly incentivizes the proxy metric itself.

    Authors: We agree that the static analysis outputs from Bearer, Detekt, and mobsfscan constitute a proxy rather than a direct measure of security risk. The experiment was designed to test whether team-level incentives tied to this automated, scalable metric produce measurable reductions in issue density; it does not claim to demonstrate reductions in actual vulnerabilities or CVEs. No manual review or CVE mapping was performed, as the study scope was limited to a single-semester course experiment with 84 students. We will revise the abstract, results, and discussion sections to explicitly frame all claims around 'scanner-detected issue density' and add a limitations paragraph acknowledging the proxy nature and the risk of metric gaming. This revision clarifies the contribution without overstating generalizability to unmeasured risk. revision: partial

  2. Referee: The experiment uses a student sample (84 participants, 14 teams) without reported controls or discussion of how team differences, prior experience, or motivation might confound the treatment effect; this limits the strength of the causal inference for the reported β coefficient.

    Authors: Teams were randomly assigned to the incentivized and control conditions within the same course to reduce selection effects, and the beta regression controlled for platform (front-end vs. back-end) as reported. Individual-level covariates for prior security experience or baseline motivation were not collected, which is a genuine limitation of the student sample and restricts causal strength. We will expand the methods and limitations sections to describe the randomization procedure, note the absence of experience/motivation controls, and explicitly qualify the β = -0.396 result as preliminary evidence from a controlled educational setting that warrants replication in professional teams. revision: partial

Circularity Check

0 steps flagged

No significant circularity: purely empirical regression on observed scanner data

full rationale

The paper reports a controlled experiment with 84 students in 14 teams, using external static analysis tools (Bearer, Detekt, mobsfscan) to compute security issue density, then applies beta regression to compare treatment and control groups. The key result (β = -0.396, p = 0.0342) is a direct statistical output from the collected experimental measurements and does not reduce to any fitted parameter by construction, self-definition, or self-citation chain. No mathematical derivations, uniqueness theorems, or ansatzes are present; the analysis remains self-contained against the external tool outputs and the randomized team assignment.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 0 invented entities

The study is empirical and relies on standard statistical assumptions for beta regression rather than new theoretical constructs or invented entities.

axioms (1)
  • standard math: Beta regression is appropriate for modeling bounded proportion data such as security issue density; issue density is treated as a proportion between 0 and 1.

pith-pipeline@v0.9.0 · 5582 in / 1083 out tokens · 41215 ms · 2026-05-14T19:02:38.224927+00:00 · methodology

discussion (0)

