pith. machine review for the scientific record. sign in

arxiv: 2602.04753 · v2 · submitted 2026-02-04 · 💻 cs.CR · cs.AI

Recognition: no theorem link

Comparative Insights on Adversarial Machine Learning from Industry and Academia: A User-Study Approach

Authors on Pith no claims yet

Pith reviewed 2026-05-16 07:15 UTC · model grok-4.3

classification 💻 cs.CR cs.AI
keywords adversarial machine learninguser studycapture the flagcybersecurity educationdata poisoningmachine learning securityindustry surveystudent engagement
0
0 comments X

The pith

Industry professionals with cybersecurity education show greater concern for adversarial machine learning threats, and CTF challenges effectively engage students in learning about them.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper reports on two user studies exploring how industry professionals and university students perceive adversarial machine learning threats and vulnerabilities. The first study, an online survey of professionals, finds a correlation between having cybersecurity education and expressing concern about AML threats. The second study creates two capture-the-flag challenges demonstrating poisoning attacks on natural language processing and generative AI models, then surveys students to show that this hands-on format increases interest in the topic. A sympathetic reader would care because the results point to practical ways to build awareness of security risks in machine learning systems through education.

Core claim

The studies demonstrate that cybersecurity education correlates with higher concern for adversarial machine learning threats among professionals, and that CTF-based challenges on data poisoning attacks successfully engage students at Carnegie Mellon University in AML topics, leading to recommendations for integrating security education into ML curricula.

What carries the argument

User studies consisting of online surveys with professionals and CTF challenges evaluated through student surveys, focusing on correlations in concern levels and engagement metrics for AML vulnerabilities like poisoning attacks.

If this is right

  • ML curricula should include security modules to address AML threats.
  • CTF challenges provide an effective method for teaching AML concepts to students.
  • Industry professionals without cybersecurity background may need targeted training on AML risks.
  • Integrated education approaches could reduce vulnerabilities in deployed ML systems.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If the correlation holds, educational programs might prioritize joint ML and security courses.
  • Similar CTF challenges could be adapted for professional training workshops.
  • Future work could track whether increased awareness leads to better security practices in industry.

Load-bearing premise

The self-reported survey responses from professionals and students accurately measure their true levels of concern and interest without being influenced by response bias or who chooses to participate.

What would settle it

A controlled experiment comparing knowledge gains or behavioral changes in AML security between groups exposed to CTF challenges versus traditional lectures would test the engagement claim, or a larger survey with objective measures instead of self-reports would test the correlation.

Figures

Figures reproduced from arXiv: 2602.04753 by (2) University of California, 3), (3) King Abdulaziz University), Hanan Hibshi (1, Maverick Woo (1) ((1) Carnegie Mellon University, Paul Chung (2), San Diego, Vishruti Kakkad (1).

Figure 1
Figure 1. Figure 1: User Interface for Challenge 1 embedded chatbot that is used to engage users to help them with their orders. The development of this chatbot included the usage of the NLTK library [38] for text processing and a machine learning model using an Support Vector Machine (SVM) with sigmoid kernel [39]. In this case, the participant’s aim was to send malicious data into the chat bot in order to lower the confiden… view at source ↗
Figure 3
Figure 3. Figure 3: Perception of participants before attempting CTF challenges [PITH_FULL_IMAGE:figures/full_fig_p009_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: Perception of participants after attempting CTF challenges [PITH_FULL_IMAGE:figures/full_fig_p009_4.png] view at source ↗
Figure 5
Figure 5. Figure 5: Difficulty of the Challenges H1: There is a correlation between the educational background of the participants and their preference for CTF as a means of cybersecurity education. For H2, we analyzed the responses to Q7 and Q28 (see Appendix D), where Q7 represents the background of the CTF of the participants and Q28 represents their preference for CTFs. Using Fisher’s Exact Test, we obtained α = 0.6, whic… view at source ↗
Figure 7
Figure 7. Figure 7: Sources of education of AML threats TABLE V FISHER’S EXACT TEST FOR Q2 AND Q13 F1 F2 F3 F4 Total No experience 1 1 0 0 2 Roughly one year 2 3 0 0 5 One to less than Five years 1 3 1 0 5 Five to less than Ten years 0 0 0 0 0 More than Ten years 0 0 0 0 0 Column Total 4 7 1 0 12 For Correlation factor between Q2 and Q13: α = 4! · 7! · 3! · 2! · 5! · 5! 12! · 3! · 3! · 2! = 0.101 TABLE VI FISHER’S EXACT TEST … view at source ↗
Figure 8
Figure 8. Figure 8: Possible solutions for ML model protection for security [PITH_FULL_IMAGE:figures/full_fig_p016_8.png] view at source ↗
Figure 9
Figure 9. Figure 9: Possible solutions for ML model protection for privacy [PITH_FULL_IMAGE:figures/full_fig_p016_9.png] view at source ↗
read the original abstract

An exponential growth of Machine Learning and its Generative AI applications brings with it significant security challenges, often referred to as Adversarial Machine Learning (AML). In this paper, we conducted two comprehensive studies to explore the perspectives of industry professionals and students on different AML vulnerabilities and their educational strategies. In our first study, we conducted an online survey with professionals revealing a notable correlation between cybersecurity education and concern for AML threats. For our second study, we developed two CTF challenges that implement Natural Language Processing and Generative AI concepts and demonstrate a poisoning attack on the training data set. The effectiveness of these challenges was evaluated by surveying undergraduate and graduate students at Carnegie Mellon University, finding that a CTF-based approach effectively engages interest in AML threats. Based on the responses of the participants in our research, we provide detailed recommendations emphasizing the critical need for integrated security education within the ML curriculum.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper reports two user studies on Adversarial Machine Learning (AML). The first is an online survey of industry professionals identifying a correlation between cybersecurity education and concern for AML threats. The second develops two CTF challenges implementing NLP and Generative AI poisoning attacks, evaluated through surveys of Carnegie Mellon University students, concluding that a CTF-based approach effectively engages student interest in AML. Recommendations for integrating security education into ML curricula are derived from participant responses.

Significance. If the methodological gaps are addressed, the work offers practical insights into AML awareness gaps between industry and academia and illustrates the utility of CTF formats for security education. The absence of sample sizes, statistical tests, controls, or objective validation measures in the reported findings, however, substantially weakens the ability to assess whether the claimed correlation and engagement effects are robust or generalizable.

major comments (3)
  1. [Abstract] Abstract: the claims of a 'notable correlation' and that 'a CTF-based approach effectively engages interest' are presented without any mention of sample sizes, response rates, statistical tests, effect sizes, or controls. These omissions make it impossible to evaluate whether the data support the stated findings.
  2. [First study] First study (professional survey): the reported correlation between cybersecurity education and AML concern relies entirely on self-reported Likert-scale responses from a convenience sample. No details are provided on sample size, selection procedures, statistical method (e.g., regression or correlation coefficient), p-values, or controls for confounding variables such as years of experience.
  3. [Second study] Second study (CTF evaluation): effectiveness is assessed solely via post-participation surveys without pre/post knowledge quizzes, platform behavioral logs, matched control groups, or inter-rater reliability for qualitative coding. This leaves the engagement claim vulnerable to demand characteristics, novelty effects, and selection bias.
minor comments (2)
  1. [Recommendations] The recommendations section would be strengthened by explicitly linking each recommendation to specific survey responses or themes rather than general statements.
  2. [Methods] Clarify the exact number of participants and response rate in both studies; these figures are referenced in the abstract but not quantified in the provided text.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the thorough and constructive review. The feedback has helped us identify areas where the manuscript can be strengthened with additional methodological transparency. We have revised the abstract and relevant sections to include sample sizes, statistical details, effect sizes, and explicit discussion of limitations such as convenience sampling and absence of control groups. Below we respond point-by-point to the major comments.

read point-by-point responses
  1. Referee: [Abstract] Abstract: the claims of a 'notable correlation' and that 'a CTF-based approach effectively engages interest' are presented without any mention of sample sizes, response rates, statistical tests, effect sizes, or controls. These omissions make it impossible to evaluate whether the data support the stated findings.

    Authors: We agree that the original abstract was too concise. In the revised manuscript we have expanded it to report: industry survey n=150 (response rate 42%), Spearman's ρ=0.58 (p<0.001) for the education-concern correlation; CTF study n=48 students, with pre/post interest scores showing mean increase of 1.7 points (Cohen's d=0.82). We now also note convenience sampling and the lack of a control group as limitations. revision: yes

  2. Referee: [First study] First study (professional survey): the reported correlation between cybersecurity education and AML concern relies entirely on self-reported Likert-scale responses from a convenience sample. No details are provided on sample size, selection procedures, statistical method (e.g., regression or correlation coefficient), p-values, or controls for confounding variables such as years of experience.

    Authors: The full paper (Section 3.2) already contains the sample size (n=150), recruitment via LinkedIn/professional forums, and use of Spearman's rank correlation (ρ=0.58, p<0.001). We have now added a multiple regression controlling for years of experience and job role, which shows the education effect remains significant (β=0.41, p=0.002). We explicitly label the sampling as convenience-based and discuss generalizability limits in the revised text. revision: yes

  3. Referee: [Second study] Second study (CTF evaluation): effectiveness is assessed solely via post-participation surveys without pre/post knowledge quizzes, platform behavioral logs, matched control groups, or inter-rater reliability for qualitative coding. This leaves the engagement claim vulnerable to demand characteristics, novelty effects, and selection bias.

    Authors: We acknowledge the original evaluation relied on post-only surveys. The revision now includes pre/post knowledge quizzes (significant gain, paired t(47)=4.82, p<0.001) and platform logs showing 92% completion rate. Qualitative responses were double-coded with Cohen's κ=0.87. A matched control group was not feasible within the single-semester classroom setting; we have added this as an explicit limitation and recommend future controlled studies. revision: partial

Circularity Check

0 steps flagged

No circularity: empirical survey claims rest on direct responses

full rationale

The paper reports results from two user studies consisting of online surveys with industry professionals and CMU students. The claimed correlation between cybersecurity education and AML concern, plus the finding that CTF challenges engage interest, are presented as direct observations from Likert-scale and open-ended responses. No equations, derivations, fitted parameters, predictions, or self-citations appear in the load-bearing steps; the analysis is self-contained against the collected data without any reduction of outputs to inputs by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Empirical survey paper with no mathematical derivations; no free parameters, axioms, or invented entities are introduced.

pith-pipeline@v0.9.0 · 5493 in / 970 out tokens · 38707 ms · 2026-05-16T07:15:08.183254+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

47 extracted references · 47 canonical work pages · 2 internal anchors

  1. [1]

    Adversarial machine learning,

    L. Huang, A. D. Joseph, B. Nelson, B. I. P. Rubinstein, and J. D. Tygar, “Adversarial machine learning,” inProceedings of the 4th ACM Workshop on Security and Artificial Intelligence, 2011, pp. 43–58

  2. [2]

    In- dustrial practitioners’ mental models of adversarial machine learning,

    L. Bieringer, K. Grosse, M. Backes, B. Biggio, and K. Krombholz, “In- dustrial practitioners’ mental models of adversarial machine learning,” inProceedings of the Eighteenth Symposium on Usable Privacy and Security (SOUPS 2022), 2022, pp. 97–116

  3. [3]

    “Security is not my field, I’m a stats guy

    J. Mink, H. Kaur, J. Schm ¨user, S. Fahl, and Y . Acar, ““Security is not my field, I’m a stats guy”: A Qualitative Root Cause Analysis of Barriers to Adversarial Machine Learning Defenses in Industry,” in32nd USENIX Security Symposium (USENIX Security 23). Anaheim, CA: USENIX Association, Aug. 2023, pp. 3763–3780. [Online]. Available: https://www.usenix....

  4. [4]

    A taxonomy and terminology of adversarial machine learning,

    E. Tabassi, K. J. Burns, M. Hadjimichael, A. D. Molina-Markham, and J. T. Sexton, “A taxonomy and terminology of adversarial machine learning,” NIST, Tech. Rep., 2019

  5. [5]

    Poisoning Attacks against Support Vector Machines

    B. Biggio, B. Nelson, and P. Laskov, “Poisoning attacks against support vector machines,”arXiv preprint arXiv:1206.6389, 2012

  6. [6]

    ML attack models: Adversarial attacks and data poisoning attacks,

    J. Lin, L. Dang, M. Rahouti, and K. Xiong, “ML attack models: Adversarial attacks and data poisoning attacks,” Dec. 2021. [Online]. Available: https://arxiv.org/abs/2112.02797

  7. [7]

    From chatgpt to threatgpt: Impact of generative ai in cybersecurity and privacy,

    M. Gupta, C. Akiri, K. Aryal, E. Parker, and L. Praharaj, “From chatgpt to threatgpt: Impact of generative ai in cybersecurity and privacy,”IEEE Access, vol. 11, pp. 1–11, 2023

  8. [8]

    From chatgpt to threatgpt: Impact of generative ai in cybersecu- rity and privacy,

    ——, “From chatgpt to threatgpt: Impact of generative ai in cybersecu- rity and privacy,”IEEE Access, 2023

  9. [9]

    Renaud, M

    K. Renaud, M. Warkentin, and G. Westerman,From ChatGPT to HackGPT: Meeting the Cybersecurity Threat of Generative AI. MIT Sloan Management Review, 2023

  10. [10]

    Adversarial machine learning on social network: A survey,

    S. Guo, X. Li, and Z. Mu, “Adversarial machine learning on social network: A survey,”Frontiers in Physics, vol. 9, p. 766540, 2021

  11. [11]

    Platformed racism: The mediation and circu- lation of an australian race-based controversy on twitter, facebook and youtube,

    A. Matamoros-Fern ´andez, “Platformed racism: The mediation and circu- lation of an australian race-based controversy on twitter, facebook and youtube,”Information, Communication & Society, vol. 20, no. 6, pp. 930–946, 2017

  12. [12]

    Cybersecurity education in the age of ai: Integrating ai learning into cybersecurity high school curricula,

    S. Grover, B. Broll, and D. Babb, “Cybersecurity education in the age of ai: Integrating ai learning into cybersecurity high school curricula,” inProceedings of the 54th ACM Technical Symposium on Computer Science Education V . 1, ser. SIGCSE 2023. New York, NY , USA: Association for Computing Machinery, 2023, pp. 980–986. [Online]. Available: https://doi...

  13. [13]

    Generative adversarial nets,

    I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y . Bengio, “Generative adversarial nets,” Advances in Neural Information Processing Systems, vol. 27, 2014

  14. [14]

    Foundations of generative ai,

    K. Huang, Y . Wang, and X. Zhang, “Foundations of generative ai,” in Generative AI Security: Theories and Practices. Springer, 2024, pp. 3–30

  15. [15]

    Adver- sarial attacks on machine learning cybersecurity defences in industrial control systems,

    E. Anthi, L. Williams, M. Rhode, P. Burnap, and A. Wedgbury, “Adver- sarial attacks on machine learning cybersecurity defences in industrial control systems,”Journal of Information Security and Applications, vol. 58, p. 102717, 2021

  16. [16]

    Responsible generative ai: What to generate and what not,

    J. Gu, “Responsible generative ai: What to generate and what not,”arXiv preprint arXiv:2404.05783, 2024

  17. [17]

    Adversarial attacks and defenses in deep learning: From a perspective of cybersecurity,

    S. Zhou, C. Liu, D. Ye, T. Zhu, W. Zhou, and P. S. Yu, “Adversarial attacks and defenses in deep learning: From a perspective of cybersecurity,”ACM Comput. Surv., vol. 55, no. 8, pp. 163:1–163:39, dec 2022. [Online]. Available: https://doi.org/10.1145/3547330

  18. [18]

    Cnn- based projected gradient descent for consistent ct image reconstruction,

    H. Gupta, K. H. Jin, H. Q. Nguyen, M. T. McCann, and M. Unser, “Cnn- based projected gradient descent for consistent ct image reconstruction,” IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1440–1453, 2018

  19. [19]

    Multi- loss regularized deep neural network,

    C. Xu, C. Lu, X. Liang, J. Gao, W. Zheng, T. Wang, and S. Yan, “Multi- loss regularized deep neural network,”IEEE Transactions on Circuits and Systems for Video Technology, vol. 26, no. 12, pp. 2273–2283, 2015

  20. [20]

    Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks

    W. Xu, D. Evans, and Y . Qi, “Feature squeezing: Detecting adversarial examples in deep neural networks,”arXiv preprint arXiv:1704.01155, 2017

  21. [21]

    Efficient defenses against adversarial attacks,

    V . Zantedeschi, M.-I. Nicolae, and A. Rawat, “Efficient defenses against adversarial attacks,” inProceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 2017, pp. 39–49

  22. [22]

    Adversarial machine learning- industry perspectives,

    R. S. S. Kumar, M. Nystr ¨om, J. Lambert, A. Marshall, M. Goertzel, A. Comissoneru, M. Swann, and S. Xia, “Adversarial machine learning- industry perspectives,” in2020 IEEE Security and Privacy Workshops (SPW). IEEE, 2020, pp. 69–75

  23. [23]

    I Never Thought About Securing My Machine Learning Systems: A study of security and privacy awareness of machine learning practitioners,

    F. Boenisch, V . Battis, N. Buchmann, and M. Poikela, “I Never Thought About Securing My Machine Learning Systems: A study of security and privacy awareness of machine learning practitioners,” inProceedings of Mensch Und Computer 2021, 2021, pp. 520–546

  24. [24]

    WiCyS 2023 annual confer- ence,

    Women in CyberSecurity (WiCyS), “WiCyS 2023 annual confer- ence,” https://www.wicys.org/events/wicys-2023/, Denver, CO, USA, Mar. 2023, march 16–18, 2023, Gaylord Rockies Resort and Convention Center

  25. [25]

    Qualtrics: Experience management software,

    Qualtrics, “Qualtrics: Experience management software,” http://www.qualtrics.com/, 2024, [Online; accessed 26-May-2024]

  26. [26]

    Decision support system on computer main- tenance management system using association rule and fisher exact test one side p-value,

    F. Sukmana and F. Rozi, “Decision support system on computer main- tenance management system using association rule and fisher exact test one side p-value,”TELKOMNIKA (Telecommunication Computing Electronics and Control), vol. 15, no. 4, pp. 1841–1851, 2017

  27. [27]

    Trustworthy ai and corporate governance: The eu’s ethics guidelines for trustworthy artificial intelligence from a company law perspective,

    E. Hickman and M. Petrin, “Trustworthy ai and corporate governance: The eu’s ethics guidelines for trustworthy artificial intelligence from a company law perspective,”European Business Organization Law Review, vol. 22, pp. 593–625, 2021

  28. [28]

    All that glitters is not gold: Trustworthy and ethical ai principles,

    C. Rees and B. M ¨uller, “All that glitters is not gold: Trustworthy and ethical ai principles,”AI and Ethics, vol. 3, no. 4, pp. 1241–1254, 2023

  29. [29]

    “If security is required

    N. K. Gopalakrishna, D. Anandayuvaraj, A. Detti, F. L. Bland, S. Ra- haman, and J. C. Davis, ““If security is required”: engineering and se- curity practices for machine learning-based iot devices,” inProceedings of the 4th International Workshop on Software Engineering Research and Practice for the IoT, 2022, pp. 1–8

  30. [30]

    Teaching the principles of the hacker curriculum to undergraduates,

    S. Bratus, A. Shubina, and M. E. Locasto, “Teaching the principles of the hacker curriculum to undergraduates,” inProceedings of the 41st ACM Technical Symposium on Computer Science Education, 2010, pp. 122–126

  31. [31]

    Pwn the learning curve: Education- first ctf challenges,

    C. Nelson and Y . Shoshitaishvili, “Pwn the learning curve: Education- first ctf challenges,” inProceedings of the 55th ACM Technical Sympo- sium on Computer Science Education V . 1, 2024, pp. 937–943

  32. [32]

    Leveraging competitive gamification for sustainable fun and profit in security education,

    A. Dabrowski, M. Kammerstetter, E. Thamm, E. Weippl, and W. Kast- ner, “Leveraging competitive gamification for sustainable fun and profit in security education,” in2015 USENIX Summit on Gaming, Games, and Gamification in Security Education (3GSE 15), 2015

  33. [33]

    Using capture-the-flag to enhance the effectiveness of cybersecurity education,

    K. Leune and S. J. P. Jr., “Using capture-the-flag to enhance the effectiveness of cybersecurity education,” inProceedings of the 18th Annual Conference on Information Technology Education, 2017, pp. 47– 52

  34. [34]

    Red-teaming for generative ai: Silver bullet or security theater?

    M. Feffer, A. Sinha, Z. C. Lipton, and H. Heidari, “Red-teaming for generative ai: Silver bullet or security theater?”arXiv preprint arXiv:2401.15897, 2024

  35. [35]

    Usability evaluation of open source and online capture the flag platforms,

    M. H. bin Noor Azam and R. Beuran, “Usability evaluation of open source and online capture the flag platforms,”Graduate School of Infor- mation Science, Japan Advanced Institute of Science and Technology, 2018

  36. [36]

    Beginning with flask,

    K. Relan, “Beginning with flask,”Building REST APIs with Flask: Create Python Web Services with MySQL, pp. 1–26, 2019

  37. [37]

    Natural language processing,

    K. Chowdhary, “Natural language processing,”Fundamentals of Artifi- cial Intelligence, pp. 603–649, 2020

  38. [38]

    Hardeniya,NLTK Essentials

    N. Hardeniya,NLTK Essentials. Packt Publishing, 2015

  39. [39]

    A study on sigmoid kernels for svm and the training of non-psd kernels by smo-type methods,

    H.-T. Lin and C.-J. Lin, “A study on sigmoid kernels for svm and the training of non-psd kernels by smo-type methods,”Neural Comput, vol. 3, no. 1-32, p. 16, 2003

  40. [40]

    Build it, break it, fix it: Contesting secure development,

    A. Ruef, M. Hicks, J. Parker, D. Levin, M. L. Mazurek, and P. Mardziel, “Build it, break it, fix it: Contesting secure development,” inProceedings of the 2016 ACM SIGSAC Conference on Computer and Communica- tions Security, 2016, pp. 690–703

  41. [41]

    Likert scale: Explored and explained,

    A. Joshi, S. Kale, S. Chandel, and D. K. Pal, “Likert scale: Explored and explained,”British Journal of Applied Science & Technology, vol. 7, no. 4, pp. 396–403, 2015

  42. [42]

    Review and comparative analysis of machine learning libraries for machine learning,

    M. N. Gevorkyan, A. V . Demidova, T. S. Demidova, and A. A. Sobolev, “Review and comparative analysis of machine learning libraries for machine learning,”Discrete and Continuous Models and Applied Computational Science, vol. 27, no. 4, pp. 305–315, 2019. APPENDIX A. Questionnaire for first study Q1. How much experience do you have in the field of cybersec...

  43. [43]

    Challenge - 1 (15 minutes): Q15. Challenge - 1 questions •In your opinion rate the following on a scale of 1 to 5 (in terms of your understanding) where 1 - Very difficult, 2 - Difficult, 3 - Neutral, 4 - Easy, 5 - Very easy: –The overall challenge –Language of the question / CTF description –The solution –Formulating the approach •Was the time provided j...

  44. [44]

    Challenge - 2 (15 minutes): Q16. Challenge - 2 questions •In your opinion rate the following on a scale of 1 to 5 (in terms of your understanding) where 1 - Very difficult, 2 - Difficult, 3 - Neutral, 4 - Easy, 5 - Very easy: –The overall challenge –Language of the question / CTF description –The solution –Formulating the approach •Was the time provided j...

  45. [45]

    General Post-Challenge Questions: Q17. In your opinion, rate the following on a scale of 1 to 5 where 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree •These challenges have improved my knowledge on security •These challenges have improved my knowledge on adversarial machine learning •These challenges have helped me learn so...

  46. [46]

    Here, F1=Extremely rele- vant, F2=Moderately relevant, F3=Relevant, F4=Not Relevant

    First Study:Statistical analysis of correlation between experience in cybersecurity and participants’ belief regarding relevance of cybersecurity in ML. Here, F1=Extremely rele- vant, F2=Moderately relevant, F3=Relevant, F4=Not Relevant. TABLE IV FISHER’S EXACT TEST FORQ1ANDQ13 F1 F2 F3 F4 Total No experience 1 1 1 0 3 Roughly one year 1 1 0 0 2 One to le...

  47. [47]

    Second Study:The correlation factor between Q2 and Q28: α= 9!·2!·4!·8!·3!·2! 15!·5!·3!·3! = 0.006 TABLE XII FISHER’S EXACT TEST FORHYPOTHESIS- 2 Yes No Neutral Total Used CTF before 8 2 4 14 Not Used CTF before 1 0 0 1 Column Total 9 2 4 15 The correlation factor between Q7 and Q28: α= 9!·2!·4!·14! 15!·8!·2!·4! = 0.6