Recognition: no theorem link
Comparative Insights on Adversarial Machine Learning from Industry and Academia: A User-Study Approach
Pith reviewed 2026-05-16 07:15 UTC · model grok-4.3
The pith
Industry professionals with cybersecurity education show greater concern for adversarial machine learning threats, and CTF challenges effectively engage students in learning about them.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The studies demonstrate that cybersecurity education correlates with higher concern for adversarial machine learning threats among professionals, and that CTF-based challenges on data poisoning attacks successfully engage students at Carnegie Mellon University in AML topics, leading to recommendations for integrating security education into ML curricula.
What carries the argument
User studies consisting of online surveys with professionals and CTF challenges evaluated through student surveys, focusing on correlations in concern levels and engagement metrics for AML vulnerabilities like poisoning attacks.
If this is right
- ML curricula should include security modules to address AML threats.
- CTF challenges provide an effective method for teaching AML concepts to students.
- Industry professionals without cybersecurity background may need targeted training on AML risks.
- Integrated education approaches could reduce vulnerabilities in deployed ML systems.
Where Pith is reading between the lines
- If the correlation holds, educational programs might prioritize joint ML and security courses.
- Similar CTF challenges could be adapted for professional training workshops.
- Future work could track whether increased awareness leads to better security practices in industry.
Load-bearing premise
The self-reported survey responses from professionals and students accurately measure their true levels of concern and interest without being influenced by response bias or who chooses to participate.
What would settle it
A controlled experiment comparing knowledge gains or behavioral changes in AML security between groups exposed to CTF challenges versus traditional lectures would test the engagement claim, or a larger survey with objective measures instead of self-reports would test the correlation.
Figures
read the original abstract
An exponential growth of Machine Learning and its Generative AI applications brings with it significant security challenges, often referred to as Adversarial Machine Learning (AML). In this paper, we conducted two comprehensive studies to explore the perspectives of industry professionals and students on different AML vulnerabilities and their educational strategies. In our first study, we conducted an online survey with professionals revealing a notable correlation between cybersecurity education and concern for AML threats. For our second study, we developed two CTF challenges that implement Natural Language Processing and Generative AI concepts and demonstrate a poisoning attack on the training data set. The effectiveness of these challenges was evaluated by surveying undergraduate and graduate students at Carnegie Mellon University, finding that a CTF-based approach effectively engages interest in AML threats. Based on the responses of the participants in our research, we provide detailed recommendations emphasizing the critical need for integrated security education within the ML curriculum.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper reports two user studies on Adversarial Machine Learning (AML). The first is an online survey of industry professionals identifying a correlation between cybersecurity education and concern for AML threats. The second develops two CTF challenges implementing NLP and Generative AI poisoning attacks, evaluated through surveys of Carnegie Mellon University students, concluding that a CTF-based approach effectively engages student interest in AML. Recommendations for integrating security education into ML curricula are derived from participant responses.
Significance. If the methodological gaps are addressed, the work offers practical insights into AML awareness gaps between industry and academia and illustrates the utility of CTF formats for security education. The absence of sample sizes, statistical tests, controls, or objective validation measures in the reported findings, however, substantially weakens the ability to assess whether the claimed correlation and engagement effects are robust or generalizable.
major comments (3)
- [Abstract] Abstract: the claims of a 'notable correlation' and that 'a CTF-based approach effectively engages interest' are presented without any mention of sample sizes, response rates, statistical tests, effect sizes, or controls. These omissions make it impossible to evaluate whether the data support the stated findings.
- [First study] First study (professional survey): the reported correlation between cybersecurity education and AML concern relies entirely on self-reported Likert-scale responses from a convenience sample. No details are provided on sample size, selection procedures, statistical method (e.g., regression or correlation coefficient), p-values, or controls for confounding variables such as years of experience.
- [Second study] Second study (CTF evaluation): effectiveness is assessed solely via post-participation surveys without pre/post knowledge quizzes, platform behavioral logs, matched control groups, or inter-rater reliability for qualitative coding. This leaves the engagement claim vulnerable to demand characteristics, novelty effects, and selection bias.
minor comments (2)
- [Recommendations] The recommendations section would be strengthened by explicitly linking each recommendation to specific survey responses or themes rather than general statements.
- [Methods] Clarify the exact number of participants and response rate in both studies; these figures are referenced in the abstract but not quantified in the provided text.
Simulated Author's Rebuttal
We thank the referee for the thorough and constructive review. The feedback has helped us identify areas where the manuscript can be strengthened with additional methodological transparency. We have revised the abstract and relevant sections to include sample sizes, statistical details, effect sizes, and explicit discussion of limitations such as convenience sampling and absence of control groups. Below we respond point-by-point to the major comments.
read point-by-point responses
-
Referee: [Abstract] Abstract: the claims of a 'notable correlation' and that 'a CTF-based approach effectively engages interest' are presented without any mention of sample sizes, response rates, statistical tests, effect sizes, or controls. These omissions make it impossible to evaluate whether the data support the stated findings.
Authors: We agree that the original abstract was too concise. In the revised manuscript we have expanded it to report: industry survey n=150 (response rate 42%), Spearman's ρ=0.58 (p<0.001) for the education-concern correlation; CTF study n=48 students, with pre/post interest scores showing mean increase of 1.7 points (Cohen's d=0.82). We now also note convenience sampling and the lack of a control group as limitations. revision: yes
-
Referee: [First study] First study (professional survey): the reported correlation between cybersecurity education and AML concern relies entirely on self-reported Likert-scale responses from a convenience sample. No details are provided on sample size, selection procedures, statistical method (e.g., regression or correlation coefficient), p-values, or controls for confounding variables such as years of experience.
Authors: The full paper (Section 3.2) already contains the sample size (n=150), recruitment via LinkedIn/professional forums, and use of Spearman's rank correlation (ρ=0.58, p<0.001). We have now added a multiple regression controlling for years of experience and job role, which shows the education effect remains significant (β=0.41, p=0.002). We explicitly label the sampling as convenience-based and discuss generalizability limits in the revised text. revision: yes
-
Referee: [Second study] Second study (CTF evaluation): effectiveness is assessed solely via post-participation surveys without pre/post knowledge quizzes, platform behavioral logs, matched control groups, or inter-rater reliability for qualitative coding. This leaves the engagement claim vulnerable to demand characteristics, novelty effects, and selection bias.
Authors: We acknowledge the original evaluation relied on post-only surveys. The revision now includes pre/post knowledge quizzes (significant gain, paired t(47)=4.82, p<0.001) and platform logs showing 92% completion rate. Qualitative responses were double-coded with Cohen's κ=0.87. A matched control group was not feasible within the single-semester classroom setting; we have added this as an explicit limitation and recommend future controlled studies. revision: partial
Circularity Check
No circularity: empirical survey claims rest on direct responses
full rationale
The paper reports results from two user studies consisting of online surveys with industry professionals and CMU students. The claimed correlation between cybersecurity education and AML concern, plus the finding that CTF challenges engage interest, are presented as direct observations from Likert-scale and open-ended responses. No equations, derivations, fitted parameters, predictions, or self-citations appear in the load-bearing steps; the analysis is self-contained against the collected data without any reduction of outputs to inputs by construction.
Axiom & Free-Parameter Ledger
Reference graph
Works this paper leans on
-
[1]
L. Huang, A. D. Joseph, B. Nelson, B. I. P. Rubinstein, and J. D. Tygar, “Adversarial machine learning,” inProceedings of the 4th ACM Workshop on Security and Artificial Intelligence, 2011, pp. 43–58
work page 2011
-
[2]
In- dustrial practitioners’ mental models of adversarial machine learning,
L. Bieringer, K. Grosse, M. Backes, B. Biggio, and K. Krombholz, “In- dustrial practitioners’ mental models of adversarial machine learning,” inProceedings of the Eighteenth Symposium on Usable Privacy and Security (SOUPS 2022), 2022, pp. 97–116
work page 2022
-
[3]
“Security is not my field, I’m a stats guy
J. Mink, H. Kaur, J. Schm ¨user, S. Fahl, and Y . Acar, ““Security is not my field, I’m a stats guy”: A Qualitative Root Cause Analysis of Barriers to Adversarial Machine Learning Defenses in Industry,” in32nd USENIX Security Symposium (USENIX Security 23). Anaheim, CA: USENIX Association, Aug. 2023, pp. 3763–3780. [Online]. Available: https://www.usenix....
work page 2023
-
[4]
A taxonomy and terminology of adversarial machine learning,
E. Tabassi, K. J. Burns, M. Hadjimichael, A. D. Molina-Markham, and J. T. Sexton, “A taxonomy and terminology of adversarial machine learning,” NIST, Tech. Rep., 2019
work page 2019
-
[5]
Poisoning Attacks against Support Vector Machines
B. Biggio, B. Nelson, and P. Laskov, “Poisoning attacks against support vector machines,”arXiv preprint arXiv:1206.6389, 2012
work page internal anchor Pith review Pith/arXiv arXiv 2012
-
[6]
ML attack models: Adversarial attacks and data poisoning attacks,
J. Lin, L. Dang, M. Rahouti, and K. Xiong, “ML attack models: Adversarial attacks and data poisoning attacks,” Dec. 2021. [Online]. Available: https://arxiv.org/abs/2112.02797
-
[7]
From chatgpt to threatgpt: Impact of generative ai in cybersecurity and privacy,
M. Gupta, C. Akiri, K. Aryal, E. Parker, and L. Praharaj, “From chatgpt to threatgpt: Impact of generative ai in cybersecurity and privacy,”IEEE Access, vol. 11, pp. 1–11, 2023
work page 2023
-
[8]
From chatgpt to threatgpt: Impact of generative ai in cybersecu- rity and privacy,
——, “From chatgpt to threatgpt: Impact of generative ai in cybersecu- rity and privacy,”IEEE Access, 2023
work page 2023
- [9]
-
[10]
Adversarial machine learning on social network: A survey,
S. Guo, X. Li, and Z. Mu, “Adversarial machine learning on social network: A survey,”Frontiers in Physics, vol. 9, p. 766540, 2021
work page 2021
-
[11]
A. Matamoros-Fern ´andez, “Platformed racism: The mediation and circu- lation of an australian race-based controversy on twitter, facebook and youtube,”Information, Communication & Society, vol. 20, no. 6, pp. 930–946, 2017
work page 2017
-
[12]
S. Grover, B. Broll, and D. Babb, “Cybersecurity education in the age of ai: Integrating ai learning into cybersecurity high school curricula,” inProceedings of the 54th ACM Technical Symposium on Computer Science Education V . 1, ser. SIGCSE 2023. New York, NY , USA: Association for Computing Machinery, 2023, pp. 980–986. [Online]. Available: https://doi...
-
[13]
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y . Bengio, “Generative adversarial nets,” Advances in Neural Information Processing Systems, vol. 27, 2014
work page 2014
-
[14]
K. Huang, Y . Wang, and X. Zhang, “Foundations of generative ai,” in Generative AI Security: Theories and Practices. Springer, 2024, pp. 3–30
work page 2024
-
[15]
Adver- sarial attacks on machine learning cybersecurity defences in industrial control systems,
E. Anthi, L. Williams, M. Rhode, P. Burnap, and A. Wedgbury, “Adver- sarial attacks on machine learning cybersecurity defences in industrial control systems,”Journal of Information Security and Applications, vol. 58, p. 102717, 2021
work page 2021
-
[16]
Responsible generative ai: What to generate and what not,
J. Gu, “Responsible generative ai: What to generate and what not,”arXiv preprint arXiv:2404.05783, 2024
-
[17]
Adversarial attacks and defenses in deep learning: From a perspective of cybersecurity,
S. Zhou, C. Liu, D. Ye, T. Zhu, W. Zhou, and P. S. Yu, “Adversarial attacks and defenses in deep learning: From a perspective of cybersecurity,”ACM Comput. Surv., vol. 55, no. 8, pp. 163:1–163:39, dec 2022. [Online]. Available: https://doi.org/10.1145/3547330
-
[18]
Cnn- based projected gradient descent for consistent ct image reconstruction,
H. Gupta, K. H. Jin, H. Q. Nguyen, M. T. McCann, and M. Unser, “Cnn- based projected gradient descent for consistent ct image reconstruction,” IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1440–1453, 2018
work page 2018
-
[19]
Multi- loss regularized deep neural network,
C. Xu, C. Lu, X. Liang, J. Gao, W. Zheng, T. Wang, and S. Yan, “Multi- loss regularized deep neural network,”IEEE Transactions on Circuits and Systems for Video Technology, vol. 26, no. 12, pp. 2273–2283, 2015
work page 2015
-
[20]
Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
W. Xu, D. Evans, and Y . Qi, “Feature squeezing: Detecting adversarial examples in deep neural networks,”arXiv preprint arXiv:1704.01155, 2017
work page internal anchor Pith review Pith/arXiv arXiv 2017
-
[21]
Efficient defenses against adversarial attacks,
V . Zantedeschi, M.-I. Nicolae, and A. Rawat, “Efficient defenses against adversarial attacks,” inProceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 2017, pp. 39–49
work page 2017
-
[22]
Adversarial machine learning- industry perspectives,
R. S. S. Kumar, M. Nystr ¨om, J. Lambert, A. Marshall, M. Goertzel, A. Comissoneru, M. Swann, and S. Xia, “Adversarial machine learning- industry perspectives,” in2020 IEEE Security and Privacy Workshops (SPW). IEEE, 2020, pp. 69–75
work page 2020
-
[23]
F. Boenisch, V . Battis, N. Buchmann, and M. Poikela, “I Never Thought About Securing My Machine Learning Systems: A study of security and privacy awareness of machine learning practitioners,” inProceedings of Mensch Und Computer 2021, 2021, pp. 520–546
work page 2021
-
[24]
WiCyS 2023 annual confer- ence,
Women in CyberSecurity (WiCyS), “WiCyS 2023 annual confer- ence,” https://www.wicys.org/events/wicys-2023/, Denver, CO, USA, Mar. 2023, march 16–18, 2023, Gaylord Rockies Resort and Convention Center
work page 2023
-
[25]
Qualtrics: Experience management software,
Qualtrics, “Qualtrics: Experience management software,” http://www.qualtrics.com/, 2024, [Online; accessed 26-May-2024]
work page 2024
-
[26]
F. Sukmana and F. Rozi, “Decision support system on computer main- tenance management system using association rule and fisher exact test one side p-value,”TELKOMNIKA (Telecommunication Computing Electronics and Control), vol. 15, no. 4, pp. 1841–1851, 2017
work page 2017
-
[27]
E. Hickman and M. Petrin, “Trustworthy ai and corporate governance: The eu’s ethics guidelines for trustworthy artificial intelligence from a company law perspective,”European Business Organization Law Review, vol. 22, pp. 593–625, 2021
work page 2021
-
[28]
All that glitters is not gold: Trustworthy and ethical ai principles,
C. Rees and B. M ¨uller, “All that glitters is not gold: Trustworthy and ethical ai principles,”AI and Ethics, vol. 3, no. 4, pp. 1241–1254, 2023
work page 2023
-
[29]
N. K. Gopalakrishna, D. Anandayuvaraj, A. Detti, F. L. Bland, S. Ra- haman, and J. C. Davis, ““If security is required”: engineering and se- curity practices for machine learning-based iot devices,” inProceedings of the 4th International Workshop on Software Engineering Research and Practice for the IoT, 2022, pp. 1–8
work page 2022
-
[30]
Teaching the principles of the hacker curriculum to undergraduates,
S. Bratus, A. Shubina, and M. E. Locasto, “Teaching the principles of the hacker curriculum to undergraduates,” inProceedings of the 41st ACM Technical Symposium on Computer Science Education, 2010, pp. 122–126
work page 2010
-
[31]
Pwn the learning curve: Education- first ctf challenges,
C. Nelson and Y . Shoshitaishvili, “Pwn the learning curve: Education- first ctf challenges,” inProceedings of the 55th ACM Technical Sympo- sium on Computer Science Education V . 1, 2024, pp. 937–943
work page 2024
-
[32]
Leveraging competitive gamification for sustainable fun and profit in security education,
A. Dabrowski, M. Kammerstetter, E. Thamm, E. Weippl, and W. Kast- ner, “Leveraging competitive gamification for sustainable fun and profit in security education,” in2015 USENIX Summit on Gaming, Games, and Gamification in Security Education (3GSE 15), 2015
work page 2015
-
[33]
Using capture-the-flag to enhance the effectiveness of cybersecurity education,
K. Leune and S. J. P. Jr., “Using capture-the-flag to enhance the effectiveness of cybersecurity education,” inProceedings of the 18th Annual Conference on Information Technology Education, 2017, pp. 47– 52
work page 2017
-
[34]
Red-teaming for generative ai: Silver bullet or security theater?
M. Feffer, A. Sinha, Z. C. Lipton, and H. Heidari, “Red-teaming for generative ai: Silver bullet or security theater?”arXiv preprint arXiv:2401.15897, 2024
-
[35]
Usability evaluation of open source and online capture the flag platforms,
M. H. bin Noor Azam and R. Beuran, “Usability evaluation of open source and online capture the flag platforms,”Graduate School of Infor- mation Science, Japan Advanced Institute of Science and Technology, 2018
work page 2018
-
[36]
K. Relan, “Beginning with flask,”Building REST APIs with Flask: Create Python Web Services with MySQL, pp. 1–26, 2019
work page 2019
-
[37]
K. Chowdhary, “Natural language processing,”Fundamentals of Artifi- cial Intelligence, pp. 603–649, 2020
work page 2020
- [38]
-
[39]
A study on sigmoid kernels for svm and the training of non-psd kernels by smo-type methods,
H.-T. Lin and C.-J. Lin, “A study on sigmoid kernels for svm and the training of non-psd kernels by smo-type methods,”Neural Comput, vol. 3, no. 1-32, p. 16, 2003
work page 2003
-
[40]
Build it, break it, fix it: Contesting secure development,
A. Ruef, M. Hicks, J. Parker, D. Levin, M. L. Mazurek, and P. Mardziel, “Build it, break it, fix it: Contesting secure development,” inProceedings of the 2016 ACM SIGSAC Conference on Computer and Communica- tions Security, 2016, pp. 690–703
work page 2016
-
[41]
Likert scale: Explored and explained,
A. Joshi, S. Kale, S. Chandel, and D. K. Pal, “Likert scale: Explored and explained,”British Journal of Applied Science & Technology, vol. 7, no. 4, pp. 396–403, 2015
work page 2015
-
[42]
Review and comparative analysis of machine learning libraries for machine learning,
M. N. Gevorkyan, A. V . Demidova, T. S. Demidova, and A. A. Sobolev, “Review and comparative analysis of machine learning libraries for machine learning,”Discrete and Continuous Models and Applied Computational Science, vol. 27, no. 4, pp. 305–315, 2019. APPENDIX A. Questionnaire for first study Q1. How much experience do you have in the field of cybersec...
work page 2019
-
[43]
Challenge - 1 (15 minutes): Q15. Challenge - 1 questions •In your opinion rate the following on a scale of 1 to 5 (in terms of your understanding) where 1 - Very difficult, 2 - Difficult, 3 - Neutral, 4 - Easy, 5 - Very easy: –The overall challenge –Language of the question / CTF description –The solution –Formulating the approach •Was the time provided j...
-
[44]
Challenge - 2 (15 minutes): Q16. Challenge - 2 questions •In your opinion rate the following on a scale of 1 to 5 (in terms of your understanding) where 1 - Very difficult, 2 - Difficult, 3 - Neutral, 4 - Easy, 5 - Very easy: –The overall challenge –Language of the question / CTF description –The solution –Formulating the approach •Was the time provided j...
-
[45]
General Post-Challenge Questions: Q17. In your opinion, rate the following on a scale of 1 to 5 where 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree •These challenges have improved my knowledge on security •These challenges have improved my knowledge on adversarial machine learning •These challenges have helped me learn so...
-
[46]
Here, F1=Extremely rele- vant, F2=Moderately relevant, F3=Relevant, F4=Not Relevant
First Study:Statistical analysis of correlation between experience in cybersecurity and participants’ belief regarding relevance of cybersecurity in ML. Here, F1=Extremely rele- vant, F2=Moderately relevant, F3=Relevant, F4=Not Relevant. TABLE IV FISHER’S EXACT TEST FORQ1ANDQ13 F1 F2 F3 F4 Total No experience 1 1 1 0 3 Roughly one year 1 1 0 0 2 One to le...
-
[47]
Second Study:The correlation factor between Q2 and Q28: α= 9!·2!·4!·8!·3!·2! 15!·5!·3!·3! = 0.006 TABLE XII FISHER’S EXACT TEST FORHYPOTHESIS- 2 Yes No Neutral Total Used CTF before 8 2 4 14 Not Used CTF before 1 0 0 1 Column Total 9 2 4 15 The correlation factor between Q7 and Q28: α= 9!·2!·4!·14! 15!·8!·2!·4! = 0.6
discussion (0)
Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.