Recognition: no theorem link
Learning AI Without a STEM Background: Mixed-Methods Evidence from a Diverse, Mixed-Cohort AIED Program
Pith reviewed 2026-05-14 23:27 UTC · model grok-4.3
The pith
A mixed-cohort AI program produces significant gains in confidence and relevance for non-STEM undergraduates and adult learners by centering ethical judgment and contextual reasoning.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper presents and evaluates a mixed-cohort AIED program that integrates non-STEM undergraduates and adult learners around ethical reasoning, socio-technical judgment, and applied AI literacy rather than technical proficiency. Mixed-methods evidence from surveys, open-ended reflections, and educator reports documents significant gains in confidence and perceived relevance of AI, with consistent qualitative emphasis on responsibility, judgment, and contextual reasoning over technical mastery. Instructors and mentors observed high engagement, especially in dialogic and scenario-based activities, supporting the claim that human-centered instructional supports are essential for equitable AIED.
What carries the argument
The mixed-cohort instructional model that uses ethical scaffolding, near-peer mentorship, and structured discussion to support heterogeneous learner populations without requiring prior STEM preparation.
If this is right
- Ethical judgment should be treated as a core learning outcome in AI education alongside literacy.
- Human-centered supports such as ethical scaffolding and mentorship are required for productive learning in non-traditional and mixed groups.
- The model expands viable AI education pathways into policy and workforce-adjacent contexts.
- Dialogic and scenario-based activities produce high engagement across cohorts according to instructor reports.
Where Pith is reading between the lines
- The same scaffolding approach could be tested in other technical domains to widen participation beyond STEM majors.
- Programs that measure only technical skills may miss the contextual reasoning gains documented here.
- Longer-term tracking of career or policy decisions by participants would test whether the reported confidence translates into sustained behavior change.
Load-bearing premise
Self-reported survey gains and open-ended reflections accurately measure meaningful learning rather than being inflated by social desirability bias, program novelty, or the lack of a control group.
What would settle it
A follow-up randomized trial that adds blinded objective tests of AI concepts and ethical reasoning before and after the program and finds no difference in learning outcomes between the mixed-cohort group and a traditional technical AI course.
read the original abstract
Despite growing interest in AI education, most AIED initiatives remain narrowly targeted toward STEM-prepared students, limiting participation by non-STEM learners and adults seeking to engage with AI in public-interest, policy, or workforce contexts. This paper presents and evaluates an NSF-funded, innovative mixed-cohort AI education model that intentionally integrates non-STEM undergraduates and adult learners into a shared learning environment centered on ethical reasoning, socio-technical judgment, and applied AI literacy rather than technical proficiency alone. Drawing on mixed-methods data from course surveys, open-ended reflections, and educator reports, we examine learners' academic agency, confidence navigating AI concepts, critical engagement with ethical tradeoffs, and perceived expansion of postsecondary and career trajectories. Quantitative results indicate significant gains in confidence and perceived relevance of AI across cohorts' participants, while qualitative analyses reveal a consistent emphasis on responsibility, judgment, and contextual reasoning over technical mastery. Instructors and near-peer mentors corroborated high levels of engagement and productive challenge, particularly in dialogic and scenario-based learning activities. Our findings suggest that human-centered instructional supports, such as ethical scaffolding, mentorship, and structured discussion, are essential components of equitable AI education, especially in heterogeneous and non-traditional learner populations. We argue that ethical judgment should be treated as a core learning outcome in AIED alongside AI literacy, and we offer design implications for expanding access to AI education in policy-relevant and workforce-adjacent contexts.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript presents an NSF-funded mixed-cohort AI education model integrating non-STEM undergraduates and adult learners in a shared environment focused on ethical reasoning, socio-technical judgment, and applied AI literacy. Drawing on mixed-methods data from course surveys, open-ended reflections, and educator reports, it reports significant quantitative gains in confidence and perceived relevance of AI, with qualitative analyses showing consistent emphasis on responsibility, judgment, and contextual reasoning over technical mastery. The authors conclude that human-centered instructional supports such as ethical scaffolding and mentorship are essential for equitable AI education in heterogeneous populations and argue that ethical judgment should be treated as a core learning outcome alongside AI literacy.
Significance. If the empirical claims are substantiated with full methodological transparency, the work could offer actionable design implications for expanding AI education to non-traditional and policy-oriented learners, addressing a documented gap in current AIED initiatives. The mixed-cohort approach and elevation of ethical judgment as a primary outcome represent a potentially useful contribution to discussions of inclusive AI literacy.
major comments (3)
- [Abstract] The assertion of 'significant gains' in confidence and perceived relevance is presented without sample sizes, statistical tests, effect sizes, confidence intervals, or power analysis, rendering the central quantitative claim impossible to evaluate for robustness or practical importance.
- [Methods/Results] The design relies exclusively on within-group pre/post self-report surveys and open-ended reflections, with no control cohort, no objective AI-literacy items, and no discussion of social-desirability or expectancy effects; this directly undermines the inference that observed changes are attributable to the mixed-cohort or ethical-scaffolding features rather than regression to the mean or program novelty.
- [Qualitative analysis] No information is supplied on coding procedures, codebook development, inter-rater reliability, or how themes of 'responsibility, judgment, and contextual reasoning' were derived and validated, which is load-bearing for the claim of consistent qualitative patterns across cohorts.
minor comments (2)
- [Abstract] Specify the number of participants, number of cohorts, and exact survey instruments used so readers can immediately contextualize the scale of the study.
- [Discussion] Briefly address potential limitations of self-report data and the absence of longitudinal follow-up on whether confidence gains translate to actual AI-related behaviors or decisions.
Simulated Author's Rebuttal
We thank the referee for their constructive and detailed feedback, which has helped us strengthen the transparency and rigor of our manuscript. We address each major comment below, indicating revisions where appropriate.
read point-by-point responses
- Referee: [Abstract] The assertion of 'significant gains' in confidence and perceived relevance is presented without sample sizes, statistical tests, effect sizes, confidence intervals, or power analysis, rendering the central quantitative claim impossible to evaluate for robustness or practical importance.
Authors: We agree that the abstract requires these quantitative details for proper evaluation. We have revised the abstract to report the sample size, the statistical tests performed (paired t-tests), effect sizes, confidence intervals, and power analysis results, drawing directly from the quantitative analyses already detailed in the results section of the manuscript. revision: yes
- Referee: [Methods/Results] The design relies exclusively on within-group pre/post self-report surveys and open-ended reflections, with no control cohort, no objective AI-literacy items, and no discussion of social-desirability or expectancy effects; this directly undermines the inference that observed changes are attributable to the mixed-cohort or ethical-scaffolding features rather than regression to the mean or program novelty.
Authors: We acknowledge the limitations of the within-group pre/post design. We have added an expanded limitations subsection in the discussion that explicitly addresses regression to the mean, social-desirability bias, and expectancy effects. We explain that a control cohort was not feasible given the single-cohort structure of the NSF-funded program and ethical considerations around equitable access. The study prioritized self-reported confidence and perceived relevance aligned with its focus on agency and ethical judgment; we have noted the lack of objective AI-literacy measures as a limitation and a direction for future work. We cannot retroactively add a control cohort or objective items. revision: partial
- Referee: [Qualitative analysis] No information is supplied on coding procedures, codebook development, inter-rater reliability, or how themes of 'responsibility, judgment, and contextual reasoning' were derived and validated, which is load-bearing for the claim of consistent qualitative patterns across cohorts.
Authors: We have revised the methods section to provide full details on the qualitative analysis. This includes the iterative development of the codebook from the open-ended reflections, independent coding by two researchers, inter-rater reliability assessment, and the process by which themes of responsibility, judgment, and contextual reasoning were derived and validated against educator reports and cross-cohort patterns. revision: yes
- Outstanding limitation: the absence of a control cohort and objective AI-literacy items is inherent to the completed study design and cannot be addressed retrospectively.
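To make the quantitative revision above concrete, here is a minimal pure-Python sketch of a paired t-test statistic and paired-samples Cohen's d (dz) on hypothetical pre/post confidence ratings. The scores and the function name are illustrative assumptions, not the paper's data or code; the actual analysis would use the study's survey responses and a t distribution (e.g. `scipy.stats`) for the p-value.

```python
from statistics import mean, stdev

def paired_t_and_d(pre, post):
    """Paired t statistic and Cohen's dz for pre/post scores.

    Returns (t, d, df); the p-value would come from a t distribution
    with df degrees of freedom.
    """
    assert len(pre) == len(post)
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    m, s = mean(diffs), stdev(diffs)  # mean and sample SD of the differences
    d = m / s                         # Cohen's dz: standardized mean difference
    t = d * n ** 0.5                  # t = mean_diff / (s / sqrt(n))
    return t, d, n - 1

# Hypothetical 5-point confidence ratings for eight learners
pre = [2, 3, 2, 3, 4, 2, 3, 3]
post = [4, 4, 3, 4, 5, 3, 4, 4]
t, d, df = paired_t_and_d(pre, post)
print(f"t({df}) = {t:.2f}, Cohen's dz = {d:.2f}")
```

Reporting t, df, an effect size, and a confidence interval together is exactly the kind of detail the referee asks the abstract to carry.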
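The inter-rater reliability assessment described in the qualitative-analysis response is commonly quantified with Cohen's kappa for two independent coders. A minimal sketch follows; the code labels and excerpt counts are invented for illustration and do not come from the study's codebook.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning one code per excerpt."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal code frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[c] * cb[c] for c in ca) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical codes for ten reflection excerpts
a = ["responsibility", "judgment", "judgment", "context", "responsibility",
     "context", "judgment", "responsibility", "context", "judgment"]
b = ["responsibility", "judgment", "context", "context", "responsibility",
     "context", "judgment", "responsibility", "judgment", "judgment"]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```

Reporting kappa (or a similar chance-corrected statistic) alongside the codebook would directly answer the referee's reliability concern.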
Circularity Check
No significant circularity: empirical claims rest directly on survey and reflection data without derivations or self-referential modeling.
full rationale
The paper reports mixed-methods results from course surveys, open-ended reflections, and educator reports on learner confidence, perceived relevance, and emphasis on responsibility/judgment. No equations, fitted parameters, predictions derived from inputs, or derivation chains appear anywhere in the text. All central claims are presented as direct outcomes of the collected empirical data rather than reductions to prior definitions, self-citations, or ansatzes. The absence of any mathematical or modeling structure means there are no load-bearing steps that could reduce to the paper's own inputs by construction.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption Self-reported confidence and relevance scales validly measure meaningful AI learning outcomes in heterogeneous adult and non-STEM populations.
Reference graph
Works this paper leans on
- [1] Ahmed, F. (2024). The digital divide and AI in education: Addressing equity and accessibility. AI EDIFY Journal, 1(2), 12–23.
- [2] Amanuel, Y., & Krugel, J. (2025). From Play to Proficiency: A Systematic Literature Review on Learning Modalities and Transition Strategies in CS and AI Education. In 2025 IEEE Frontiers in Education Conference (FIE) (pp. 1–9). IEEE.
- [3]
- [4] Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.
- [5] Cai, C., Zhu, G., & Ma, M. (2025). A systemic review of AI for interdisciplinary learning. Education and Information Technologies, 30(7), 9641–9687.
- [6] Chiu, T. K., Ahmad, Z., Ismailov, M., & Sanusi, I. T. (2024). What are artificial intelligence literacy and competency? Computers and Education Open, 6, 100171.
- [7]
- [8] D’Ignazio, C. (2023). A Toolkit for Restorative/Transformative Data Science. MIT Press.
- [9] Dunleavy, P., & Margetts, H. (2025). Data science, artificial intelligence and the third wave of digital era governance. Public Policy and Administration, 40(2), 185–214.
- [10] ExLENT. https://www.nsf.gov/funding/opportunities/exlent-experiential-learning-emerging-novel-technologies, last accessed 2026/02/01.
- [11] Faccia, A., Ridon, M., & Cavaliere, L. P. L. (2025). The AI Application for an Automated Education System. In Corporate Governance, Digitalization, and Energy Transition (pp. 143–174). Apple Academic Press.
- [12] Florea, N. V., & Croitoru, G. (2024). The Impact of AI-mediated Communication Strategies on Organization Performance. Journal of Self-Governance & Management Economics, 12(2).
- [13] Garrett, N., Beard, N., & Fiesler, C. (2020). More than “If Time Allows”: the role of ethics in AI education. In Proceedings of AAAI/ACM AIES (pp. 272–278).
- [14] Grootendorst, M. (2022). BERTopic: Neural topic modeling with a class-based TF-IDF procedure. arXiv:2203.05794.
- [15] Hasan, M. R., & Khan, B. (2023). An AI-based intervention for improving undergraduate STEM learning. PloS One, 18(7), e0288844.
- [16] He, W., Zhang, B., & Zhang, J. (2024). The impact of technology on the labor market. In ICFIED 2024 (pp. 498–504). Atlantis Press.
- [17] Kasworm, C. E. (2010). Adult learners in a research university. Adult Education Quarterly, 60(2), 143–160.
- [18] Kim, J., et al. (2025). Designing AI-powered learning: Adult learners’ expectations. Educational Technology Research and Development, 73(6), 3397–3421.
- [19] Li, H. (2023). AI in education: Bridging the divide or widening the gap? Advances in Education, Humanities and Social Science Research, 8(1), 355.
- [20]
- [21] Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In Proceedings of CHI 2020 (pp. 1–16).
- [22]
- [23] Mallik, S., & Gangopadhyay, A. (2023). Proactive and reactive engagement of AI methods for education: a review. Frontiers in Artificial Intelligence, 6, 1151391.
- [24] Nelson, L. K. (2020). Computational grounded theory: A methodological framework. Sociological Methods & Research, 49(1), 3–42.
- [25]
- [26] Oschinski, M., Crawford, A., & Wu, M. (2024). AI and the future of workforce training. Center for Security and Emerging Technology.
- [27] Osetskyi, V., et al. (2020). Artificial intelligence application in education: Financial implications and prospects. Financial and Credit Activity Problems of Theory and Practice, 2(33), 574–584.
- [28] Oyetade, K., & Zuva, T. (2025). Advancing Equitable Education with Inclusive AI. Educational Process: International Journal, 14, e2025087.
- [29] van der Linde, G., et al. (2025). Landscape of AI literacy in education: a narrative review. Discover Education, 4(1), 561.
- [30] Poquet, O., & De Laat, M. (2021). Developing capabilities: Lifelong learning in the age of AI. British Journal of Educational Technology, 52(4), 1695–1708.
- [31] Pedro, F., Subosa, M., Rivas, A., & Valverde, P. (2019). Artificial intelligence in education: Challenges and opportunities for sustainable development.
- [32] Pradyutha, A. C. (2024). Cohort-based learning: Fostering collaborative education. Changing Landscape of Education.
- [33] Shailja, S., Williams, T. J., & Pandey, A. (2025). Self-efficacy of high school students after an AI-focused pre-college program. In ASEE 2025.
- [34] “Superskills” defined by NSF. https://www.nsf.gov/pubs/2019/nsf19518/nsf19518.htm
- [35] Vindigni, G. (2025). Data-Driven Disparities: How AI applications in education may perpetuate or mitigate inequality. EJSMT, 1(4), 4–54.
- [36] Weerakkody, S. U. (2021). Impact of cohort mix on student experience and educational outcomes. https://ieaa.org.au/common/Uploaded%20files/Research%20Publications/2021/PUB-IEAA-Impact-of-Cohort-Mix-on-Student-Experience-and-Educational-Outcomes-Research-Digest-17.pdf
- [37] Yu, W. (2025). A Conceptual Framework for AI Literacy with a Focus on Competency. International Journal of Artificial Intelligence in Education, 1–18.
- [38] Zhai, X., et al. (2020). Applying machine learning in science assessment: a systematic review. Studies in Science Education, 56(1), 111–151.
discussion (0)