pith. machine review for the scientific record.

arxiv: 2604.20870 · v1 · submitted 2026-03-27 · 💻 cs.CY

Recognition: no theorem link

Learning AI Without a STEM Background: Mixed-Methods Evidence from a Diverse, Mixed-Cohort AIED Program

Authors on Pith · no claims yet

Pith reviewed 2026-05-14 23:27 UTC · model grok-4.3

classification 💻 cs.CY
keywords AI education · non-STEM learners · ethical reasoning · mixed-cohort · AI literacy · socio-technical judgment · adult learners · mixed methods

The pith

A mixed-cohort AI program produces significant gains in confidence and relevance for non-STEM undergraduates and adult learners by centering ethical judgment and contextual reasoning.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper evaluates an NSF-funded AI education model that deliberately mixes non-STEM undergraduates with adult learners in one classroom. Instruction prioritizes ethical reasoning, socio-technical judgment, and applied literacy over technical mastery. Quantitative survey data show clear increases in participants' confidence and sense that AI matters to their lives. Qualitative reflections and instructor reports consistently highlight responsibility and contextual decision-making as the main takeaways. The authors conclude that ethical judgment belongs alongside literacy as a core outcome and that targeted human supports make the approach workable for diverse groups.

Core claim

The paper presents and evaluates a mixed-cohort AIED program that integrates non-STEM undergraduates and adult learners around ethical reasoning, socio-technical judgment, and applied AI literacy rather than technical proficiency. Mixed-methods evidence from surveys, open-ended reflections, and educator reports documents significant gains in confidence and perceived relevance of AI, with consistent qualitative emphasis on responsibility, judgment, and contextual reasoning over technical mastery. Instructors and mentors observed high engagement, especially in dialogic and scenario-based activities, supporting the claim that human-centered instructional supports are essential for equitable AIED.

What carries the argument

The mixed-cohort instructional model that uses ethical scaffolding, near-peer mentorship, and structured discussion to support heterogeneous learner populations without requiring prior STEM preparation.

If this is right

  • Ethical judgment should be treated as a core learning outcome in AI education alongside literacy.
  • Human-centered supports such as ethical scaffolding and mentorship are required for productive learning in non-traditional and mixed groups.
  • The model expands viable AI education pathways into policy and workforce-adjacent contexts.
  • Dialogic and scenario-based activities produce high engagement across cohorts according to instructor reports.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same scaffolding approach could be tested in other technical domains to widen participation beyond STEM majors.
  • Programs that measure only technical skills may miss the contextual reasoning gains documented here.
  • Longer-term tracking of career or policy decisions by participants would test whether the reported confidence translates into sustained behavior change.

Load-bearing premise

Self-reported survey gains and open-ended reflections accurately measure meaningful learning rather than being inflated by social desirability bias, program novelty, or the lack of a control group.

What would settle it

A follow-up randomized trial that adds blinded objective tests of AI concepts and ethical reasoning before and after the program and finds no difference in learning outcomes between the mixed-cohort group and a traditional technical AI course.
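For scale, a back-of-envelope sample-size sketch for such a trial. The medium effect size (d = 0.5), alpha, and power targets here are conventional assumptions, not values taken from the paper:

```python
# Rough normal-approximation sample size per arm for a two-sample t-test,
# as would be needed to plan the randomized trial described above.
# d = 0.5, alpha = 0.05, power = 0.80 are assumed conventions.
from scipy.stats import norm

def n_per_arm(d, alpha=0.05, power=0.80):
    """Approximate participants per arm to detect effect size d."""
    z_a = norm.ppf(1 - alpha / 2)   # critical value for two-sided alpha
    z_b = norm.ppf(power)           # quantile for the target power
    return 2 * (z_a + z_b) ** 2 / d ** 2

print(round(n_per_arm(0.5)))  # → 63
```

Under these assumptions, roughly 63 participants per arm would be needed; smaller true effects would demand substantially larger cohorts than the single-cohort program studied here.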

read the original abstract

Despite growing interest in AI education, most AIED initiatives remain narrowly targeted toward STEM-prepared students, limiting participation by non-STEM learners and adults seeking to engage with AI in public-interest, policy, or workforce contexts. This paper presents and evaluates an NSF-funded, innovative mixed-cohort AI education model that intentionally integrates non-STEM undergraduates and adult learners into a shared learning environment centered on ethical reasoning, socio-technical judgment, and applied AI literacy rather than technical proficiency alone. Drawing on mixed-methods data from course surveys, open-ended reflections, and educator reports, we examine learners' academic agency, confidence navigating AI concepts, critical engagement with ethical tradeoffs, and perceived expansion of postsecondary and career trajectories. Quantitative results indicate significant gains in confidence and perceived relevance of AI across cohorts' participants, while qualitative analyses reveal a consistent emphasis on responsibility, judgment, and contextual reasoning over technical mastery. Instructors and near-peer mentors corroborated high levels of engagement and productive challenge, particularly in dialogic and scenario-based learning activities. Our findings suggest that human-centered instructional supports, such as ethical scaffolding, mentorship, and structured discussion, are essential components of equitable AI education, especially in heterogeneous and non-traditional learner populations. We argue that ethical judgment should be treated as a core learning outcome in AIED alongside AI literacy, and we offer design implications for expanding access to AI education in policy-relevant and workforce-adjacent contexts.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript presents an NSF-funded mixed-cohort AI education model integrating non-STEM undergraduates and adult learners in a shared environment focused on ethical reasoning, socio-technical judgment, and applied AI literacy. Drawing on mixed-methods data from course surveys, open-ended reflections, and educator reports, it reports significant quantitative gains in confidence and perceived relevance of AI, with qualitative analyses showing consistent emphasis on responsibility, judgment, and contextual reasoning over technical mastery. The authors conclude that human-centered instructional supports such as ethical scaffolding and mentorship are essential for equitable AI education in heterogeneous populations and argue that ethical judgment should be treated as a core learning outcome alongside AI literacy.

Significance. If the empirical claims are substantiated with full methodological transparency, the work could offer actionable design implications for expanding AI education to non-traditional and policy-oriented learners, addressing a documented gap in current AIED initiatives. The mixed-cohort approach and elevation of ethical judgment as a primary outcome represent a potentially useful contribution to discussions of inclusive AI literacy.

major comments (3)
  1. [Abstract] The assertion of 'significant gains' in confidence and perceived relevance is presented without sample sizes, statistical tests, effect sizes, confidence intervals, or power analysis, rendering the central quantitative claim impossible to evaluate for robustness or practical importance.
  2. [Methods/Results] The design relies exclusively on within-group pre/post self-report surveys and open-ended reflections, with no control cohort, no objective AI-literacy items, and no discussion of social-desirability or expectancy effects; this directly undermines the inference that observed changes are attributable to the mixed-cohort or ethical-scaffolding features rather than to regression to the mean or program novelty.
  3. [Qualitative analysis] No information is supplied on coding procedures, codebook development, inter-rater reliability, or how themes of 'responsibility, judgment, and contextual reasoning' were derived and validated, which is load-bearing for the claim of consistent qualitative patterns across cohorts.
minor comments (2)
  1. [Abstract] Specify the number of participants, number of cohorts, and exact survey instruments used so readers can immediately contextualize the scale of the study.
  2. [Discussion] Briefly address potential limitations of self-report data and the absence of longitudinal follow-up on whether confidence gains translate to actual AI-related behaviors or decisions.

Simulated Author's Rebuttal

3 responses · 1 unresolved

We thank the referee for their constructive and detailed feedback, which has helped us strengthen the transparency and rigor of our manuscript. We address each major comment below, indicating revisions where appropriate.

read point-by-point responses
  1. Referee: [Abstract] The assertion of 'significant gains' in confidence and perceived relevance is presented without sample sizes, statistical tests, effect sizes, confidence intervals, or power analysis, rendering the central quantitative claim impossible to evaluate for robustness or practical importance.

    Authors: We agree that the abstract requires these quantitative details for proper evaluation. We have revised the abstract to report the sample size, the statistical tests performed (paired t-tests), effect sizes, confidence intervals, and power analysis results, drawing directly from the quantitative analyses already detailed in the results section of the manuscript. revision: yes
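For concreteness, a minimal sketch of the kind of reporting this response promises, run on synthetic placeholder scores rather than the study's data (the sample size, scale, and gain below are invented for illustration):

```python
# Illustrative only: reporting a pre/post gain with a paired t-test,
# the paired-samples effect size d_z, and a 95% CI on the mean gain.
# All numbers are synthetic placeholders, not the paper's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(3.0, 0.8, size=40)           # pre-course confidence (1-5 scale)
post = pre + rng.normal(0.6, 0.5, size=40)    # post-course scores with a gain

diff = post - pre
t, p = stats.ttest_rel(post, pre)             # paired t-test
d_z = diff.mean() / diff.std(ddof=1)          # effect size for paired designs
ci = stats.t.interval(0.95, df=len(diff) - 1,
                      loc=diff.mean(),
                      scale=stats.sem(diff))  # 95% CI on the mean gain

print(f"t({len(diff) - 1}) = {t:.2f}, p = {p:.4f}, "
      f"d_z = {d_z:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

Reporting all four quantities, as the revised abstract is said to do, is what lets a reader judge whether a "significant" gain is also a practically meaningful one.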

  2. Referee: [Methods/Results] The design relies exclusively on within-group pre/post self-report surveys and open-ended reflections, with no control cohort, no objective AI-literacy items, and no discussion of social-desirability or expectancy effects; this directly undermines the inference that observed changes are attributable to the mixed-cohort or ethical-scaffolding features rather than to regression to the mean or program novelty.

    Authors: We acknowledge the limitations of the within-group pre/post design. We have added an expanded limitations subsection in the discussion that explicitly addresses regression to the mean, social-desirability bias, and expectancy effects. We explain that a control cohort was not feasible given the single-cohort structure of the NSF-funded program and ethical considerations around equitable access. The study prioritized self-reported confidence and perceived relevance aligned with its focus on agency and ethical judgment; we have noted the lack of objective AI-literacy measures as a limitation and a direction for future work. We cannot retroactively add a control cohort or objective items. revision: partial

  3. Referee: [Qualitative analysis] No information is supplied on coding procedures, codebook development, inter-rater reliability, or how themes of 'responsibility, judgment, and contextual reasoning' were derived and validated, which is load-bearing for the claim of consistent qualitative patterns across cohorts.

    Authors: We have revised the methods section to provide full details on the qualitative analysis. This includes the iterative development of the codebook from the open-ended reflections, independent coding by two researchers, inter-rater reliability assessment, and the process by which themes of responsibility, judgment, and contextual reasoning were derived and validated against educator reports and cross-cohort patterns. revision: yes
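As an illustration of the inter-rater reliability step this response describes, a minimal Cohen's kappa computation on hypothetical coder labels (the theme names echo the paper's themes, but the codings are invented, not drawn from its codebook):

```python
# Hedged sketch: Cohen's kappa between two independent coders labeling
# the same excerpts. Labels below are hypothetical illustrations.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' labels over the same excerpts."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal label rates.
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / n**2
    return (observed - expected) / (1 - expected)

a = ["responsibility", "judgment", "judgment",
     "context", "responsibility", "context"]
b = ["responsibility", "judgment", "context",
     "context", "responsibility", "context"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # prints: kappa = 0.75
```

A reported kappa (or similar statistic) is exactly the kind of detail the referee asked for, since raw percent agreement alone overstates reliability when some themes dominate the corpus.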

standing simulated objections not resolved
  • Absence of a control cohort and objective AI-literacy items, which are inherent to the completed study design and cannot be addressed retrospectively.

Circularity Check

0 steps flagged

No significant circularity: empirical claims rest directly on survey and reflection data without derivations or self-referential modeling.

full rationale

The paper reports mixed-methods results from course surveys, open-ended reflections, and educator reports on learner confidence, perceived relevance, and emphasis on responsibility/judgment. No equations, fitted parameters, predictions derived from inputs, or derivation chains appear anywhere in the text. All central claims are presented as direct outcomes of the collected empirical data rather than reductions to prior definitions, self-citations, or ansatzes. The absence of any mathematical or modeling structure means there are no load-bearing steps that could reduce to the paper's own inputs by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on standard education-research assumptions that self-report instruments validly capture learning gains and that observed changes can be attributed to the described instructional features rather than external factors.

axioms (1)
  • domain assumption Self-reported confidence and relevance scales validly measure meaningful AI learning outcomes in heterogeneous adult and non-STEM populations.
    Quantitative results are presented as evidence of gains without reported validation of the instruments or controls for response bias.

pith-pipeline@v0.9.0 · 5560 in / 1272 out tokens · 32192 ms · 2026-05-14T23:27:52.094351+00:00 · methodology


Reference graph

Works this paper leans on

38 extracted references · 38 canonical work pages · 1 internal anchor

  1. [1]

    Ahmed, F. (2024). The digital divide and AI in education: Addressing equity and accessibility. AI EDIFY Journal, 1(2), 12–23

  2. [2]

    Amanuel, Y., & Krugel, J. (2025). From Play to Proficiency: A Systematic Literature Review on Learning Modalities and Transition Strategies in CS and AI Education. In 2025 IEEE Frontiers in Education Conference (FIE) (pp. 1–9). IEEE

  3. [3]

    Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent dirichlet allocation. Journal of Machine Learning Research, 3, 993–1022

  4. [4]

    Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101

  5. [5]

    Cai, C., Zhu, G., & Ma, M. (2025). A systemic review of AI for interdisciplinary learning. Education and Information Technologies, 30(7), 9641–9687

  6. [6]

    Chiu, T. K., Ahmad, Z., Ismailov, M., & Sanusi, I. T. (2024). What are artificial intelligence literacy and competency? Computers and Education Open, 6, 100171

  7. [7]

    Conde-Ruiz, J. I., et al. (2024). AI and digital technology: gender gaps in higher education. CESifo Economic Studies, 70(3), 244–270

  8. [8]

    D’Ignazio, C. (2023). A Toolkit for Restorative/Transformative Data Science. MIT Press

  9. [9]

    Dunleavy, P., & Margetts, H. (2025). Data science, artificial intelligence and the third wave of digital era governance. Public Policy and Administration, 40(2), 185–214

  10. [10]

    ExLENT. https://www.nsf.gov/funding/opportunities/exlent-experiential-learning-emerging-novel-technologies, last accessed 2026/02/01

  11. [11]

    Faccia, A., Ridon, M., & Cavaliere, L. P. L. (2025). The AI Application for an Automated Education System. In Corporate Governance, Digitalization, and Energy Transition (pp. 143–174). Apple Academic Press

  12. [12]

    Florea, N. V., & Croitoru, G. (2024). The Impact of AI-mediated Communication Strategies on Organization Performance. Journal of Self-Governance & Management Economics, 12(2)

  13. [13]

    Garrett, N., Beard, N., & Fiesler, C. (2020). More than “If Time Allows”: the role of ethics in AI education. In Proceedings of AAAI/ACM AIES (pp. 272–278)

  14. [14]

    Grootendorst, M. (2022). BERTopic: Neural topic modeling with a class-based TF-IDF procedure. arXiv:2203.05794

  15. [15]

    Hasan, M. R., & Khan, B. (2023). An AI-based intervention for improving undergraduate STEM learning. PloS One, 18(7), e0288844

  16. [16]

    He, W., Zhang, B., & Zhang, J. (2024). The impact of technology on the labor market. In ICFIED 2024 (pp. 498–504). Atlantis Press

  17. [17]

    Kasworm, C. E. (2010). Adult learners in a research university. Adult Education Quarterly, 60(2), 143–160

  18. [18]

    Kim, J., et al. (2025). Designing AI-powered learning: Adult learners’ expectations. Educational Technology Research and Development, 73(6), 3397–3421

  19. [19]

    Li, H. (2023). AI in education: Bridging the divide or widening the gap? Advances in Education, Humanities and Social Science Research, 8(1), 355

  20. [20]

    Lim, J. T., et al. (2024). Preliminary Study on the Accessibility of Low-Code Development Platforms. In AiDAS 2024 (pp. 486–491). IEEE

  21. [21]

    Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In Proceedings of CHI 2020 (pp. 1–16)

  22. [22]

    Makarius, E. E., et al. (2020). Rising with the machines: A sociotechnical framework for AI in the organization. Journal of Business Research, 120, 262–273

  23. [23]

    Mallik, S., & Gangopadhyay, A. (2023). Proactive and reactive engagement of AI methods for education: a review. Frontiers in Artificial Intelligence, 6, 1151391

  24. [24]

    Nelson, L. K. (2020). Computational grounded theory: A methodological framework. Sociological Methods & Research, 49(1), 3–42

  25. [25]

    Nguyen, A. H., et al. (2025). Integration of AI in STEM Education: A Systematic Review. In Proceedings of ICSCA 2025 (pp. 327–335)

  26. [26]

    Oschinski, M., Crawford, A., & Wu, M. (2024). AI and the future of workforce training. Center for Security and Emerging Technology

  27. [27]

    Osetskyi, V., et al. (2020). Artificial intelligence application in education: Financial implications and prospects. Financial and Credit Activity Problems of Theory and Practice, 2(33), 574–584

  28. [28]

    Oyetade, K., & Zuva, T. (2025). Advancing Equitable Education with Inclusive AI. Educational Process: International Journal, 14, e2025087

  29. [29]

    van der Linde, G., et al. (2025). Landscape of AI literacy in education: a narrative review. Discover Education, 4(1), 561

  30. [30]

    Poquet, O., & De Laat, M. (2021). Developing capabilities: Lifelong learning in the age of AI. British Journal of Educational Technology, 52(4), 1695–1708

  31. [31]

    Pedro, F., Subosa, M., Rivas, A., & Valverde, P. (2019). Artificial intelligence in education: Challenges and opportunities for sustainable development

  32. [32]

    Pradyutha, A. C. (2024). Cohort-based learning: Fostering collaborative education. Changing Landscape of Education

  33. [33]

    Shailja, S., Williams, T. J., & Pandey, A. (2025). Self-efficacy of high school students after an AI-focused pre-college program. In ASEE 2025

  34. [34]

    “Superskills” defined by NSF. https://www.nsf.gov/pubs/2019/nsf19518/nsf19518.htm

  35. [35]

    Vindigni, G. (2025). Data-Driven Disparities: How AI applications in education may perpetuate or mitigate inequality. EJSMT, 1(4), 4–54

  36. [36]

    Weerakkody, S. U. Impact of cohort mix on student experience and educational outcomes. https://ieaa.org.au/common/Uploaded%20files/Research%20Publications/2021/PUB-IEAA-Impact-of-Cohort-Mix-on-Student-Experience-and-Educational-Outcomes-Research-Digest-17.pdf

  37. [37]

    Yu, W. (2025). A Conceptual Framework for AI Literacy with a Focus on Competency. International Journal of Artificial Intelligence in Education, 1–18

  38. [38]

    Zhai, X., et al. (2020). Applying machine learning in science assessment: a system- atic review. Studies in Science Education, 56(1), 111–151