pith. machine review for the scientific record.

arxiv: 2605.12757 · v1 · submitted 2026-05-12 · ⚛️ physics.ed-ph · cs.CY

Recognition: 2 theorem links


A Framework for institutional change in the age of AI

David Perl-Nussbaum, Noah D. Finkelstein


Pith reviewed 2026-05-14 19:53 UTC · model grok-4.3

classification ⚛️ physics.ed-ph cs.CY
keywords institutional change · generative AI · STEM education · physics education · educational transformation · change models · uncertainty

The pith

Generative AI requires a new framework for institutional change in STEM education because it enters classrooms before evidence can accumulate.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper argues that traditional models for transforming STEM teaching assume stable practices with established evidence that can be evaluated and scaled across institutions. Generative AI violates this pattern as an arrival technology that reaches classrooms rapidly without a prior pedagogical research base. It identifies six dimensions where prior assumptions break down: three about the tools themselves (evidence base, rate of change, and scope) and three about the people involved (faculty, change agents, and students). For each dimension the authors derive concrete design implications, such as privileging humble local inquiries, organizing reform around pedagogical approaches rather than specific tools, repositioning change agents as facilitators, and treating students as partners. A brief case study of a faculty workshop series in a university physics department shows how the framework can guide adaptation under genuine uncertainty.

Core claim

Collectively, the six dimensions and their design implications form a new framework for adapting institutional change models to support STEM departments when conditions of genuine uncertainty prevail, as they do with generative AI.

What carries the argument

The six-dimension framework that separates tool-related properties (evidence base, rate of change, scope) from people-related properties (faculty, change agents, students) and translates each into revised design implications for reform.

If this is right

  • Reform efforts shift from scaling proven tools to supporting humble, local inquiries that evolve with the technology.
  • Change initiatives organize around broad pedagogical approaches rather than adoption of any single AI product.
  • Change agents move from expert disseminators to facilitators who help groups conduct collective inquiry.
  • Students are included as active partners in shaping how AI is used in their courses.
  • Institutions plan for ongoing uncertainty instead of waiting for definitive evidence before acting.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same six dimensions could be tested in non-STEM fields facing rapid technological arrivals, such as humanities or professional training programs.
  • Longitudinal tracking of departments that apply the framework versus those that do not could reveal measurable differences in instructor adaptation speed and student outcomes.
  • The framework may connect to existing organizational theories of high-uncertainty environments, suggesting extensions beyond education.

Load-bearing premise

The assumption that generative AI, unlike earlier educational technologies, reaches widespread classroom use before a pedagogical evidence base can form.

What would settle it

A multi-year study across several institutions that demonstrates a stable, large-scale evidence base for specific AI pedagogical practices forming quickly enough to allow traditional adoption-and-scale models to succeed without the proposed adjustments.

Figures

Figures reproduced from arXiv: 2605.12757 by David Perl-Nussbaum, Noah D. Finkelstein.

Figure 1. Overview of the framework. The first three dimensions concern the tools, the educational innovations at the center of change: their evidence base, rate of change, and scope. The other three concern people, the actors involved in the change process: faculty agency, the role of change agents, and the role of students. For each dimension described, design implications are suggested …
Figure 2. The full framework. For each of the six dimensions, the figure summarizes (a) the assumption embedded …
Figure 3. Positioning of traditional IE institutional change (blue) and AI- …
read the original abstract

Generative AI is rapidly reshaping STEM higher education. Not only are our educational practices changing, but how we think about educational transformation must adapt. Existing models of institutional change in STEM, aimed at interactive engagement, have largely followed an adoption logic: relatively stable, well-researched educational practices are evaluated and then scaled. These assumptions do not hold for generative AI, which is an arrival technology -- entering classrooms before a sufficient pedagogical evidence base could form. Building on recent decades of work on STEM institutional change, we propose a framework identifying six dimensions along which prior change models must be reconsidered in light of AI: three concerning the tools at the center of reform (the tool's evidence base, rate of change, and scope), and three concerning the people involved in change (faculty, change agents, and students). For each dimension, we examine how AI-era assumptions differ from those underlying prior interactive engagement reforms and derive design implications, including: privileging humble and local inquiries; organizing reform around pedagogical approaches rather than specific tools; repositioning change agents as facilitators of collective inquiry; and engaging students as partners in reform. Collectively, the six dimensions and design implications constitute a new framework for adapting change models to support institutions under conditions of genuine uncertainty. Finally, we illustrate how the framework may be applied through a brief case-study of a faculty workshop series carried out in a university physics department to support instructors adapting to this modern AI era.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

1 major / 2 minor

Summary. The manuscript proposes a six-dimensional framework for adapting institutional change models in STEM higher education to generative AI. It contrasts prior interactive-engagement reforms, which assume stable, evidence-based practices that can be evaluated and scaled, with AI as an 'arrival technology' that enters classrooms before sufficient pedagogical evidence forms. The dimensions cover tools (evidence base, rate of change, scope) and people (faculty, change agents, students); for each, differing assumptions are examined and design implications derived, such as privileging humble local inquiries, organizing around pedagogical approaches rather than specific tools, repositioning change agents as facilitators of collective inquiry, and engaging students as partners. The framework is illustrated via a brief case study of a physics-department faculty workshop series.

Significance. If the framework holds, it supplies a structured conceptual adaptation of existing STEM change literature for conditions of genuine uncertainty created by rapidly evolving AI. The explicit contrast with prior models and the derived implications (e.g., student partnership and focus on pedagogical approaches over tools) offer practical guidance for institutions. The work builds logically on decades of STEM reform scholarship without introducing self-referential parameters or fitted data.

major comments (1)
  1. Case-study section: the description of the physics-department workshop series is limited to a brief illustration with no specifics on activities, how individual dimensions were applied, participant feedback, or observable changes in practice. This weakens the claim that the six dimensions and implications 'constitute a new framework' by leaving readers without concrete evidence of operationalization.
minor comments (2)
  1. Abstract and framework sections: the term 'arrival technology' is introduced without a formal definition or citation to prior usage; adding one would clarify the central premise.
  2. Framework presentation: while the six dimensions are named, a summary table contrasting prior assumptions with AI-era assumptions for each dimension would improve readability and make the logical differences more explicit.

Simulated Authors' Rebuttal

1 response · 0 unresolved

We thank the referee for their constructive and positive assessment of the manuscript. We address the single major comment below.

read point-by-point responses
  1. Referee: Case-study section: the description of the physics-department workshop series is limited to a brief illustration with no specifics on activities, how individual dimensions were applied, participant feedback, or observable changes in practice. This weakens the claim that the six dimensions and implications 'constitute a new framework' by leaving readers without concrete evidence of operationalization.

    Authors: We agree that the case-study section is brief and functions primarily as an illustration rather than a detailed empirical report. The manuscript's core contribution is the six-dimensional conceptual framework derived from adapting prior STEM change literature to conditions of rapid technological uncertainty; the workshop example is included only to show one possible application. To strengthen the manuscript and better demonstrate operationalization, we will expand this section in revision to include additional specifics on workshop activities, explicit connections to each of the six dimensions, and any available participant feedback or observed shifts in practice. This expansion will remain consistent with the illustrative intent while addressing the concern about concreteness.
    revision: yes

Circularity Check

0 steps flagged

No significant circularity in the proposed framework

full rationale

The paper derives its six-dimension framework (three tool dimensions: evidence base, rate of change, scope; three people dimensions: faculty, change agents, students) and associated design implications through explicit logical contrast with the adoption assumptions of prior interactive-engagement STEM change models. No step reduces by construction to self-definition, fitted parameters, or a self-citation chain; the central claim is presented as an adaptation of external literature rather than a renaming or internal reduction. The brief case study is labeled illustrative only. The argument remains self-contained against external benchmarks in institutional change theory.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 1 invented entity

The paper assumes standard domain knowledge from STEM education change literature and posits new dimensions without new empirical data or external benchmarks.

axioms (2)
  • domain assumption: Prior models of institutional change assume relatively stable, well-researched educational practices that can be evaluated and scaled.
    Stated directly in the abstract as the basis for why AI differs.
  • domain assumption: Generative AI is an arrival technology, entering classrooms before a sufficient pedagogical evidence base could form.
    Central premise used to justify the need for a new framework.
invented entities (1)
  • Six-dimension framework for AI-era institutional change (no independent evidence)
    purpose: To identify areas where prior models must be reconsidered and derive new design implications.
    The framework itself is the proposed construct, introduced without independent empirical validation in the abstract.

pith-pipeline@v0.9.0 · 5553 in / 1357 out tokens · 78934 ms · 2026-05-14T19:53:50.109610+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

Reference graph

Works this paper leans on

9 extracted references · 9 canonical work pages

  1. Association of American Colleges and Universities. (2014). Achieving systemic change: A sourcebook for advancing and funding undergraduate STEM education. Abdurrahman, F. N., Turpen, C., & Sachmpazidi, D. (2022). A case study of cultural change: Learning to partner with students. In 2022 Physics Education Research Conference Proceedings (pp. 24–29). …

  2. Bolman, L. G., & Deal, T. E. (2017). Reframing organizations: Artistry, choice, and leadership (6th ed.). Jossey-Bass. Bond, M., Khosravi, H., De Laat, M., Bergdahl, N., Negrea, V., Oxley, E., ... & Siemens, G. (2024). A meta systematic review of artificial intelligence in higher education: A call for increased ethics, collaboration, and rigour. …

  3. Borrego, M., & Henderson, C. (2014). Increasing the use of evidence-based teaching in STEM higher education: A comparison of eight change strategies. Journal of Engineering Education, 103(2), 220–252. Bresnahan, T. F., & Trajtenberg, M. (1995). General purpose technologies 'Engines of growth'? Journal of Econometrics, 65(1), 83–108. Calvino, F., Haerle…

  4. https://www.digitaleducationcouncil.com/post/what-students-want-keyresults-from-dec-global-ai-student-survey-2024 Engeström, Y. (1999). Activity theory and individual… Perspectives on activity theory, …

  5. Finkelstein, N. D. (2025). A principled way to think about AI in education: Guidance for action based on goals, models of human learning, and use of technologies. arXiv preprint arXiv:2510.01467. Giannakos, M., Azevedo, R., Brusilovsky, P., Cukurova, M., Dimitriadis, Y., Hernandez-Leo, D., Järvelä, S., Mavrikis, M., & Rienties, B. (2025). The promise …

  6. https://doi.org/10.1119/1.19265 McDermott, L. C., & Shaffer, P. S. (2002). Tutorials in introductory physics (Vol. 2). Prentice Hall. Messeri, L., & Crockett, M. J. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature, 627(8002), 49–58. Michigan State University. (2026, January 21). MSU leads talent development …

  7. Reinholz, D. L., Pilgrim, M. E., Corbo, J. C., & Finkelstein, N. (2019). Transforming undergraduate education from the middle out with departmental action teams. Change: The Magazine of Higher Learning, 51(5), 64–70. Reinholz, D. L., White, I., & Andrews, T. (2021). Change theory in STEM higher education: A systematic review. International Journal of STEM…

  8. Ripley, D., Arthars, N., Khosronejad, M., & Markauskaite, L. (2024). Co-designing for learning across disciplines: Bringing students' perspectives into design principles via relational design. In Creating Design Knowledge in Educational Innovation (pp. 178–192). Routledge. Sajadieh, S., Fattorini, L., Perrault, R., Gil, Y., Parli, V., Santarlasci, L., Pa…

  9. Xiao, J., Bozkurt, A., Nichols, M., Pazurek, A., Stracke, C. M., Bai, J. Y., ... & Themeli, C. (2025). Venturing into the unknown: Critical insights into grey areas and pioneering future directions in educational generative AI research. TechTrends, 69(3), 582–597. Xiao, P., Chen, Y., & Bao, W. (2023). Waiting, banning, and embracing: An empirical analysis…