pith. machine review for the scientific record.

arxiv: 2604.14086 · v1 · submitted 2026-04-15 · 📊 stat.OT

Recognition: unknown

The Epidemiology of Artificial Intelligence

Bhramar Mukherjee, Emily Johnson, Harsh Parikh, Leo Hickey, Megan Ranney, Tyler McCormick

Pith reviewed 2026-05-10 11:37 UTC · model grok-4.3

classification 📊 stat.OT
keywords artificial intelligence · epidemiology · health determinants · AI exposure · environmental epidemiology · causal inference · health equity · AI governance

The pith

AI now functions as a determinant of health, and frameworks from environmental epidemiology can be used to measure its population-level effects.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper argues that AI systems shape access to health information, medical decisions, and care delivery in ways that affect entire populations. It proposes borrowing concepts from environmental epidemiology to treat AI as an exposure that can be studied for its health consequences. The authors separate ambient AI exposure, which reaches people through algorithmic curation and institutional decisions regardless of choice, from personal AI exposure through direct tool use. They note that standard experimental methods fall short for chronic population effects and illustrate the ideas with US survey data. The framework points to new requirements for study design, equity analysis, and governance of AI.

Core claim

The paper claims that AI functions as a determinant of health and that a conceptual framework adapted from environmental epidemiology can characterize its causal roles at the population level. This framework distinguishes ambient exposure through algorithmic curation and AI-mediated institutional decisions from personal exposure through voluntary use of AI tools. The authors argue that existing experimental approaches cannot capture chronic, population-scale effects and support the approach with nationally representative survey data on AI use patterns.

What carries the argument

The distinction between ambient AI exposure (algorithmic curation and institutional decisions that reach populations regardless of individual choice) and personal AI exposure (direct, volitional use of AI tools), which allows modeling AI as a causal factor in epidemiological terms.

If this is right

  • Study designs must shift from short-term experiments to methods that track chronic, population-level AI influences over time.
  • Health equity analyses need to incorporate differential exposure to AI systems across demographic groups.
  • AI governance decisions should draw on epidemiological evidence of health effects rather than technical performance alone.
  • Data collection systems will require new metrics to capture both ambient algorithmic influences and individual AI tool use.
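
The last bullet, on new metrics, can be made concrete with a sketch. The schema below is hypothetical (the field names and the composite weighting are invented for illustration, not taken from the paper): one record per person per observation window, carrying both an ambient and a personal exposure measure.

```python
from dataclasses import dataclass

# Hypothetical record schema for the dual exposure metrics the framework
# calls for. Field names and weights are illustrative, not from the paper.
@dataclass
class AIExposureRecord:
    person_id: str
    week: int                      # observation window index
    ambient_feed_share: float      # fraction of viewed content that was
                                   # algorithmically curated (0..1)
    institutional_decisions: int   # count of AI-mediated decisions applied
                                   # to this person (triage, claims, etc.)
    personal_tool_hours: float     # self-reported hours of direct AI tool use

    def ambient_index(self) -> float:
        # Toy composite: weight curation share and decision count equally,
        # capping decisions at 10 per window. The weights are arbitrary.
        return 0.5 * self.ambient_feed_share + 0.5 * min(self.institutional_decisions, 10) / 10

rec = AIExposureRecord("p001", week=3, ambient_feed_share=0.6,
                       institutional_decisions=2, personal_tool_hours=4.5)
print(round(rec.ambient_index(), 2))  # 0.4
```

Separating the ambient composite from the personal-use field keeps the two exposure pathways distinct in downstream models, which is the point of the paper's distinction.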

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Public health agencies could treat high-exposure AI environments similarly to polluted areas and develop targeted mitigation strategies.
  • Digital trace data from platforms and health records could serve as proxies for AI exposure in future cohort studies.
  • The framework invites comparisons between AI effects and other modern determinants such as social media or processed food environments.

Load-bearing premise

AI exposure can be meaningfully distinguished from other influences, measured reliably, and modeled as a causal factor in the same way as traditional environmental determinants.

What would settle it

Large-scale population studies that find no measurable association between quantified levels of ambient or personal AI exposure and specific health outcomes after controlling for confounders.

Original abstract

Artificial intelligence (AI) systems increasingly shape how people access health information, make medical decisions, and receive care -- yet epidemiology lacks frameworks for measuring AI exposure or studying its health effects at the population level. Here we argue that AI now functions as a determinant of health and propose a conceptual framework, borrowed from environmental epidemiology, for studying it. We distinguish ambient AI exposure -- algorithmic curation and AI-mediated institutional decisions that affect populations regardless of individual choice -- from personal AI exposure -- direct, volitional use of AI tools. We characterize AI's possible causal roles in epidemiological models, show that existing experimental approaches are inadequate for capturing chronic, population-level effects, and illustrate these ideas with nationally representative US survey data. We discuss implications for study design, health equity, and AI governance.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 3 minor

Summary. The paper argues that AI now functions as a determinant of health and proposes a conceptual framework, adapted from environmental epidemiology, for studying its population-level effects. It distinguishes ambient AI exposure (algorithmic curation and institutional decisions affecting populations irrespective of choice) from personal AI exposure (direct, volitional use of AI tools). It characterizes AI's possible causal roles in epidemiological models, argues that existing experimental approaches are inadequate for chronic population-level effects, illustrates the ideas with nationally representative US survey data, and discusses implications for study design, health equity, and AI governance.

Significance. If the framework holds, it offers epidemiologists a structured lens for investigating AI's health impacts at scale, potentially guiding future study designs and policy. The manuscript is credited with an explicit conceptual proposal that borrows established environmental-epidemiology distinctions, and with an illustrative analysis of US survey data that grounds the discussion.

major comments (2)
  1. [causal roles characterization] The section characterizing AI's causal roles does not specify measurable exposure metrics or dose-response relationships that would allow the ambient/personal distinction to be operationalized in standard epidemiological models; this is load-bearing for the claim that the borrowed framework can be directly applied.
  2. [existing experimental approaches] The argument that existing experimental approaches are inadequate for capturing chronic, population-level effects (in the relevant section) relies on general statements without citing or contrasting specific AI-related studies or RCTs, leaving the necessity of the new framework under-supported.
minor comments (3)
  1. [abstract and introduction] The abstract and introduction use 'determinant of health' without an explicit definition or reference to standard epidemiological usage of the term.
  2. [US survey data illustration] The survey illustration would benefit from a table or figure summarizing key exposure and outcome variables to clarify how it maps onto the proposed framework.
  3. [distinguishing ambient and personal exposure] Notation for exposure types (ambient vs. personal) is introduced descriptively but not formalized, which could be clarified with a simple schematic or definitions box.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive review and positive assessment of the conceptual framework and illustrative analysis. We address each major comment below and have revised the manuscript to incorporate the suggested improvements.

Point-by-point responses
  1. Referee: The section characterizing AI's causal roles does not specify measurable exposure metrics or dose-response relationships that would allow the ambient/personal distinction to be operationalized in standard epidemiological models; this is load-bearing for the claim that the borrowed framework can be directly applied.

    Authors: We agree that operational details are essential for applying the framework in practice. The manuscript is primarily a conceptual proposal that adapts distinctions from environmental epidemiology to AI; full specification of metrics and dose-response functions is intended as subsequent empirical work. To strengthen the paper, we have added a dedicated paragraph in the causal roles section that proposes initial measurable proxies (e.g., platform-level algorithmic exposure frequency derived from usage logs for ambient exposure, and validated self-report scales for hours of direct AI tool interaction for personal exposure) and sketches how these could be incorporated into time-varying exposure models to examine dose-response relationships in longitudinal cohorts. This addition provides a concrete bridge to operationalization while preserving the paper's scope as a framework paper. revision: yes
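
The proposed bridge from exposure proxies to dose-response analysis can be illustrated with a toy simulation. Everything below is invented for illustration (the exposure variable, effect size, and tertile split are assumptions, not the authors' analysis): a cohort is generated with a built-in linear dose effect, then incidence is tabulated by exposure tertile, the crudest precursor to the time-varying exposure models the response mentions.

```python
import random
from statistics import mean

# Simulated illustration only: a hypothetical cohort in which weekly
# personal AI tool hours carry a built-in linear effect on a binary
# health outcome. Tabulating incidence by exposure tertile is the
# simplest dose-response check before any time-varying modeling.
random.seed(0)

cohort = []
for _ in range(3000):
    hours = random.uniform(0, 20)                 # personal exposure proxy
    risk = 0.05 + 0.01 * hours                    # assumed dose effect
    outcome = 1 if random.random() < risk else 0  # binary health outcome
    cohort.append((hours, outcome))

cohort.sort(key=lambda r: r[0])
n = len(cohort)
groups = [cohort[:n // 3], cohort[n // 3:2 * n // 3], cohort[2 * n // 3:]]

for label, g in zip(["low", "mid", "high"], groups):
    hrs = mean(h for h, _ in g)
    inc = mean(o for _, o in g)
    print(f"{label:>4} tertile: mean hours {hrs:5.1f}, incidence {inc:.3f}")
```

If the simulated effect is recovered, incidence rises across tertiles; in real data the same tabulation would require confounder adjustment before any causal reading, which is exactly where the longitudinal designs discussed above come in.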

  2. Referee: The argument that existing experimental approaches are inadequate for capturing chronic, population-level effects (in the relevant section) relies on general statements without citing or contrasting specific AI-related studies or RCTs, leaving the necessity of the new framework under-supported.

    Authors: The referee correctly identifies that the discussion would benefit from concrete examples. We have revised the section on experimental limitations to include targeted citations and contrasts: for instance, we now reference recent RCTs of AI chatbots for mental health support (short-term symptom reduction but no population-level chronic exposure assessment) and observational studies of clinical decision-support algorithms (individual-level efficacy trials with limited follow-up on downstream population effects). These examples are contrasted with the chronic, ambient exposure patterns that standard RCTs are not designed to capture, thereby providing stronger empirical grounding for why complementary population-level designs are needed. revision: yes

Circularity Check

0 steps flagged

No significant circularity in conceptual framework proposal

Full rationale

The paper is explicitly a conceptual proposal that borrows an analogy from environmental epidemiology to frame AI as a health determinant. It contains no equations, derivations, fitted parameters, or quantitative predictions. The central steps—distinguishing ambient vs. personal exposure, sketching causal roles, and noting limits of experiments—are definitional and illustrative rather than reductions of any claim to its own inputs. No load-bearing self-citations or ansatzes appear; the work is self-contained as a framework sketch and does not claim to derive new results from prior fitted values or uniqueness theorems.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on treating AI as analogous to environmental exposures without new empirical grounding or operational metrics.

axioms (1)
  • domain assumption: AI exposure can be categorized and studied causally using environmental epidemiology models
    Invoked when borrowing the framework and distinguishing ambient/personal types.

pith-pipeline@v0.9.0 · 5431 in / 988 out tokens · 34682 ms · 2026-05-10T11:37:20.753647+00:00 · methodology

discussion (0)

