pith. machine review for the scientific record.

arxiv: 2605.08395 · v1 · submitted 2026-05-08 · 📊 stat.ME · stat.AP

Recognition: 1 theorem link · Lean Theorem

Statistical Design of Pragmatic Trials Using Electronic Health Record Data when Outcome Assessments are Uncontrolled and Irregular

Authors on Pith: no claims yet

Pith reviewed 2026-05-12 01:26 UTC · model grok-4.3

classification 📊 stat.ME stat.AP
keywords pragmatic trials · electronic health records · uncontrolled assessments · intervention-dependent measurement · simulation study · linear mixed models · treatment effect bias · longitudinal data analysis

The pith

In pragmatic trials using electronic health record data, models that flexibly adjust for irregular assessment timing produce unbiased treatment effect estimates even when the intervention influences how often outcomes are measured.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Pragmatic trials increasingly measure outcomes through routine electronic health records rather than fixed study visits, resulting in sparse and irregular data. When the treatment itself changes the frequency or timing of these assessments, standard analytic shortcuts introduce bias into estimates of effectiveness. Simulations built from pre-trial cohort information demonstrate that simple methods like selecting a single best score or random observation distort results substantially. Longitudinal models that incorporate time since baseline and allow flexible correlation structures recover accurate time-specific or averaged treatment effects. The work informed the primary analysis for one ongoing trial and outlines a replicable process for choosing methods in similar settings.

Core claim

Under intervention-dependent assessments, naive methods such as using the best score or a randomly selected score without adjusting for measurement timing produced substantial bias, while models that adjusted flexibly for follow-up timing estimated time-point specific or time-averaged treatment effects without bias. Among unbiased approaches, a linear mixed model with exponential correlation structure, adjustment for time since baseline, and a time-varying intervention effect was the most powerful for estimating the effect at the end of the intervention window.
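
The section above does not reproduce the model equation. A minimal sketch of what such a specification could look like is given below; the notation (Y_ij, t_ij, A_i, b_i, f, theta, rho) is ours for illustration and is not taken from the paper.

```latex
% Hedged sketch of a linear mixed model with a flexible adjustment for time since
% baseline, a time-varying intervention effect, and exponentially decaying residual
% correlation; all symbols are illustrative, not the paper's own notation.
\begin{align*}
Y_{ij} &= \beta_0 + f(t_{ij}) + \theta(t_{ij})\,A_i + b_i + \varepsilon_{ij},\\
b_i &\sim N(0,\sigma_b^2), \qquad
\operatorname{Corr}(\varepsilon_{ij},\varepsilon_{ik}) = \exp\!\left(-\,|t_{ij}-t_{ik}|/\rho\right),
\end{align*}
```

where Y_ij is the j-th outcome for individual i measured at time t_ij since baseline, A_i is the randomized intervention indicator, f is the flexible adjustment for time since baseline, and the estimand is theta evaluated at the end of the intervention window.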

What carries the argument

A simulation study that combines pre-trial cohort estimates of assessment frequency and timing with assumptions about intervention effects on measurement patterns to generate realistic sparse data and benchmark analytic methods.
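
To make the data-generating step concrete, here is a minimal sketch of such a simulation, assuming a Poisson assessment process whose rate is inflated in the intervention arm; the rates, multipliers, effect sizes, and 12-month window are placeholder assumptions, not the paper's estimated values.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(n_per_arm=500, followup_months=12,
                   base_rate=0.25,              # assessments per person-month (stand-in for a pre-trial estimate)
                   intervention_rate_mult=1.5,  # assumed multiplier on assessment frequency in the intervention arm
                   true_effect_at_end=-1.0):    # assumed true outcome difference at the end of the window
    """Generate sparse, intervention-dependent assessments for one simulated trial (illustrative only)."""
    rows = []
    for arm in (0, 1):                           # 0 = usual care, 1 = intervention
        rate = base_rate * (intervention_rate_mult if arm == 1 else 1.0)
        for i in range(n_per_arm):
            b_i = rng.normal(0.0, 2.0)           # subject-level random intercept
            n_assess = rng.poisson(rate * followup_months)
            times = np.sort(rng.uniform(0.0, followup_months, size=n_assess))
            for t in times:
                # the true treatment effect ramps up linearly to its full size at the end of follow-up
                effect = true_effect_at_end * (t / followup_months) * arm
                y = 10.0 - 0.1 * t + effect + b_i + rng.normal(0.0, 3.0)
                rows.append((arm * n_per_arm + i, arm, t, y))
    return np.array(rows)                        # columns: id, arm, time since baseline, outcome

data = simulate_trial()
print(data.shape)
```

Benchmarking then amounts to applying each candidate analysis (best score, random score, longitudinal models) to many such replicates and comparing bias and power, which is the comparison the paper reports.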

If this is right

  • Flexible adjustment for time since baseline combined with time-varying intervention effects yields unbiased estimates at the end of the intervention period.
  • Among methods that avoid bias, the linear mixed model with exponential correlation structure provides the greatest statistical power (a fitting sketch follows this list).
  • Pre-trial data can be used to simulate trial-specific assessment patterns and thereby select an appropriate primary analytic method.
  • Trials relying on uncontrolled assessments should routinely evaluate the risk of intervention-dependent measurement and choose methods accordingly.
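
The fitting sketch referenced in the list above roughly illustrates a longitudinal model with adjustment for time since baseline and a time-varying intervention effect. statsmodels' MixedLM does not expose an exponential residual correlation, so a random intercept stands in for the paper's full covariance specification and a linear time term stands in for a more flexible adjustment.

```python
import pandas as pd
import statsmodels.formula.api as smf

# `data` is the array produced by simulate_trial() in the earlier sketch.
df = pd.DataFrame(data, columns=["id", "arm", "time", "y"])

# Random-intercept linear mixed model with adjustment for time since baseline
# (here simply linear) and a time-varying intervention effect via arm:time.
model = smf.mixedlm("y ~ time + arm + arm:time", data=df, groups=df["id"])
fit = model.fit()
print(fit.summary())

# Intervention effect at the end of a 12-month window: beta_arm + 12 * beta_{arm:time}
effect_at_end = fit.params["arm"] + 12 * fit.params["arm:time"]
print("estimated effect at month 12:", round(effect_at_end, 2))
```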

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same simulation framework could be applied to other real-world data sources where measurement frequency may correlate with patient status or treatment.
  • Pre-specifying sensitivity analyses across a range of assumed intervention-assessment dependence strengths would add robustness to trial conclusions (a rough simulation sketch follows this list).
  • The recommended modeling approach might be extended to examine whether treatment effects differ across subgroups defined by their assessment patterns.
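
A rough sketch of the sensitivity analysis suggested in the list above, reusing the simulation and model-fitting sketches from earlier in this review; the grid of assessment-rate multipliers, the replicate count, and the assumed true effect are arbitrary placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Sensitivity sketch: vary the assumed effect of the intervention on assessment
# frequency and check how far the estimated end-of-window effect drifts from the
# assumed truth. simulate_trial() is the generator sketched earlier in this review.
multipliers = [1.0, 1.25, 1.5, 2.0]   # candidate intervention-assessment dependence strengths (arbitrary)
n_reps = 50                           # Monte Carlo replicates per scenario (arbitrary)
truth = -1.0                          # assumed true effect at the end of the window

for mult in multipliers:
    estimates = []
    for _ in range(n_reps):
        data = simulate_trial(intervention_rate_mult=mult, true_effect_at_end=truth)
        df = pd.DataFrame(data, columns=["id", "arm", "time", "y"])
        fit = smf.mixedlm("y ~ time + arm + arm:time", data=df, groups=df["id"]).fit()
        estimates.append(fit.params["arm"] + 12 * fit.params["arm:time"])
    print(f"rate multiplier {mult}: mean bias {np.mean(estimates) - truth:+.3f}")
```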

Load-bearing premise

The simulation depends on assumptions about how the intervention alters assessment frequency and timing, which are then combined with pre-trial cohort estimates to represent the trial's data-generating process.

What would settle it

Re-running the recommended linear mixed model on a dataset where assessment timing is documented to be unaffected by treatment, and finding that bias nevertheless remains, would indicate the adjustment does not reliably remove bias.

Figures

Figures reproduced from arXiv: 2605.08395 by Jennifer F. Bobb, Katharine A. Bradley, Lynn L. DeBar, Melissa L. Anderson, Noorie Hyun, Sungtaek Son.

Figure 1. Frequency and timing of PHQ-9 outcomes from retrospective cohort data prior to the MI-CARE trial. Summaries are shown among individuals with at least one follow-up measure. In Panel (B), the percentages at the bottom of the plot indicate the percentage of individuals who have at least one follow-up measure within the given month (percentages do not sum to 100 since individuals can have measures in multiple months).
original abstract

Pragmatic trials increasingly define outcomes using real-world data such as electronic health records, where assessments are collected during routine care rather than at fixed timepoints. Consequently, these uncontrolled assessments may be irregular, sparse, and affected by the intervention (intervention-dependent assessments), which can lead to biased treatment effect estimates. We developed a simulation study to inform the statistical approach for trials with uncontrolled assessments, which we applied to the MI-CARE pragmatic trial. Using a pre-trial cohort mimicking eligibility and outcome measurement, we estimated assessment frequency and timing and combined these estimates with assumptions about how the intervention effects might impact assessment. We simulated sparse and intervention-dependent assessments and compared single-measure approaches with longitudinal models using all scores. Under intervention-dependent assessments, we found that naive methods such as using the best score or using a randomly selected score without adjusting for measurement timing produced substantial bias. Models that adjusted flexibly for the follow-up timing estimated time-point specific or time-averaged treatment effects without bias. Simulation results informed the selection of the statistical approach for the MI-CARE trial. Among unbiased methods, the most powerful was a linear mixed model with exponential correlation structure, adjustment for time since baseline, and a time-varying intervention effect to estimate the intervention effect at the end of the intervention window. Future studies can use pre-trial data to conduct a simulation study tailored to the trial's data features to inform the analytic approach. Trials with uncontrolled assessments should consider the potential for intervention-dependent assessments and select an appropriate method to avoid bias.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper develops a simulation framework using pre-trial EHR cohort data to estimate assessment frequency/timing, then overlays assumptions about how an intervention alters those patterns to generate sparse, intervention-dependent outcome data. It compares naive single-score methods (best score, random score) against longitudinal models that flexibly adjust for follow-up time, finds substantial bias in the former and none in the latter under the simulated conditions, and uses the results to select a linear mixed model with exponential correlation, time-since-baseline adjustment, and time-varying treatment effect for the MI-CARE pragmatic trial.

Significance. If the simulation assumptions match the actual trial data-generating process, the work supplies a practical, pre-trial-data-driven procedure for choosing unbiased estimators in pragmatic trials with uncontrolled assessments. The emphasis on intervention-dependent assessment as a distinct source of bias, together with the use of real pre-trial data to tailor the simulation, is a constructive contribution to statistical design for EHR-based trials.

major comments (2)
  1. [Simulation design] Simulation design (Methods and Results sections): the intervention-dependent assessment patterns are generated by combining pre-trial frequency/timing estimates with explicit functional assumptions on how the intervention changes visit rates and timing; no sensitivity analyses or alternative functional forms are reported. Because all bias comparisons and the subsequent model selection for MI-CARE rest on these assumptions, deviations in the true dependence structure would invalidate the reported superiority of the linear mixed model.
  2. [Application to MI-CARE] Application to MI-CARE (Discussion and analytic-plan section): the paper recommends the linear mixed model with exponential correlation and time-varying effect on the basis of simulation performance under the chosen assumptions. Without either (a) external validation of the assumptions against MI-CARE pilot data or (b) a pre-specified robustness check that re-runs the simulation under plausible alternative dependence structures, the recommendation remains conditional and potentially non-transportable to the actual trial.
minor comments (2)
  1. [Abstract] The abstract and simulation description omit key operating characteristics (number of Monte Carlo replicates, sample size per arm, exact parameter values for the pre-trial estimates and intervention-effect multipliers).
  2. [Statistical methods] Notation for the time-varying treatment effect and the exponential correlation structure should be defined explicitly with reference to the model equation used in the MI-CARE analysis.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive comments and positive evaluation of the significance of our work. We address each of the major comments below, providing clarifications and indicating revisions to the manuscript where appropriate.

point-by-point responses
  1. Referee: [Simulation design] Simulation design (Methods and Results sections): the intervention-dependent assessment patterns are generated by combining pre-trial frequency/timing estimates with explicit functional assumptions on how the intervention changes visit rates and timing; no sensitivity analyses or alternative functional forms are reported. Because all bias comparisons and the subsequent model selection for MI-CARE rest on these assumptions, deviations in the true dependence structure would invalidate the reported superiority of the linear mixed model.

    Authors: We agree that the simulation results are conditional on the specific functional assumptions used to model the intervention's effect on assessment patterns. These assumptions were derived from discussions with the MI-CARE trial investigators regarding plausible mechanisms by which the intervention might influence visit frequency and timing. To strengthen the manuscript, we will add sensitivity analyses in the revised Methods and Results sections. Specifically, we will report results under alternative assumptions, such as multiplicative changes in visit rates and shifts in timing distributions, to demonstrate that the unbiased performance of the longitudinal models holds under these variations. This will mitigate concerns about the robustness of the model selection. revision: yes

  2. Referee: [Application to MI-CARE] Application to MI-CARE (Discussion and analytic-plan section): the paper recommends the linear mixed model with exponential correlation and time-varying effect on the basis of simulation performance under the chosen assumptions. Without either (a) external validation of the assumptions against MI-CARE pilot data or (b) a pre-specified robustness check that re-runs the simulation under plausible alternative dependence structures, the recommendation remains conditional and potentially non-transportable to the actual trial.

    Authors: We acknowledge that the recommendation is based on the simulation under the primary assumptions and that this limits its direct transportability without further checks. In the revised manuscript, we will expand the Discussion to explicitly state the assumptions and their potential impact. Furthermore, we will incorporate a pre-specified robustness check into the analytic plan for the MI-CARE trial, outlining that if pilot data or interim analyses suggest different dependence structures, the simulation will be re-run with alternative forms to confirm the choice of model. We believe this addresses the concern by making the approach more adaptable, while noting that full external validation would require access to pilot data which is not yet available for this analysis. revision: partial

Circularity Check

0 steps flagged

No significant circularity; simulation uses independent pre-trial cohort estimates plus explicit external assumptions

full rationale

The paper estimates assessment frequency and timing from a pre-trial cohort that mimics eligibility criteria, then overlays separate assumptions about how the intervention alters those patterns to generate simulated data. It compares analytic methods under those simulations and selects the linear mixed model for the MI-CARE trial. This is forward simulation and method evaluation, not a closed loop where any reported bias result or model choice reduces to a fitted parameter or self-definition by construction. No self-citations, uniqueness theorems, or ansatzes imported from prior author work appear as load-bearing steps. The derivation chain remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

2 free parameters · 1 axiom · 0 invented entities

Central claim depends on the representativeness of the pre-trial cohort for future trial assessment patterns and on the validity of the chosen assumptions linking intervention to assessment behavior; no new entities are postulated.

free parameters (2)
  • pre-trial assessment frequency and timing estimates
    Derived from pre-trial cohort mimicking eligibility and outcome measurement to set simulation parameters.
  • intervention impact assumptions on assessments
    Combined with pre-trial estimates to generate intervention-dependent assessment scenarios.
axioms (1)
  • domain assumption: Pre-trial cohort data accurately reflects the assessment patterns expected in the actual trial population.
    Invoked when using the cohort to estimate frequency and timing for simulations.

pith-pipeline@v0.9.0 · 5593 in / 1312 out tokens · 45889 ms · 2026-05-12T01:26:58.831702+00:00 · methodology


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

  • IndisputableMonolith/Foundation/RealityFromDistinction.lean · reality_from_one_distinction · relation: unclear

    Relation between the paper passage and the cited Recognition theorem:

    We developed a simulation study framework to inform the choice of statistical method for trials with uncontrolled, potentially intervention-dependent assessments... Models that adjusted flexibly for the follow-up timing estimated time-point specific or time-averaged treatment effects without bias.

What do these tags mean?

matches: The paper's claim is directly supported by a theorem in the formal canon.
supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses: The paper appears to rely on the theorem as machinery.
contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

40 extracted references · 40 canonical work pages

  1. [1]

    Why are so few randomized trials useful, and what can we do about it?

    Zwarenstein M, Oxman A. Why are so few randomized trials useful, and what can we do about it? J Clin Epidemiol. 2006 Nov;59(11):1125–6. doi:10.1016/j.jclinepi.2006.05.010

  2. [2]

    The role for pragmatic randomized controlled trials (pRCTs) in comparative effectiveness research

    Chalkidou K, Tunis S, Whicher D, Fowler R, Zwarenstein M. The role for pragmatic randomized controlled trials (pRCTs) in comparative effectiveness research. Clin Trials. 2012 Aug;9(4):436–46

  3. [3]

    A PRagmatic-Explanatory Continuum Indicator Summary (PRECIS): A tool to help trial designers

    Thorpe KE, Zwarenstein M, Oxman AD, Treweek S, Furberg CD, Altman DG, et al. A PRagmatic-Explanatory Continuum Indicator Summary (PRECIS): A tool to help trial designers. J Clin Epidemiol. 2009 May;62(5):464–75

  4. [4]

    The PRECIS-2 tool: designing trials that are fit for purpose

    Loudon K, Treweek S, Sullivan F, Donnan P, Thorpe KE, Zwarenstein M. The PRECIS-2 tool: designing trials that are fit for purpose. BMJ. 2015 May 8;350:h2147. doi:10.1136/bmj.h2147 PubMed PMID: 25956159

  5. [5]

    NIH Collaboratory Rethinking Clinical Trials - The Living Textbook

    Rethinking Clinical Trials [Internet]. [cited 2024 Jul 23]. NIH Collaboratory Rethinking Clinical Trials - The Living Textbook. Available from: https://rethinkingclinicaltrials.org/

  6. [6]

    Challenges and Opportunities for Using Big Health Care Data to Advance Medical Science and Public Health

    Shortreed SM, Cook AJ, Coley RY, Bobb JF, Nelson JC. Challenges and Opportunities for Using Big Health Care Data to Advance Medical Science and Public Health. Am J Epidemiol. 2019 May 1;188(5):851–61

  7. [7]

    SPIRIT 2013 explanation and elaboration: Guidance for protocols of clinical trials

    Chan AW, Tetzlaff JM, Gotzsche PC, Altman DG, Mann H, Berlin JA, et al. SPIRIT 2013 explanation and elaboration: Guidance for protocols of clinical trials. BMJ. 2013 Jan 8;346:e7586. PubMed Central PMCID: PMC3541470

  8. [8]

    Analysis of longitudinal data

    Diggle P, Heagerty P, Liang KY, Zeger S. Analysis of longitudinal data. Second. Oxford; New York: Oxford University Press; 2002

  9. [9]

    Randomized Trials With Repeatedly Measured Outcomes: Handling Irregular and Potentially Informative Assessment Times

    Pullenayegum EM, Scharfstein DO. Randomized Trials With Repeatedly Measured Outcomes: Handling Irregular and Potentially Informative Assessment Times. Epidemiol Rev. 2022 Dec 21;44(1):121–37. PubMed Central PMCID: PMC10362939

  10. [10]

    The estimands framework: A primer on the ICH E9(R1) addendum

    Kahan BC, Hindley J, Edwards M, Cro S, Morris TP. The estimands framework: A primer on the ICH E9(R1) addendum. BMJ. 2024 Jan 23;384. PubMed Central PMCID: PMC10802140

  11. [11]

    ICH E9 (R1) addendum on estimands and sensitivity analysis in clinical trials to the guideline on statistical principles for clinical trials

    Committee for Medicinal Products for Human Use. ICH E9 (R1) addendum on estimands and sensitivity analysis in clinical trials to the guideline on statistical principles for clinical trials. European Medicines Agency; 2020 Feb. p. 19

  12. [12]

    Joint analysis of longitudinal data and recurrent episodes data with application to medical cost analysis

    Zhu L, Zhao H, Sun J, Pounds S, Zhang H. Joint analysis of longitudinal data and recurrent episodes data with application to medical cost analysis. Biom J. 2013 Jan;55(1):5–16

  13. [13]

    Longitudinal data subject to irregular observation: A review of methods with a focus on visit processes, assumptions, and study design

    Pullenayegum EM, Lim LS. Longitudinal data subject to irregular observation: A review of methods with a focus on visit processes, assumptions, and study design. Stat Methods Med Res. 2016 Dec;25(6):2992–3014

  14. [14]

    Causal inference with longitudinal data subject to irregular assessment times

    Pullenayegum EM, Birken C, Maguire J. Causal inference with longitudinal data subject to irregular assessment times. Stat Med. 2023 Jun 30;42(14):2361–93

  15. [15]

    Multiple outputation for the analysis of longitudinal data subject to irregular observation

    Pullenayegum EM. Multiple outputation for the analysis of longitudinal data subject to irregular observation. Stat Med. 2016 May 20;35(11):1800–18

  16. [16]

    Analysis of longitudinal data from outcome-dependent visit processes: Failure of proposed methods in realistic settings and potential improvements

    Neuhaus JM, McCulloch CE, Boylan RD. Analysis of longitudinal data from outcome-dependent visit processes: Failure of proposed methods in realistic settings and potential improvements. Stat Med. 2018 Dec 20;37(29):4457–71

  17. [17]

    Semiparametric regression analysis of longitudinal data with informative drop-outs

    Lin DY, Ying Z. Semiparametric regression analysis of longitudinal data with informative drop-outs. Biostatistics. 2003 Jul;4(3):385–98

  18. [18]

    Joint modeling and analysis of longitudinal data with informative observation times

    Liang Y, Lu W, Ying Z. Joint modeling and analysis of longitudinal data with informative observation times. Biometrics. 2009 Jun;65(2):377–84

  19. [19]

    Mixed-effects models for health care longitudinal data with an informative visiting process: A Monte Carlo simulation study

    Gasparini A, Abrams KR, Barrett JK, Major RW, Sweeting MJ, Brunskill NJ, et al. Mixed-effects models for health care longitudinal data with an informative visiting process: A Monte Carlo simulation study. Stat Neerl. 2020 Feb;74(1):5–23. PubMed Central PMCID: PMC6919310

  20. [20]

    Multiple Outputation: Inference for Complex Clustered Data by Averaging Analyses from Independent Data

    Follmann D, Proschan M, Leifer E. Multiple Outputation: Inference for Complex Clustered Data by Averaging Analyses from Independent Data. Biometrics. 2003;59(2):420–9

  21. [21]

    A joint modeling approach to data with informative cluster size: Robustness to the cluster size model

    Chen Z, Zhang B, Albert PS. A joint modeling approach to data with informative cluster size: Robustness to the cluster size model. Stat Med. 2011 Jul 10;30(15):1825–36. PubMed Central PMCID: PMC3115426

  22. [22]

    Semiparametric modeling of repeated measurements under outcome-dependent follow-up

    Buzkova P, Lumley T. Semiparametric modeling of repeated measurements under outcome-dependent follow-up. Stat Med. 2009 Mar 15;28(6):987–1003

  23. [23]

    Semiparametric and Nonparametric Regression Analysis of Longitudinal Data

    Lin DY, Ying Z. Semiparametric and Nonparametric Regression Analysis of Longitudinal Data. J Am Stat Assoc. 2001 Mar 1;96(453):103–26

  24. [24]

    Analysis of Longitudinal Data with Irregular, Outcome-Dependent Follow-Up

    Lin H, Scharfstein DO, Rosenheck RA. Analysis of Longitudinal Data with Irregular, Outcome-Dependent Follow-Up. J R Stat Soc Series B Stat Methodol. 2004;66(3):791–813

  25. [25]

    Regression Analysis of Longitudinal Data with Time-Dependent Covariates and Informative Observation Times

    Song X, Mu X, Sun L. Regression Analysis of Longitudinal Data with Time-Dependent Covariates and Informative Observation Times. Scand Stat Theory Appl. 2012;39(2):248–58

  26. [26]

    Semiparametric analysis of longitudinal data with informative observation times

    Sun L quan, Mu X yun, Sun Z hua, Tong X wei. Semiparametric analysis of longitudinal data with informative observation times. Acta Math Appl Sin. 2011 Jan 1;27(1):29–42

  27. [27]

    Joint Analysis of Longitudinal Data With Informative Observation Times and a Dependent Terminal Event

    Sun L, Song X, Zhou J, Liu L. Joint Analysis of Longitudinal Data With Informative Observation Times and a Dependent Terminal Event. J Am Stat Assoc. 2012 Jun 1;107(498):688–700

  28. [28]

    Tools to implement measurement-based care (MBC) in the treatment of opioid use disorder (OUD): Toward a consensus

    Rush AJ, Gore-Langton RE, Bart G, Bradley KA, Campbell CI, McKay J, et al. Tools to implement measurement-based care (MBC) in the treatment of opioid use disorder (OUD): Toward a consensus. Addict Sci Clin Pract. 2024 Feb 28;19(1):14. PubMed Central PMCID: PMC10902994

  29. [29]

    Measurement-based care using DSM-5 for opioid use disorder: can we make opioid medication treatment more effective?

    Marsden J, Tai B, Ali R, Hu L, Rush AJ, Volkow N. Measurement-based care using DSM-5 for opioid use disorder: can we make opioid medication treatment more effective? Addiction. 2019 Aug;114(8):1346–53. PubMed Central PMCID: PMC6766896

  30. [30]

    Biased and unbiased estimation in longitudinal studies with informative visit processes

    McCulloch CE, Neuhaus JM, Olin RL. Biased and unbiased estimation in longitudinal studies with informative visit processes. Biometrics. 2016 Dec;72(4):1315–24. doi:10.1111/biom.12501 PubMed Central PMCID: PMC5026863

  31. [31]

    A patient-centered nurse-supported primary care-based collaborative care program to treat opioid use disorder and depression: Design and protocol for the MI-CARE randomized controlled trial

    DeBar LL, Bushey MA, Kroenke K, Bobb JF, Schoenbaum M, Thompson EE, et al. A patient-centered nurse-supported primary care-based collaborative care program to treat opioid use disorder and depression: Design and protocol for the MI-CARE randomized controlled trial. Contemp Clin Trials. 2023 Apr;127:107124. PubMed Central PMCID: PMC10065939

  32. [32]

    A new design for randomized clinical trials

    Zelen M. A new design for randomized clinical trials. N Engl J Med. 1979 May 31;300(22):1242–5

  33. [33]

    Zelen design clinical trials: Why, when, and how

    Simon GE, Shortreed SM, DeBar LL. Zelen design clinical trials: Why, when, and how. Trials. 2021 Aug 17;22(1):541. PubMed Central PMCID: PMC8371763

  34. [34]

    Statistical plasmode simulations-Potentials, challenges and recommendations

    Schreck N, Slynko A, Saadati M, Benner A. Statistical plasmode simulations-Potentials, challenges and recommendations. Stat Med. 2024 Apr 30;43(9):1804–25. doi:10.1002/sim.10012

  35. [35]

    A new look at the statistical model identification

    Akaike H. A new look at the statistical model identification. IEEE Trans Autom Control. 1974 Dec;19:716–23

  36. [36]

    Review of methods for handling confounding by cluster and informative cluster size in clustered data

    Seaman S, Pavlou M, Copas A. Review of methods for handling confounding by cluster and informative cluster size in clustered data. Stat Med. 2014 Dec 30;33(30):5371–87

  37. [37]

    On regression adjustment for the propensity score

    Vansteelandt S, Daniel RM. On regression adjustment for the propensity score. Stat Med. 2014 Oct 15;33(23):4053–72

  38. [38]

    Causal Models and Learning from Data: Integrating Causal Modeling and Statistical Estimation

    Petersen ML, van der Laan MJ. Causal Models and Learning from Data: Integrating Causal Modeling and Statistical Estimation. Epidemiology. 2014;25(3):418–26. doi:10.1097/EDE.0000000000000078

  39. [39]

    Causal Inference: What If

    Hernán MA, Robins JM. Causal Inference: What If. Boca Raton: Chapman & Hall/CRC; 2020

  40. [40]

    Longitudinal studies that use data collected as part of usual care risk reporting biased results: A systematic review

    Farzanfar D, Abumuamar A, Kim J, Sirotich E, Wang Y, Pullenayegum E. Longitudinal studies that use data collected as part of usual care risk reporting biased results: A systematic review. BMC Med Res Methodol. 2017 Sep 6;17(1):133. PubMed Central PMCID: PMC5588621
    Farzanfar D, Abumuamar A, Kim J, Sirotich E, Wang Y, Pullenayegum E. Longitudinal studies that use data collected as part of usual care risk reporting biased results: A systematic review. BMC Med Res Methodol. 2017 Sep 6;17(1):133. PubMed Central PMCID: PMC5588621. Page 18 of 21 Figure 1. Title: Frequency and timing of PHQ-9 outcomes from retrospective co...