    Preventive Medicine 120 (2019) 119–125

    Reports at participating clinics were updated nightly through EHR data on eligibility, mailing, and FIT completion status, with completion representing the processing and reporting of a returned FIT. Four to six months after clinic staff training, a plan-do-study-act (PDSA) improvement process was facilitated during which participating clinics identified strategies to enhance reach or effectiveness (Coury et al., 2017). The STOP CRC intervention had three basic elements (introductory letter, FIT kit, and reminder letter); however, organizations tailored implementation to their individual systems.
    The primary study outcome was the clinic-level proportion of eligible adults during the accrual interval (February 2014–February 2015) who completed FIT testing within 12 months, or through August 2015 (after which study tools were made available to usual care clinics). A secondary outcome was the clinic-level proportion of participants receiving any CRC screening (FIT, sigmoidoscopy, or colonoscopy) during the evaluation interval. Implementation was calculated as the clinic-level proportion of participants mailed an introductory letter who subsequently ordered a FIT during the evaluation interval. This allowed mailed FITs to be distinguished from those distributed in-clinic.
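    The clinic-level proportions above can be sketched in a few lines. This is an illustrative example only; the record structure, field names, and values are hypothetical and not drawn from the trial data.

```python
# Hypothetical sketch of the clinic-level outcome calculations; each record
# is (clinic_id, eligible, completed_fit, any_crc_screening). Data are
# illustrative, not trial results.
records = [
    ("A", True, True, True),
    ("A", True, False, True),
    ("A", True, False, False),
    ("B", True, True, True),
    ("B", True, False, False),
]

def clinic_proportions(records):
    """Return per-clinic proportions of eligible adults completing a FIT
    (primary outcome) and receiving any CRC screening (secondary outcome)."""
    totals = {}
    for clinic, eligible, fit, any_screen in records:
        if not eligible:
            continue  # outcomes are computed over eligible adults only
        n, n_fit, n_any = totals.get(clinic, (0, 0, 0))
        totals[clinic] = (n + 1, n_fit + fit, n_any + any_screen)
    return {c: (n_fit / n, n_any / n) for c, (n, n_fit, n_any) in totals.items()}

print(clinic_proportions(records))
# clinic A: 1/3 FIT completion, 2/3 any screening; clinic B: 1/2 and 1/2
```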
    2.5. Lagged analysis
    While the planned analysis included all individuals accrued after EHR tools were provided to clinics on February 4, 2014 (the date of randomization), no clinic began printing letters until at least June 2014, and some did not begin until spring 2015. This delay in implementation allowed clinics to address site-specific issues, such as conducting staff training in EHR tools, obtaining supplies, and dealing with staff turnover. To account for this delay, analyses were repeated using a “lagged” dataset that included only individuals accrued between June 4, 2014 and February 3, 2015. As with the primary dataset, outcomes were assessed through August 3, 2015, after which intervention materials were made available to usual care clinics.
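    The lagged accrual window reduces to a simple date filter. The function and variable names below are hypothetical; only the dates come from the text above.

```python
# Illustrative sketch of the "lagged" accrual window; names are hypothetical.
from datetime import date

ACCRUAL_START = date(2014, 2, 4)   # randomization; EHR tools provided
LAG_START     = date(2014, 6, 4)   # start of lagged accrual window
ACCRUAL_END   = date(2015, 2, 3)   # end of accrual
FOLLOWUP_END  = date(2015, 8, 3)   # outcomes assessed through this date

def in_lagged_cohort(accrual_date):
    """True if an individual's accrual date falls in the lagged window."""
    return LAG_START <= accrual_date <= ACCRUAL_END
```

Individuals accrued between February 4 and June 3, 2014 are included in the primary dataset but excluded by this filter.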
    2.6. Economic outcome
    The primary analytic outcome is an incremental cost-effectiveness ratio (ICER), the additional cost per outcome for an intervention that improves outcomes over a reference strategy (here, usual care). The ICER was calculated as (cost_i − cost_uc) / (effect_i − effect_uc), where i = intervention and uc = usual care. For tractability, as well as to account for differences in clinic size across organizations, the number of completed FITs adjusted for the number of screening-eligible patients (SEPs) was used as the effect measure, rather than the proportion of such adults with completed FITs. We calculated the ICER overall as well as for each participating organization using both the primary and lagged trial outcomes.
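    The ICER formula can be illustrated directly. The dollar amounts and FIT counts below are made-up numbers for demonstration, not trial results.

```python
# Minimal sketch of the ICER calculation defined above; inputs are
# illustrative, not trial data.
def icer(cost_i, cost_uc, effect_i, effect_uc):
    """Incremental cost per additional outcome unit (here, per additional
    completed FIT, adjusted for the number of screening-eligible patients)."""
    return (cost_i - cost_uc) / (effect_i - effect_uc)

# Hypothetical: intervention costs $50,000 vs. $10,000 for usual care,
# yielding 500 vs. 300 completed FITs over a fixed number of SEPs.
print(icer(50_000, 10_000, 500, 300))  # → 200.0 dollars per additional FIT
```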
    Intervention delivery costs were defined as the value of resources used to develop, implement, and maintain the screening intervention over the trial period and were measured from the organizational perspective (Basu, 2016). Research-related costs were excluded. Intervention components were classified as labor (e.g., mailing activities) or non-labor (e.g., FIT kits).
    To capture labor resources, the research team developed a series of spreadsheets for clinic staff to complete. The spreadsheets were organized in an activity-based costing format (Lee et al., 2016), disaggregating the STOP CRC intervention into a series of activities classified in a few categories: data organization and management, staff training, dissemination process, program management, test processing, and delivery support (Table 1). Program management was defined as billing adjustments, PDSA meetings, and provider engagement