NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Developing New Methods for Comparing Treatments in Case-Control Studies

, MD, ScD, , PhD, , PhD, , MD, MS, , MD, PhD, , MS, , MS, , MS, RD, , PhD, , PhD, and , MD, DrPH.

Author Information and Affiliations

Structured Abstract

Background:

Comparative effectiveness research (CER) and observational analyses of safety and off-target effects of drugs lie at the heart of patient-centered outcomes research. Emulating the design and analysis of a target trial using a cohort design (causal CER methods) reduces potential for several major biases in observational CER. Large cohort studies often use electronic medical record (EMR) data without validating key variables due to feasibility constraints. Using a case-control design provides the opportunity to reduce measurement bias by validating measures of eligibility, exposure, confounders, and outcome using resource-intensive data collection methods such as medical record review within a much smaller population compared with the entire cohort. However, causal CER methods for case-control studies do not yet exist, leaving researchers to choose between advanced analytical methods applied to potentially lower-quality data from a cohort design and a conventional case-control design with higher-quality data.

Objectives:

To reduce bias in CER by developing methods to emulate the design and analysis of a target trial using high-quality measurements and a case-control design.

Methods:

We developed a guideline and an analytical program to emulate the design and analysis of a target trial using a case-control design. The guideline provides details on how the data should be structured and how to use the accompanying SAS macro to implement the observational analogues of intention-to-treat (ITT) and per-protocol analyses in a case-control design. We conducted 2 sets of interviews with 20 investigators and data analysts and incorporated their comments into the guideline. For our clinical example, we used cohort data from Kaiser Permanente Washington and a case-control study (nested within the same system) that had previously validated case and control status using medical record reviews. We emulated the design and analysis of a target trial of statin therapy among healthy participants. The primary outcome was fatal or nonfatal myocardial infarction (MI).

Results:

Using a cohort design to emulate a primary prevention statin trial, we selected 76 020 eligible participants into an analytic cohort. During an average follow-up of 73 months, we observed 1318 events. Compared with a pooled hazard ratio (HR) of 0.69 (95% CI, 0.60-0.79) reported in a meta-analysis of randomized controlled trials, the observational analogue of the ITT HR using a cohort design and outcomes based on ICD codes was 1.01 (0.91-1.12), and the ITT odds ratio (OR) using a case-control design and validated case and control status was 0.80 (0.69-0.92). A case-control analysis that measured eligibility, exposures, and confounders at the index date yielded an adjusted OR of 1.12 (0.96-1.31), an estimate in the direction of a harmful effect of statins on MI. Adherence to assigned treatment in our study population was low, with 70% of initiators discontinuing treatment within 5 years. After adjusting for imperfect adherence by censoring nonadherent person-times and adjusting for time-varying determinants of adherence via inverse probability weighting, the per-protocol HR was 0.80 (0.67-0.94) in the cohort design using ICD codes and the corresponding OR was 0.71 (0.58-0.87) in the case-control design using validated cases and controls.

Conclusions:

We were able to replicate the results of statin trials on MI prevention using a causal case-control design and outcome data that were validated using medical record reviews. In contrast, a cohort design that did not use validated outcomes produced null results due to measurement error and lower adherence, and a conventional case-control design with validated events yielded substantially biased results due to a different definition of eligibility and exposure and inappropriate adjustment for confounders.

Limitations:

The causal analysis of case-control data requires that the case-control study be nested in a health care system with longitudinal EMR data, which does not apply to many existing studies. Our analyses used only 1 example; further research is needed to examine whether these findings generalize to other drug exposures and health outcomes.

Background

Evaluating the comparative effectiveness and safety of available clinical interventions lies at the heart of patient-centered outcomes research (PCOR). Although randomized controlled trials (RCTs) are considered the preferred study design to provide such evidence, conducting an RCT is not always possible due to practical, ethical, or financial constraints. Therefore, observational studies are needed to provide valuable evidence to guide clinical decision-making. These studies increasingly use electronic medical records (EMRs) or administrative data sets that provide information on various health-related factors. However, measurement error in EMR data combined with the potential for confounding in observational studies may lead to substantial bias in comparative effectiveness studies.1

The potential for measurement error in EMR-based studies is well recognized.2-5 For example, a review of 40 validation studies conducted on the Clinical Practice Research Datalink in the United Kingdom showed that positive predictive values for acute conditions were less than 50%, and there was substantial heterogeneity in the reliability of diagnoses across disease outcomes.6 In another study using EMR data, the relative risk of upper gastrointestinal bleeding due to use of nonsteroidal anti-inflammatory drugs increased from 2.6 when all cases were analyzed to 3.7 when only confirmed cases were included.7 Apart from the occurrence of the outcome itself, the exact date of the event8 or the type of event may not be correctly recorded in the data set.9 Similarly, depending on the type of data set, important eligibility or exclusion criteria such as a history of events or the presence of symptoms of an early-stage disease may require validation. Finally, measurements of treatment and potential confounders may need further validation. Studies have shown that failure to validate measurements of key variables may create substantial bias.2,3,10

Automated validation methods have been developed and evaluated in specific settings, but in many other settings, such methods require unrealistic assumptions or may lead to increased uncertainty.11-13 Another approach is to validate important measures of eligibility, treatment, confounders, and outcomes by having trained staff collect additional data through medical record reviews, mailed questionnaires, or linkage to other data sets.5,14 Some recent advances in developing computer software can facilitate this process.15 The high cost of data collection via medical record abstraction and surveys has led many researchers to forego conducting a cohort analysis and instead use a nested case-control design that allows them to spend the same resources on acquiring high-quality information for cases and a relatively small number of controls compared with a full cohort study. Collecting supplemental information about the disease outcomes is even more important in PCOR because patient-centered outcomes often incorporate measures that are multidimensional, difficult to measure, and prone to error.16 What may further increase the importance and complexity of validation is the simultaneous use of several data sets to increase statistical power in a pooled analysis, as is proposed, for example, in the FDA Sentinel Initiative.17 Data sets may differ in measurement quality, and each may require appropriate, but not necessarily identical, methods of validation.

Previous case-control analyses have demonstrated the feasibility of reducing measurement error by collecting more information on case/control status or eligibility, exposures, and confounders in different clinical contexts, such as the effect of statins on myocardial infarction (MI), and the effect of various drugs on pneumonia10,18 among many others.5,19,20 A modern case-control design using incidence density or risk-set sampling should be conceptualized as a sample of a large cohort and can be used to answer the same causal questions when the quantity of interest is a proportional effect (eg, odds ratio [OR])—as opposed to odds/risk/rate differences—without requiring the rare disease assumption (ie, the outcome occurring with an incidence of ≤10%).21-23 However, case-control studies are prone to bias when they measure exposure and confounders at the index date, which is the date on which the case definition was met or the control was sampled (Figure 1). The main forms of bias that can result from this design choice are (1) prevalent user (or differential survival) bias and (2) bias due to inappropriate adjustment for time-dependent confounders (also known as collider stratification bias).

Figure 1. Schematic Diagram of a Conventional Case-Control Study With Incidence Density Sampling, With 2 Controls Sampled for Case A.

Prevalent users are participants who are found to be under treatment at a particular point during follow-up (often at the index date), whereas incident users are participants who have just started a course of treatment without being previously exposed to treatment. Several authors have warned24 or have empirically demonstrated25-27 that comparisons involving prevalent users may lead to bias. Prevalent users have survived and continued treatment, typically free of treatment-limiting adverse effects; if treatment affects survival or if discontinuation of the drug is related to the risk of the study outcome, such survivors are not comparable with nonusers, which results in a biased comparison. This differential survival bias is more obvious when treatment has a short-term effect, for example, the hazardous effects of postmenopausal hormone replacement therapy (HRT) on MI.28

However, this bias can be easily prevented by defining an enrollment date (similar to a trial), which is a date at which eligibility is checked and exposure/treatment is assessed. The investigators can then restrict the eligible participants to those who have not received treatment before this enrollment date and compare initiators (ie, genuine incident users) with noninitiators. In most clinical applications, patients who have used the treatment before can become eligible again if they have stopped the treatment for a sufficiently long period of time (often called a washout period).
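
The new-user restriction with a washout period amounts to a simple eligibility check at the enrollment date. The following Python fragment is an illustrative sketch only (the report's accompanying macro is written in SAS); the function name and the 1-year washout are assumptions chosen for the example:

```python
from datetime import date, timedelta

def eligible_as_new_user(enrollment_date, last_fill_date, washout_days=365):
    """A patient qualifies as an incident (new) user at enrollment if they
    never filled the drug, or their last fill was at least `washout_days`
    before the enrollment date (the washout period)."""
    if last_fill_date is None:  # never treated
        return True
    return (enrollment_date - last_fill_date) >= timedelta(days=washout_days)

# A past user whose last fill was 2 years ago becomes eligible again:
print(eligible_as_new_user(date(2000, 1, 1), date(1998, 1, 1)))   # True
# A fill 3 months before enrollment makes the patient ineligible:
print(eligible_as_new_user(date(2000, 1, 1), date(1999, 10, 1)))  # False
```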

Measuring potential confounders at the index date is also problematic, because it may introduce bias if confounders are affected by prior treatment as has been shown29 and illustrated using causal diagrams.30-32 It has been shown that conventional analytic methods will fail when these time-dependent confounders are affected by prior treatment,33 and only the family of g-methods, such as inverse probability weighting (IPW), can incorporate such adjustment without introducing bias.30 A simple solution is to use measurements of confounders before enrollment date as defined in the previous paragraph.

To bridge the gap between nested case-control studies with high-quality data but bias-prone analytic methods and cohort studies that use causal inference methods with data subject to measurement error, we developed novel methods to emulate the design and analysis of a target trial using case-control data.25,34 This conceptualization (Figure 2) reduces the potential for bias by assigning clear enrollment dates for cases and controls, thereby allowing for (1) selecting incident users to prevent the prevalent user or differential survival bias; and (2) measuring confounders before the observed treatment to prevent bias due to adjustment for confounders affected by prior treatment.

Figure 2. Schematic Diagram of the Cases and Controls in Figure 1 When the Case-Control Study is Conceptualized as Emulating a Target Trial.

To implement such analyses, one would have to collect longitudinal data on all cases and selected controls and generate a set of emulated nonrandomized trials. Once the structure of the emulated trials has been created (as detailed in Section 2 of Appendix A), one can estimate the observational analogue of the intention-to-treat (ITT) effect and use IPW to estimate the analogue of a per-protocol effect. We previously developed and applied these methods for cohort designs using data from prospective studies25 and EMRs,35 but here we describe how similar methods can be applied to case-control studies.

For our clinical example, we implemented these methods using data from Kaiser Permanente Washington (KPWA, formerly Group Health), a large integrated health care delivery system in the northwest United States. This setting provided a great opportunity because we had access to both the full set of electronic health data, which we used to create a cohort study, and data from previously conducted nested case-control studies.36,37 Our team had already validated the diagnoses of >3500 MI cases and verified the absence of prior MI among 9000 matched controls. The combination of the 2 data sources allowed us to compare the results of an analysis using unvalidated outcomes and a cohort design (Section 2 of Appendix A) with those of an analysis using validated cases and a case-control design (Section 3 of Appendix A).

This report is structured as follows: we first describe the methods and results of our interviews with 20 clinicians, scientists, and data analysts regarding the current biases in case-control analyses and the potential use of novel methods to emulate a trial. Within the Methods section, we first describe the protocol of the target trial that we have emulated using observational data. The sections on emulating the target trial using EMR then explain how one can use a cohort or case-control design to emulate this trial and estimate the ITT and per-protocol effects. In each section, we also present the results of our clinical example.

Stakeholder Engagement

Engagement of patients and health care stakeholders is increasingly recognized as essential to answer questions of importance to patients and their caregivers when planning a new study.38 Because our proposal intended to develop and apply advanced analytic methods, patient involvement was not appropriate. Instead, we identified investigators and data analysts as our main stakeholders and engaged them throughout the process of research. The engagement of stakeholders across the phases of this study allowed us to obtain feedback on the relevance of research results for decision-making and improving our guideline for using administrative data sets and EMRs in nested case-control studies. We conducted 2 semistructured interviews with each of the stakeholders: (1) at an early stage of the project, and (2) after developing the draft methods and guideline (end of year 1). A summary of study procedures and the results for each set of interviews are presented in Appendix B.

Summary of Stakeholder Engagement

In the first set of interviews, participants supported the idea that developing advanced methods to analyze case-control studies as if they arose from a ‘target trial’ would be a major contribution to advance comparative effectiveness research (CER). Participants emphasized that the conceptualization of the methods is the most important part of the work and suggested avoiding technical language in the guideline. They consistently suggested that providing a clinical example with a data set along with the SAS macro, explaining the methods step by step, and illustrating the procedure using diagrams would be helpful for future users. We incorporated most of their suggestions in the development of the guideline; however, some of their suggestions were beyond the scope of this project (eg, expanding the methods for case-cohort design). Our team will consider these topics for future projects.

In the second set of interviews, participants supported the idea that a guideline and analytical code would be a major contribution to advance CER. Participants reported that the guideline was well-written and covered most of the important points. They suggested adding 2 sections at the beginning of the guideline: the intended audience for the document and how to use the document. Most participants emphasized that providing a simulated toy data set along with a “macro call” would be a great help to future users. Participants also suggested that further clarifications on data structure would be helpful, and the visualization of tables and guideline could be improved. We incorporated most of their suggestions in improving the guideline; however, some of their suggestions were beyond the scope of this project and we did not have enough time or resources to apply them (eg, creating a toy data set or using alternative approaches for handling missing data).

Methods

The first step in emulating the design and analysis of a target trial is to draft a detailed protocol for it. We have provided general guidance on different sections of the protocol in the accompanying guideline (Section 2.1 of Appendix A). Here, we outline the protocol for the target trial that was emulated in our clinical example.

Target Trial for the Clinical Example

The target trial we emulated aimed to evaluate the effect of statin therapy on fatal or nonfatal MI among participants with no history of MI or other major comorbidities (ie, stroke, stable and unstable angina pectoris, coronary artery bypass graft [CABG], and peripheral artery disease [PAD]) (Table 1). We modeled our target trial protocol based on the protocol for the JUPITER trial (Table 1) with some important modifications.39 The JUPITER trial was designed to assess whether apparently healthy persons with normal levels of low-density lipoprotein (LDL) cholesterol, but with high levels of C-reactive protein (CRP) at ≥2 mg/L, would benefit from taking rosuvastatin. After a 4-week placebo run-in period, 17 802 men aged ≥50 years and women aged ≥60 years who met the study eligibility criteria were randomly assigned to receive either rosuvastatin 20 mg/day or matching placebo. Participants were expected to be followed for 5 years. The trial was terminated early at a median follow-up of 1.9 years, after 142 first major cardiovascular events had been observed in the treatment group and 251 in the placebo group, due to clear evidence of benefit with rosuvastatin compared with placebo. In the ITT analysis, rosuvastatin significantly reduced the incidence of fatal and nonfatal MI (hazard ratio [HR], 0.46; 95% CI, 0.30-0.70).

Table 1

Summary of the Protocols of a Target Trial and Emulated Trials to Estimate the Effect of Statin Use on the Risk of MI or Death From MI.

Our more pragmatic target trial is in some important aspects different from JUPITER (Table 1). We included a wider age range (patients aged 30-79.5 years) and did not restrict the population to those with normal LDL cholesterol or high CRP to increase generalizability. We had a much longer follow-up time (20 years maximum follow-up, compared with a median of 2 years). Our target trial compared initiating any statin (ie, fluvastatin, lovastatin, pravastatin, simvastatin, atorvastatin, pitavastatin, or rosuvastatin) at a low, moderate, or high dose vs not initiating statins and otherwise receiving usual care, whereas JUPITER compared a specific dose of rosuvastatin with placebo. Participants in JUPITER underwent a run-in phase and only those with acceptable compliance were included. We could not emulate the run-in period; therefore, adherence was much lower in our target trial. Loss to follow-up was much lower in JUPITER than in our emulated trial. JUPITER excluded patients with diabetes, hyperthyroidism, and liver diseases, as well as medication use (eg, hormone therapy), whereas we included these patients in our target trial to achieve adequate precision of the estimate and adjusted for these factors in the analysis.

Similar to JUPITER, our target trial was restricted to patients with no prior statin use, and no history of inpatient or outpatient MI (ICD-9 codes 412 and 410) or other major comorbidities (stroke, stable and unstable angina pectoris, CABG, and PAD). The primary end point of our target trial was the first occurrence of an acute hospitalized MI (ICD-9 code 410.XX) or death due to MI (based on ICD-10 codes or any death within 28 days after a nonfatal MI). The diagnosis of MI would be validated using medical record reviews. The ITT effect would be estimated by comparing the risk of MI in the treatment arm with that of the control arm, which required that patients be analyzed in the groups to which they were assigned (statin initiation or noninitiation), even if they deviated from their assigned treatment strategy after randomization. However, the ITT effect depends heavily on the magnitude and determinants of adherence to treatment. To account for imperfect adherence, one can estimate the per-protocol effect, that is, the effect that would have been observed if all individuals had followed their assigned treatment strategy.

Emulating the Target Trial Using EMR: Cohort Design

We have explained in detail elsewhere35,40 and in the accompanying guideline how one can emulate the target trial using a cohort design (Section 2.2 of Appendix A). In the simplest form, one can assign the entire period of data that is available via the database as the enrollment period of a single nonrandomized trial. The analyst can then assess eligibility for all patients in the database; as soon as a patient becomes eligible, the analyst can enroll them in the trial, assess their confounders before enrollment, and check their assigned treatment. These patients can then be followed up using the database for first occurrence of censoring, death, or event. This simple approach allows each patient to contribute to at most 1 trial. A more efficient approach would be to use a much shorter duration for the enrollment period, say 1 month, and for each of these periods to create a small nonrandomized trial using the database. In this modified approach, each patient can contribute to more than 1 trial, and the within-person correlation of these observations can be adjusted for using the sandwich variance estimator, which appropriately inflates the variance.

Within each enrollment month (eg, January 1994), one can identify eligible patients for an emulated trial by assessing eligibility in that month. Often, an important eligibility criterion is that patients should be enrolled in the health system at least 1 year before the enrollment month. In addition, patients should not have used the treatment of interest within the recent past and must have had a recent (eg, within the past 6 months) measurement of confounders. Eligible patients will then be divided into initiators and noninitiators depending on whether or not they filled a prescription for the drug of interest within the enrollment month, and will be followed using EMR data for the occurrence of event, death, loss to follow-up, or administrative end of follow-up. A subsequent trial would enroll patients during February 1994, another during March 1994, and so on. Using this process, a sequence of nested emulated trials can be created. We hereafter refer to each copy of a patient that gets enrolled in a month as a person-trial. For each person-trial, one should ensure that confounders are measured before treatment initiation by assigning a treatment date. For initiators, the treatment date can be naturally set to the date of the filled prescription. For noninitiators, the choice is rather arbitrary, and one can use the first, middle, or last day of the enrollment month. Covariate information for each person-trial should be derived from database records in the 30-day period before that trial's treatment date.
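
The monthly-trial construction described above can be sketched in code. The Python fragment below is a hypothetical illustration only (the report's analyses use a SAS macro): the record layout (`plan_start`, `fills`, `covariate_dates`) is invented for the example, while the 1-year membership requirement, 1-year washout, and 6-month covariate window mirror the example criteria in the text.

```python
from datetime import date

def month_starts(first, last):
    """Yield the first day of each enrollment month from `first` to `last`."""
    y, m = first.year, first.month
    while (y, m) <= (last.year, last.month):
        yield date(y, m, 1)
        m += 1
        if m == 13:
            y, m = y + 1, 1

def person_trials(patient, first_month, last_month, washout_days=365):
    """Emit one person-trial per enrollment month in which the patient is
    eligible: a health-plan member for >= 1 year, no statin fill within the
    washout period before the month, and a confounder measurement in the
    past 6 months. `patient` is a dict with keys 'id', 'plan_start',
    'fills' (fill dates), and 'covariate_dates' -- a hypothetical layout."""
    trials = []
    for m0 in month_starts(first_month, last_month):
        if (m0 - patient["plan_start"]).days < 365:
            continue  # not yet 1 year of health-plan enrollment
        if any(0 < (m0 - f).days <= washout_days for f in patient["fills"]):
            continue  # statin fill within the washout period: not a new user
        if not any(0 <= (m0 - c).days <= 183 for c in patient["covariate_dates"]):
            continue  # no recent confounder measurement
        fills_this_month = [f for f in patient["fills"]
                            if (f.year, f.month) == (m0.year, m0.month)]
        initiator = bool(fills_this_month)
        # treatment date: first fill for initiators, first of month otherwise
        treat_date = min(fills_this_month) if initiator else m0
        trials.append({"id": patient["id"], "trial_month": m0,
                       "initiator": initiator, "treat_date": treat_date})
    return trials
```

For a patient who fills a first statin prescription in June 1994, this yields noninitiator person-trials for the preceding eligible months, one initiator person-trial in June, and no further trials while the washout criterion excludes them.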

Once the sequential nested trials are generated using the above process, the observational analogue of the ITT effect can be estimated using a Cox proportional hazard model fit to the pooled data from all person-trials. Alternatively, as information for time-varying covariates will be needed to adjust for censoring due to loss to follow-up in ITT analysis or due to protocol deviations in per-protocol analyses, one can expand each person-trial to include follow-up time (see details in the guideline) and fit a pooled logistic regression model to the expanded data set. The OR from this model will approximate the HR from the Cox proportional hazard model with the only difference being that the intercept is now being estimated rather than conditioned out of the likelihood. We suggest using a flexible functional form for the intercept to allow changes in disease rate over time.
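
The expansion of each person-trial into discrete follow-up intervals for the pooled logistic regression can be sketched as follows. This is a minimal, hypothetical Python illustration of the data step only; the model would then be fit to the stacked rows, with flexible terms for the follow-up month standing in for the Cox model's baseline hazard.

```python
def expand_person_trial(trial_id, followup_months, event_month=None, censor_month=None):
    """Expand one person-trial into person-month rows for a pooled logistic
    regression. `event_month` and `censor_month` are 1-based follow-up
    months; rows stop at whichever comes first. Each row carries the
    follow-up month so the intercept can vary flexibly over time."""
    rows = []
    for k in range(1, followup_months + 1):
        event = 1 if event_month == k else 0
        rows.append({"trial": trial_id, "month": k, "event": event})
        if event or censor_month == k:
            break  # follow-up ends at the event or at censoring
    return rows
```

A person-trial with an event in month 3 contributes three rows (event indicator 0, 0, 1); one censored in month 4 contributes four all-zero rows; time-varying covariates for censoring weights would be merged onto these rows by month.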

For our clinical example, we emulated a target trial using observational EMR data from KPWA (formerly Group Health). KPWA is a large, integrated health care system in the northwest United States. KPWA data have also been used for a series of ongoing case-control studies of incident MI, stroke cases, and sudden cardiac arrest cases with a shared common control group.36,37,41 Therefore, this setting provided a great opportunity because we had access to both the EMR data, which were used to create a cohort study, and the data from previously conducted nested case-control studies. We used data from January 1, 1993, through December 31, 2014, with the first year of data being used solely to evaluate eligibility.

To emulate the target trial using a cohort design, we identified 175 000 potentially eligible patients in the KPWA data sets. KPWA maintains computerized data on diagnoses, hospitalizations, procedures, outpatient visits, laboratory results, vital signs, and prescriptions. Information about statin prescription fills was derived from a pharmacy data set, which included all outpatient prescription fills at KPWA pharmacies as well as prescription claims submitted by outside pharmacies. Pharmacy data included a unique patient identifier, medication name, strength, route of administration, date dispensed, quantity dispensed, and days supplied. We chose a large set of potential confounders based on a causal graph (Figure 3). However, similar to other EMR data sets, we did not have data on diet or physical activity, which may lead to some unmeasured confounding. Data on blood pressure and body mass index (BMI) were available from 2005 onward. Therefore, we did not include blood pressure in the main analyses. However, we conducted sensitivity analyses by restricting enrollment to 2005 onward and additionally adjusting for blood pressure and BMI. The standard practice for dyslipidemia screening within this health system for much of the 1990s and early 2000s was to use non-high-density lipoprotein (HDL) cholesterol rather than LDL cholesterol, leading to high rates of missing information on LDL cholesterol (30% across years of follow-up; see Appendix B Figures 1 and 2). We adjusted for total and HDL cholesterol in the main analysis and additionally for LDL cholesterol in a sensitivity analysis.

Figure 3. DAG for the Relationship Between Initiation of Statin Therapy and MI.

Compared with the target trial described in Table 1, our emulated trials using EMR differed in the following ways:

  • Statin initiation was not assigned at random. Patients with worse prognostic factors (eg, a worse lipid profile, older age, comorbidities) may have been more likely to initiate statins. On the other hand, patients who initiated statin treatment for primary prevention may have been healthier in unmeasured ways.
  • Diagnosis of MI was based on ICD codes, leading to potential error in case ascertainment as well as potential differences in the baseline population (ie, inclusion of patients with MI before study entry).
  • Nonadherence to treatment as measured by absence of refills was much more common; therefore, the ITT effect was closer to the null.
  • Patients received a wide range of dosages of statin therapy compared with the target trial. We conducted sensitivity analyses based on intensity of statin therapy.

Emulating the Target Trial Using Case-Control Design

Having briefly reviewed how the target trial can be emulated using a cohort design, we emphasize again that a case-control design is just a way of sampling from the underlying cohort and allows estimation of the same proportional effects. Specifically, under an incidence density or risk-set sampling, the OR from a case-control study approximates the incidence rate ratio from the underlying cohort study.21,23 However, measures of absolute risk such as risk or rate differences or survival probabilities cannot be directly estimated from a case-control study unless the control population is a random sample of the target population for inference, in which case methods of standardization can be used to estimate absolute risks.
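
A small numerical illustration of this equivalence, using hypothetical expected counts: when controls are sampled in proportion to person-time (incidence density sampling), the exposure OR from the case-control sample equals the incidence rate ratio from the underlying cohort in expectation.

```python
# Person-time and case counts in a hypothetical underlying cohort
pt_exposed, pt_unexposed = 40_000.0, 60_000.0  # person-years
cases_exposed, cases_unexposed = 80, 180

# Incidence rate ratio in the full cohort
irr = (cases_exposed / pt_exposed) / (cases_unexposed / pt_unexposed)

# Incidence density sampling: expected control counts are proportional to
# person-time; here, 4 controls per case
n_controls = 4 * (cases_exposed + cases_unexposed)
controls_exposed = n_controls * pt_exposed / (pt_exposed + pt_unexposed)
controls_unexposed = n_controls - controls_exposed

# Exposure odds ratio from the case-control sample
odds_ratio = (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)

print(round(irr, 3), round(odds_ratio, 3))  # 0.667 0.667 -- identical
```

No rare disease assumption is used: the equality holds because the control series estimates the person-time distribution, not the noncase population.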

We also note that the difference between the risk-set and incidence density sampling is that in risk-set sampling, controls are matched to cases by the index date (creating risk-sets similar to those used in survival analysis using a Cox proportional hazard model), whereas in incidence density sampling, controls are sampled from the entire pool of person-time irrespective of the index date for the case. For causal case-control analyses, we suggest using incidence density sampling, as this simplifies the sampling process. In Tables 2a and 2b, we explain the step-by-step process of creating a data set of nested sequential trials to emulate a target trial using a case-control design. Section 2.3 in the accompanying guideline (Appendix A) provides more detail on the data structure and how to conduct the analyses.

Table 2a

Main Steps to Prepare the Analytical Data Set and Find Case Person-Trials.

Table 2b

Main Steps to Select Control Person-Trials and Create Analytic Data Set.

This process allows users to conduct medical record reviews to validate measurements of eligibility, exposure, confounders, and outcome among cases and sampled controls. In our clinical example, the outcomes had already been validated using medical record review among cases and controls, and we extracted additional data from EMR on eligibility, treatment, and confounders.

Once the analytical data set is created, the analogue of the ITT and per-protocol effect can be estimated using similar methods explained for the cohort design, with 2 important differences. First, in instances where cases and controls were matched on a set of covariates, the same covariates should be included in the outcome model for all analyses (and preferably in the same functional form). Second, for per-protocol analyses, the IPWs should be estimated in the control population but applied to both cases and controls.42
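
The second point can be sketched with a toy example. In practice the adherence probabilities would come from a regression model with stabilized weights; the stratified Python version below is a deliberately simplified, hypothetical illustration of the mechanics: probabilities are estimated among control person-intervals only, and the resulting cumulative inverse-probability weight is then applied to any subject's history, case or control.

```python
from collections import defaultdict

def adherence_probs(control_intervals):
    """Estimate P(remaining adherent | covariate stratum) from CONTROL
    person-intervals only, per the case-control IPW approach (weights are
    estimated in controls but applied to both cases and controls)."""
    num, den = defaultdict(int), defaultdict(int)
    for iv in control_intervals:
        den[iv["stratum"]] += 1
        num[iv["stratum"]] += iv["adherent"]
    return {s: num[s] / den[s] for s in den}

def ip_weight(history, probs):
    """Cumulative inverse-probability-of-adherence weight over one subject's
    uncensored (adherent) follow-up history of covariate strata."""
    w = 1.0
    for stratum in history:
        w /= probs[stratum]
    return w

# Toy control data: 3 of 4 'low'-stratum intervals adherent, 1 of 2 'high'
controls = ([{"stratum": "low", "adherent": 1}] * 3
            + [{"stratum": "low", "adherent": 0}]
            + [{"stratum": "high", "adherent": 1}]
            + [{"stratum": "high", "adherent": 0}])
probs = adherence_probs(controls)
print(probs, ip_weight(["low", "low"], probs), ip_weight(["high"], probs))
```

A case with two adherent 'low' intervals receives the same weight formula as a control with that history, even though cases contributed nothing to the probability estimates.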

For our clinical example, we used data from Heart and Vascular Health (HVH), which is a population-based nested case-control study among enrollees of KPWA (1994-2010).36,37,41 All study participants were KPWA members between the ages of 30 and 79.5 years. MI cases were identified from hospital discharge diagnosis codes and were validated by medical record review (3500 cases); absence of a prior MI diagnosis among the controls was also verified by medical record review. In the original HVH study, controls were frequency matched to cases based on age (within decade), sex, treated hypertension, and calendar year of event (9000 controls). Information on statin use and potential confounders was also collected from medical record reviews, but because this information was collected around the index date (ie, the date of case-control sampling), we obtained additional data from KPWA data sets. We emulated each target trial as a sequence of trials that started at each of the 198 months between January 1994 and June 2010, similar to the cohort design; we then pooled all these trials and fit a single model for the outcome after adjusting for the enrollment month.

Results

Cohort Design Using EMR

We identified 175 000 potentially eligible patients in the KPWA data sets. Of those, 72 620 were eligible for at least 1 of the emulated monthly trials (Figure 4). The main reasons for exclusion were having used statins in the past year and not having data on selected potential confounders in the past 6 months. Overall, characteristics of ineligible and eligible patients were similar. However, eligible patients included a higher proportion of statin initiators, antihypertensive users, and individuals with other diagnosed comorbidities, including diabetes and hypertension (Appendix B: Table 1).

Figure 4. Flowchart of Person-Trials in the Analysis Using EMR With Cohort Design (1994-2014).


The eligible patients contributed 952 732 emulated person-trials (an average of 13 trials per eligible person). A total of 1.4% of the eligible person-trials initiated statin therapy. During an average follow-up of 73 months, we observed 16 928 cases of fatal or nonfatal MI (contributed by 1318 unique people) and 28 323 non-MI deaths (2183 unique people). Approximately 30% of person-trials were lost to follow-up, mainly due to disenrollment from the KPWA health care system.

Statin initiators were generally less healthy than were noninitiators, indicating potential for positive confounding by indication (Table 3). They were on average older and had a higher prevalence of diabetes and smoking as well as higher mean BMI, total cholesterol, LDL cholesterol, and blood pressure.

Table 3. Baseline Characteristics of Eligible Initiators and Noninitiator Person-Trials of Statin Therapy: EMR With Cohort Design, 1994-2014.

The ITT HR was 1.01 (95% CI, 0.91-1.12) after adjusting for baseline confounders (Table 4). Further adjustment for BMI and blood pressure values, which were available only from 2005 onward, did not change the results. The results remained null after further adjustment for LDL cholesterol (Appendix B: Table 3). We did not observe any significant interactions between treatment and age, sex, or calendar year. Using a Cox proportional hazards model and only 1 observation per person (the first time they became eligible), we estimated an HR of 1.02 (95% CI, 0.73-1.44). We also conducted a case-control analysis nested within these data (randomly selecting 5 controls for each case): the estimated ITT HR was 1.00 (95% CI, 0.90-1.10) (Appendix B: Table 3).

Table 4. Association Between Statin Therapy and Risk of Fatal and Nonfatal MI: EMR With Cohort Design, 1994-2014.

Adherence to assigned treatment was rather low, with 43% of statin initiators apparently discontinuing treatment within 1 year and 70% within 5 years. Also, 5% of noninitiators started treatment within 1 year and 23% within 5 years (Figure 5).

Figure 5. Adherence to Treatment by Initiation Status: EMR With Cohort Design, 1994-2014.


The per-protocol HR after censoring nonadherent person-trials and adjusting for determinants of adherence using IPW was 0.80 (95% CI, 0.67-0.94).

Case-Control Design Using Validated Cases and Controls

We emulated each target trial as a sequence of trials starting at each of the 198 months between January 1994 and June 2010, using data on validated cases and controls from the HVH study supplemented by EMR data from KPWA. We used incidence density sampling to select 5 controls per case. The average duration of follow-up was 30 months for initiators and 37 months for noninitiators.

Of the 10 128 cases and controls in the HVH study, 4724 met eligibility criteria for at least 1 of the emulated monthly trials (Figure 6). There were 15 263 eligible cases (contributed by 1221 unique cases), and we sampled 5 controls for each case based on incidence density sampling (n = 76 315). Compared with ineligible patients, eligible patients had a slightly higher risk profile, with a higher prevalence of comorbidities (Appendix B: Table 2). A total of 1.9% of the eligible case person-trials and 1.5% of sampled control person-trials initiated statin therapy.

Figure 6. Flowchart of Person-Trials in the Analysis Using EMR With Case-Control Design Incidence Density Sampling: All Validated Cases and Sampled Validated Controls, 1994-2010.


Statin initiators were generally less healthy than noninitiators, indicating potential for positive confounding by indication (Table 5). They were on average older and had a higher prevalence of diabetes and smoking as well as higher mean BMI, total cholesterol, LDL cholesterol, and blood pressure.

Table 5. Baseline Characteristics of Eligible Initiators and Noninitiator Person-Trials of Statin Therapy: EMR With Case-Control Design and Sampled Validated Cases and Controls, 1994-2010.

The pooled logistic regression model for MI that adjusted for baseline confounders among all eligible person-trials gave an ITT OR of 0.80 (95% CI, 0.69-0.92; Table 6). Adherence to assigned treatment was low, with 41% of statin initiators discontinuing treatment within 1 year and 64% within 5 years. Similarly, 8% of noninitiators started treatment within 1 year and 38% within 5 years (Figure 7).

Table 6. Association Between Statin Therapy and Risk of Fatal and Nonfatal MI: EMR With Case-Control Design and Validated Cases and Controls, 1994-2010.

Figure 7. Adherence to Treatment by Initiation Status: EMR With Case-Control Design Among Validated Sampled Controls, 1994-2010.


The per-protocol OR after censoring nonadherent person-trials and adjusting for determinants of adherence using IPW was 0.71 (95% CI, 0.58-0.87).

Comparison of Different Analytical Methods and Data Sets

To make the follow-up time consistent between the cohort analysis using EMR and the case-control analysis with validated cases and controls, we truncated the follow-up time in the cohort analysis at December 2010. Figure 8 shows the summary of results obtained by using different methods and data sets.

Figure 8. Effect of Statin Therapy on Risk of Fatal and Nonfatal MI Using EMR With Validated and Nonvalidated Cases and Controls, 1994-2010.


Discussion

We developed and presented methods that combine high-quality data from case-control studies with advanced methods for emulating the design and analysis of a target trial, a strategy that can substantially reduce the potential for bias in observational comparative effectiveness studies. To help PCOR and other CER investigators implement similar analyses, we present a detailed guideline (Appendix A), accompanying SAS programs, and a clinical example. Our stakeholder interviews pointed to the potential importance of these methods in limiting bias in CER.

In theory, any causal relationship that can be studied with a cohort design can also be studied with a case-control design and appropriate sampling. Well-designed case-control studies are more efficient for validating or collecting additional data because they require doing so only on a subsample rather than on the entire cohort population. The results of our clinical example showed that the proposed causal case-control methods using validated cases and controls can provide estimates of the ITT effect of statins on acute MI that are consistent with those observed in meta-analyses of RCTs. Specifically, we estimated an ITT OR of 0.80 (95% CI, 0.69-0.92) compared with an HR of 0.69 (95% CI, 0.60-0.79) reported in a meta-analysis of primary prevention trials.27 The smaller protective effect estimated in the observational analysis could be explained by unmeasured confounding, much longer follow-up time, and higher rates of nonadherence in our study population compared with those enrolled in clinical trials.

In contrast, both the ITT estimate from the causal cohort analysis and the estimate from the conventional case-control analysis were inconsistent with the result of the meta-analysis of RCTs. In the causal cohort analysis, which used ICD codes to define the outcome, the ITT HR was 1.01 (95% CI, 0.91-1.12). The null result in the cohort ITT analysis was partly due to imperfect adherence: the per-protocol HR was 0.80 (95% CI, 0.67-0.94) after censoring nonadherent person-times and adjusting for time-varying determinants of adherence using IPW. In addition, ICD codes had imperfect sensitivity and specificity for detecting MI. Comparing case/control status per ICD codes with status validated by medical record review, sensitivity was 95% among noninitiators and 96% among initiators, and specificity was 98% in both groups. These values were similar to previously published results based on HVH data.43 Even such a small imperfection in specificity creates a substantial bias toward the null; for example, it can render an observed relative risk of 0.93 when the true effect size is 0.69. The conventional case-control analysis, which used validated case/control status but covariate and treatment measures at the index date, produced an OR for statin use of 1.12 (95% CI, 0.96-1.31) (Appendix B: Table 4). The increased risk among statin users in that analysis could be due to a combination of differential survival (or prevalent user) bias and collider stratification bias (due to adjustment for confounders measured at the index time).
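The specificity argument above can be checked with a few lines of arithmetic. With a rare outcome, false positives accrue almost equally in both arms, so even 98% specificity pulls a true relative risk of 0.69 toward the null. The baseline risk used below is an assumed illustrative value chosen to reproduce the 0.93 figure, not a number taken from the report.

```python
# Misclassification arithmetic: observed relative risk under imperfect
# sensitivity and specificity of the outcome definition.

def observed_rr(true_rr, baseline_risk, sens, spec):
    p0 = baseline_risk              # true risk, untreated
    p1 = true_rr * baseline_risk    # true risk, treated
    obs0 = sens * p0 + (1 - spec) * (1 - p0)  # apparent risk, untreated
    obs1 = sens * p1 + (1 - spec) * (1 - p1)  # apparent risk, treated
    return obs1 / obs0

# Assumed baseline risk of 0.65% over follow-up (illustrative only).
rr = observed_rr(true_rr=0.69, baseline_risk=0.0065, sens=0.95, spec=0.98)
print(round(rr, 2))  # 0.93 -- badly attenuated toward the null
```

Setting `sens=1.0` and `spec=1.0` in the same function recovers the true relative risk of 0.69, confirming that the attenuation comes entirely from the misclassification.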

Strengths and Limitations

The proposed methods have several major strengths compared with the conventional analytical methods used in case-control studies, which evaluate eligibility, treatment, and confounder information at the index date. Emulating the design and analysis of a target trial helps reduce bias due to differential survival (also known as prevalent user bias) by defining the intervention as initiating treatment among those who were not treated at study enrollment. In addition, using measures of confounders from before treatment assignment prevents inappropriate adjustment for factors that may be affected by prior treatment, which can lead to collider stratification bias. Furthermore, drafting a protocol for the target trial can clarify the eligibility criteria and the treatment and outcome definitions, leading to a much more intuitive interpretation of the estimated effect sizes in comparison with those of RCTs. Compared with prior methods proposed to emulate the design and analysis of a target trial using a cohort design, the case-control design allows future PCOR and CER researchers to focus resources on collecting high-quality (and expensive) measures of exposure, confounders, and outcomes efficiently. Our clinical example illustrates the benefit of implementing these methods compared with either using EMR data without validation or using validated data from a conventional case-control study.

The proposed methods have several limitations and may not be applicable in all settings. First, the methods require time-varying data on eligibility, exposure, and confounders. The analysis therefore cannot be implemented in case-control studies that measured such factors only at the index date or only sporadically before it; additional data collection must be planned at the outset. In our clinical example, we resolved this problem by pulling additional data on these factors from the EMR data sets for our selected cases and controls. However, even in a high-quality EMR system such as ours, information on major potential confounders may not be available for all potentially eligible individuals during the study period. Therefore, investigators must either limit the eligible population to those with recent measurements of major confounders, which reduces sample size, power, and generalizability, or use imputation models, which introduce additional uncertainty.

Second, the proposed structure of sequential nested trials, which maximizes the use of available data and thereby improves the precision of the effect estimates, is conceptually complicated and computationally intensive. In our clinical example, we created almost 1 million person-trials, each with a potential follow-up of 250 months. The analytical data sets are therefore rather large and may require a dedicated server. We have provided guidance on how to run the code in several batches, especially when the analytical data set is large. In addition, the proposed IPW to adjust for imperfect adherence and differential loss to follow-up is sensitive to violations of the positivity assumption. Such violations are more common with longer durations of follow-up and may lead to undue influence of a few observations. We propose several ways to reduce the potential for such violations in the accompanying guideline (Appendix A).
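One common guard against the positivity problem noted above is truncating extreme inverse-probability weights at a chosen percentile. The sketch below uses a simple rank-based percentile and an 80th-percentile cutoff purely for illustration; the specific remedies in Appendix A may differ.

```python
# Sketch: truncate extreme IP weights at a percentile cutoff so that a
# few near-positivity-violating observations cannot dominate the analysis.

def truncate_weights(weights, pct=99):
    ranked = sorted(weights)
    # simple rank-based percentile index (illustrative, not interpolated)
    cut = ranked[int(round(pct / 100 * (len(ranked) - 1)))]
    return [min(w, cut) for w in weights]

w = [1.1, 0.9, 1.3, 25.0, 1.0]   # one extreme weight from a rare stratum
print(truncate_weights(w, pct=80))  # [1.1, 0.9, 1.3, 1.3, 1.0]
```

Truncation trades a small amount of residual confounding bias for a large reduction in variance; the cutoff percentile is an analyst's choice and should be reported.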

Finally, the proposed methods, especially the sequential nested trial structure combined with time-varying IPW, are conceptually and analytically complicated. Therefore, many scholars prefer simpler methods, which may be inefficient or biased. The accompanying guideline (Appendix A) aims to address this barrier by providing detailed guidance on how to conceptualize the methods, prepare the analytical data sets, and conduct the analysis using the provided SAS macro.

Conclusions

PCOR and CER can benefit substantially from combining high-quality data and validated measures from case-control studies with the novel methods presented here, which allow researchers to emulate the design and analysis of a target trial. The accompanying guideline and analytical code (Appendix A) allow scholars to implement these analyses. Stakeholder engagement showed substantial interest in using these methods and allowed us to fine-tune our guideline and code.

References

1. Hernán MA. With great data comes great responsibility: publishing comparative effectiveness research in epidemiology. Epidemiology. 2011;22(3):290-291. [PMC free article: PMC3072432] [PubMed: 21464646]
2. Weiss NS. The new world of data linkages in clinical epidemiology: are we being brave or foolhardy? Epidemiology. 2011;22(3):292-294. [PubMed: 21464647]
3. Ray WA. Improving automated database studies. Epidemiology. 2011;22(3):302-304. [PubMed: 21464650]
4. Garcia Rodriguez LA, Ruigomez A. Case validation in research using large databases. Br J Gen Pract. 2010;60(572):160-161. [PMC free article: PMC2828828] [PubMed: 20202361]
5. Hernán MA, Jick SS, Olek MJ, Jick H. Recombinant hepatitis B vaccine and the risk of multiple sclerosis: a prospective study. Neurology. 2004;63(5):838-842. [PubMed: 15365133]
6. Khan NF, Harrison SE, Rose PW. Validity of diagnostic coding within the General Practice Research Database: a systematic review. Br J Gen Pract. 2010;60(572):e128-e136. doi:10.3399/bjgp10X483562 [PMC free article: PMC2828861] [PubMed: 20202356] [CrossRef]
7. Garcia Rodriguez LA, Barreales Tolosa L. Risk of upper gastrointestinal complications among users of traditional NSAIDs and COXIBs in the general population. Gastroenterology. 2007;132(2):498-506. [PubMed: 17258728]
8. Margulis AV, Garcia Rodriguez LA, Hernández-Díaz S. Positive predictive value of computerized medical records for uncomplicated and complicated upper gastrointestinal ulcer. Pharmacoepidemiol Drug Saf. 2009;18(10):900-909. [PubMed: 19623573]
9. Gaist D, Wallander MA, Gonzalez-Perez A, Garcia-Rodriguez LA. Incidence of hemorrhagic stroke in the general population: validation of data from The Health Improvement Network. Pharmacoepidemiol Drug Saf. 2013;22(2):176-182. [PubMed: 23229888]
10. Dublin S, Walker RL, Jackson ML, Nelson JC, Weiss NS, Jackson LA. Use of proton pump inhibitors and H2 blockers and risk of pneumonia in older adults: a population-based case-control study. Pharmacoepidemiol Drug Saf. 2010;19(8):792-802. [PMC free article: PMC2938739] [PubMed: 20623507]
11. Collin LJ, Riis AH, MacLehose RF, et al. Application of the adaptive validation substudy design to colorectal cancer recurrence. Clin Epidemiol. 2020;12:113-121. [PMC free article: PMC7007499] [PubMed: 32099477]
12. Holcroft CA, Spiegelman D. Design of validation studies for estimating the odds ratio of exposure-disease relationships when exposure is misclassified. Biometrics. 1999;55(4):1193-1201. [PubMed: 11315067]
13. Lyles RH, Tang L, Superak HM, et al. Validation data-based adjustments for outcome misclassification in logistic regression: an illustration. Epidemiology. 2011;22(4):589-597. [PMC free article: PMC3454464] [PubMed: 21487295]
14. Munger KL, Zhang SM, O'Reilly E, et al. Vitamin D intake and incidence of multiple sclerosis. Neurology. 2004;62(1):60-65. [PubMed: 14718698]
15. Egbring M, Kullak-Ublick GA, Russmann S. Phynx: an open source software solution supporting data management and web-based patient-level data review for drug safety studies in the general practice research database and other health care databases. Pharmacoepidemiol Drug Saf. 2010;19(1):38-44. [PubMed: 19777533]
16. Washington AE, Lipstein SH. The Patient-Centered Outcomes Research Institute--promoting better information, decisions, and health. N Engl J Med. 2011;365(15):e31. [PubMed: 21992473]
17. Platt R, Wilson M, Chan KA, Benner JS, Marchibroda J, McClellan M. The new Sentinel Network--improving the evidence of medical-product safety. N Engl J Med. 2009;361(7):645-647. [PubMed: 19635947]
18. Dublin S, Jackson ML, Nelson JC, Weiss NS, Larson EB, Jackson LA. Statin use and risk of community acquired pneumonia in older people: population based case-control study. BMJ. 2009;338:b2137. doi:10.1136/bmj.b2137 [PMC free article: PMC2697311] [PubMed: 19531550] [CrossRef]
19. Hippisley-Cox J, Coupland C. Effect of combinations of drugs on all cause mortality in patients with ischaemic heart disease: nested case-control analysis. BMJ. 2005;330(7499):1059-1063. [PMC free article: PMC557227] [PubMed: 15879390]
20. Varas-Lorenzo C, Garcia-Rodriguez LA, Perez-Gutthann S, Duque-Oliart A. Hormone replacement therapy and incidence of acute myocardial infarction. A population-based nested case-control study. Circulation. 2000;101(22):2572-2578. [PubMed: 10840007]
21. Rodrigues L, Kirkwood BR. Case-control designs in the study of common diseases: updates on the demise of the rare disease assumption and the choice of sampling scheme for controls. Int J Epidemiol. 1990;19(1):205-213. [PubMed: 2190942]
22. Miettinen O. Estimability and estimation in case-referent studies. Am J Epidemiol. 1976;103(2):226-235. [PubMed: 1251836]
23. Vandenbroucke JP, Pearce N. Case-control studies: basic concepts. Int J Epidemiol. 2012;41(5):1480-1489. [PubMed: 23045208]
24. Ray WA. Evaluating medication effects outside of clinical trials: new-user designs. Am J Epidemiol. 2003;158(9):915-920. [PubMed: 14585769]
25. Hernán MA, Alonso A, Logan R, et al. Observational studies analyzed like randomized experiments: an application to postmenopausal hormone therapy and coronary heart disease. Epidemiology. 2008;19(6):766-779. [PMC free article: PMC3731075] [PubMed: 18854702]
26. Schneeweiss S, Patrick AR, Sturmer T, et al. Increasing levels of restriction in pharmacoepidemiologic database studies of elderly and comparison with randomized trial results. Med Care. 2007;45(10 Suppl 2):S131-S142. [PMC free article: PMC2905666] [PubMed: 17909372]
27. Danaei G, Tavakkoli M, Hernán MA. Bias in observational studies of prevalent users: lessons for comparative effectiveness research from a meta-analysis of statins. Am J Epidemiol. 2012;175(4):250-262. [PMC free article: PMC3271813] [PubMed: 22223710]
28. Manson JE, Hsia J, Johnson KC, et al. Estrogen plus progestin and the risk of coronary heart disease. N Engl J Med. 2003;349(6):523-534. [PubMed: 12904517]
29. Robins JM. A new approach to causal inference in mortality studies with a sustained exposure period-application to control of the healthy worker survivor effect. Math Model. 1986;7(9-12):1393-1512.
30. Robins JM, Hernán MA. Estimation of the causal effects of time-varying exposures. In: Fitzmaurice G, Davidian M, Verbeke G, Molenberghs G, eds. Longitudinal Data Analysis. Chapman & Hall/CRC; 2009:553-599.
31. Hernán MA, Hernández-Díaz S, Robins JM. A structural approach to selection bias. Epidemiology. 2004;15(5):615-625. [PubMed: 15308962]
32. Greenland S. Quantifying biases in causal models: classical confounding vs collider-stratification bias. Epidemiology. 2003;14(3):300-306. [PubMed: 12859030]
33. Hernán MA, Robins JM. Estimating causal effects from epidemiological data. J Epidemiol Community Health. 2006;60(7):578-586. [PMC free article: PMC2652882] [PubMed: 16790829]
34. Dickerman BA, Garcia-Albeniz X, Logan RW, Denaxas S, Hernán MA. Emulating a target trial in case-control designs: an application to statins and colorectal cancer. Int J Epidemiol. 2020;49(5):1637-1646. [PMC free article: PMC7746409] [PubMed: 32989456]
35. Danaei G, Rodriguez LA, Cantero OF, Logan R, Hernán MA. Observational data for comparative effectiveness research: an emulation of randomised trials of statins and primary prevention of coronary heart disease. Stat Methods Med Res. 2013;22(1):70-96. [PMC free article: PMC3613145] [PubMed: 22016461]
36. Psaty BM, Heckbert SR, Koepsell TD, et al. The risk of myocardial infarction associated with antihypertensive drug therapies. JAMA. 1995;274(8):620-625. [PubMed: 7637142]
37. Psaty BM, Smith NL, Lemaitre RN, et al. Hormone replacement therapy, prothrombotic mutations, and the risk of incident nonfatal myocardial infarction in postmenopausal women. JAMA. 2001;285(7):906-913. [PubMed: 11180734]
38. Sox HC, Greenfield S. Comparative effectiveness research: a report from the Institute of Medicine. Ann Intern Med. 2009;151(3):203-205. [PubMed: 19567618]
39. Ridker PM, Danielson E, Fonseca FA, et al. Rosuvastatin to prevent vascular events in men and women with elevated C-reactive protein. N Engl J Med. 2008;359(21):2195-2207. [PubMed: 18997196]
40. Danaei G, Garcia Rodriguez LA, Cantero OF, Logan RW, Hernán MA. Electronic medical records can be used to emulate target trials of sustained treatment strategies. J Clin Epidemiol. 2018;96:12-22. [PMC free article: PMC5847447] [PubMed: 29203418]
41. Smith NL, Blondon M, Wiggins KL, et al. Lower risk of cardiovascular events in postmenopausal women taking oral estradiol compared with oral conjugated equine estrogens. JAMA Intern Med. 2014;174(1):25-31. [PMC free article: PMC4636198] [PubMed: 24081194]
42. Newman SC. Causal analysis of case-control data. Epidemiol Perspect Innov. 2006;3:2. [PMC free article: PMC1431532] [PubMed: 16441879]
43. Floyd JS, Blondon M, Moore KP, Boyko EJ, Smith NL. Validation of methods for assessing cardiovascular disease using electronic health data in a cohort of Veterans with diabetes. Pharmacoepidemiol Drug Saf. 2016;25(4):467-471. [PMC free article: PMC4826840] [PubMed: 26555025]

Acknowledgments

We would like to thank all the participants in our stakeholder engagement.

Research reported in this report was funded through a Patient-Centered Outcomes Research Institute® (PCORI®) Award (ME-1609-36748). Further information is available at: https://www.pcori.org/research-results/2017/developing-new-methods-comparing-treatments-case-control-studies

Institution Receiving Award: Harvard T.H. Chan School of Public Health
PCORI ID: ME-1609-36748

Suggested citation:

Danaei G, Rasouli B, Chubak J, et al. (2021). Developing New Methods for Comparing Treatments in Case-Control Studies. Patient-Centered Outcomes Research Institute (PCORI). https://doi.org/10.25302/07.2021.ME.160936748

Disclaimer

The views, statements, and opinions presented in this report are solely the responsibility of the author(s) and do not necessarily represent the views of the Patient-Centered Outcomes Research Institute® (PCORI®), its Board of Governors, or its Methodology Committee.

Copyright © 2021. Harvard T.H. Chan School of Public Health. All Rights Reserved.

This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits noncommercial use and distribution provided the original author(s) and source are credited (see https://creativecommons.org/licenses/by-nc-nd/4.0/).

Bookshelf ID: NBK604777; PMID: 38976621; DOI: 10.25302/07.2021.ME.160936748
