
Does Random Treatment Assignment Cause Harm to Research Participants?

  • Cary P Gross,

    To whom correspondence should be addressed. E-mail: cary.gross@yale.edu

    Affiliations Section of General Internal Medicine, Yale University School of Medicine, New Haven, Connecticut, United States of America, Robert Wood Johnson Clinical Scholars Program, Yale School of Medicine, New Haven, Connecticut, United States of America

  • Harlan M Krumholz,

    Affiliations Section of General Internal Medicine and Yale Cancer Center, Yale University School of Medicine, New Haven, Connecticut, United States of America, Robert Wood Johnson Clinical Scholars Program, Yale School of Medicine, New Haven, Connecticut, United States of America, Section of Health Policy and Administration, Department of Epidemiology and Public Health, Yale University School of Medicine, New Haven, Connecticut, United States of America

  • Gretchen Van Wye,

    Affiliation Section of Chronic Disease Epidemiology, Department of Epidemiology and Public Health, Yale University School of Medicine, New Haven, Connecticut, United States of America

  • Ezekiel J Emanuel,

    Affiliation Department of Clinical Bioethics, Warren G. Magnuson Clinical Center, National Institutes of Health, Bethesda, Maryland, United States of America

  • David Wendler

    Affiliation Department of Clinical Bioethics, Warren G. Magnuson Clinical Center, National Institutes of Health, Bethesda, Maryland, United States of America

Abstract

Background

Some argue that by precluding individualized treatment, randomized clinical trials (RCTs) provide substandard medical care, while others claim that participation in clinical research is associated with improved patient outcomes. However, there are few data to assess the impact of random treatment assignment on RCT participants. We therefore performed a systematic review to quantify the differences in health outcomes between randomized trial participants and eligible non-participants.

Methods and Findings

Studies were identified by searching Medline, the Web of Science citation database, and manuscript references. Studies were eligible if they documented baseline characteristics and clinical outcomes of RCT participants and eligible non-participants, and allowed non-participants access to the same interventions available to trial participants. Primary study outcomes according to patient group (randomized trial participants versus eligible non-participants) were extracted from all eligible manuscripts. For 22 of the 25 studies (88%) meeting eligibility criteria, there were no significant differences in clinical outcomes between patients who received random assignment of treatment (RCT participants) and those who received individualized treatment assignment (eligible non-participants). In addition, there was no relation between random treatment assignment and clinical outcome in 15 of the 17 studies (88%) in which randomized and nonrandomized patients had similar health status at baseline.

Conclusions

These findings suggest that randomized treatment assignment as part of a clinical trial does not harm research participants.

Editors' Summary

Background.

When researchers test a new treatment, they give it to a group of patients. If the test is to be fair and provide useful results, there should also be a control group of patients who are studied in parallel. The patients in the control group receive either a different treatment, a pretend treatment (“a placebo”), or no treatment at all. But how do researchers decide who should be in the treatment group and who should be in the control group? This is an important question because the test would not be fair if, for example, all the individuals in the treatment group were elderly men and the controls were all young women, or if everyone in the treatment group received their treatment in a well-equipped specialist hospital and the controls received care in a local general hospital. Statisticians would say that the results from such studies were “confounded” by the differences between the two groups. Instead, patients should be allocated to treatment or control groups at random. Randomization also has the advantage that it can conceal from the researchers, and from the patients, whether the treatment being given is the new one or an old one or a placebo. This is important because—again for example—researchers might hold strong beliefs about the effectiveness of a new treatment and this bias in its favor might lead them, perhaps only subconsciously, to allocate younger, stronger patients to the treatment group. For these and other reasons, randomized clinical trials (RCTs) are regarded as the “gold standard” in assessing the effectiveness of treatments.

Why Was This Study Done?

Doctors normally decide on the “best” treatment for an individual patient based on their knowledge and experience. However, if a patient has agreed to be part of an RCT, then their treatment will instead be chosen at random. Some people worry that patients who participate in RCTs may, because their treatment is less “personalized,” have a lower chance of recovery from their illness than similar patients who are not in trials. In contrast, others argue that, particularly if the trial is part of an important research program, being in an RCT is to the patient's advantage. This study aimed to find out whether either of these possibilities is true.

What Did the Researchers Do and Find?

The researchers conducted a thorough electronic search of medical journals in order to find published RCTs for which information—both before and after treatment—had been recorded not only about the patients who were enrolled in the trials, but also about other patients whose condition made them eligible to participate but who were not actually enrolled. The researchers also decided in advance that they were only interested in such RCTs if the non-enrolled patients had access to the same treatment or treatments that were given to the trial participants. Only 25 RCTs were found that met these requirements. There were nearly 18,000 patients in these studies; overall, 45% had their treatment assigned by randomization and 55% had not been randomized. Most of the RCTs were for treatments for cancer, problems of the heart and circulation, and obstetric and gynecological issues. The “clinical outcomes” recorded in the trials varied and included, for example, death/survival, recurrence of cancer, and improvement of hearing. In 22 of these trials, there were no statistically significant differences in clinical outcomes between patients who received random assignment of treatment (i.e., the RCT participants) and those who received individualized treatment assignment (eligible non-participants). In the remaining three trials, the patients whose treatment had been assigned at random had somewhat worse outcomes than the eligible non-participants.

What Do These Findings Mean?

These findings suggest that randomized treatment assignment as part of a clinical trial does not harm research participants, nor does there appear to be an advantage to being randomized in a trial.

Additional Information.

Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0030188.

  • The James Lind Library has been created to help patients and researchers understand fair tests of treatments in health care by illustrating how fair tests have developed over the centuries

  • Wikipedia, a free Internet encyclopedia that anyone can edit, has pages on RCTs

Introduction

Despite widespread reliance on randomized clinical trials (RCTs), and claims that they represent the “gold standard” for assessing treatment efficacy, ethical concerns have been raised about the impact of RCTs on participants [1–4]. Specifically, there is a perception that individual patients are likely to have better outcomes when treatment decisions are based on physicians' clinical judgment, rather than random assignment [1, 2]. It has been claimed that, by forgoing individualized treatment assignment, choosing research participants' treatments at random leads to an “inevitable compromise of personal care in the service of obtaining valid research results” [1]. Further, physician and patient concerns about random treatment assignment are among the most frequently cited reasons for refusal to enroll in RCTs [5–8].

While some commentators focus on the specific impact of random treatment assignment, others have investigated the broader topic of differences in clinical outcomes between research participants and “real” patients in the community setting. Some studies have suggested that research participation may be associated with improved clinical outcomes [9–14]. These data have led some to recommend trial participation as a means to better treatment [15]. For instance, the National Comprehensive Cancer Network's clinical practice guidelines in oncology state that “the best management for any cancer patient is in a clinical trial” [15]. Yet these conclusions are not based on strong evidence [16]. In particular, comparisons of research participants versus non-participants often include non-participants who do not meet trial eligibility criteria [16]. Because of stringent eligibility criteria, trial participants tend to be younger and healthier than non-participants in the community [16–18]. Trial participation may also be the only means of access to some therapies: if the investigational therapy is available only in the research setting and turns out to be superior to existing therapies, trial participants who were allocated to the newer agent would be more likely to benefit. Further, the supportive clinical care that participants receive as part of research in resource-rich settings associated with some clinical trials may also be associated with superior outcomes. Recognizing these flaws in the existing data, a recent review of the literature called for more studies that assess the impact of participation in clinical research on patient outcomes in a methodologically rigorous manner [16].

It is particularly timely to disentangle the issues surrounding the effect of research participation on patients. There has recently been increased emphasis on designing trials that compare commercially available, clinically relevant alternatives [19–21]. Some authors have advocated substantial increases in funding of “pragmatic” trials enrolling large numbers of patients in community practice settings [19]. Additionally, Medicare's policy has recently been modified so as to provide reimbursement for some new therapies only if patients receive them in the setting of a clinical trial [22]. In contrast to the previous paradigm, which viewed randomized trials as a tool to evaluate the efficacy of novel therapeutic agents, these innovations will likely result in many more patients encountering the decision of trial enrollment in the setting of routine clinical care. Patients who are asked to enroll in these pragmatic trials will have to decide whether to receive therapy that was selected via randomization or to select treatment with the input of their clinician.

Given the increased emphasis on recruiting large numbers of patients into trials, it is important to consider the question of enrollment from the perspective of patients who meet all eligibility criteria and are asked to enroll. If they agree to have their treatment selected at random, rather than by their clinicians or themselves, will they be more likely to experience adverse outcomes? We sought to answer this question by examining the potential risks associated with random treatment allocation, rather than delineating differences between trial participants and non-participants [16, 18]. While numerous studies have demonstrated the differences between trial participants and their counterparts in the community, few have focused specifically on the impact of random treatment assignment. Specifically, we were interested in the group of patients who were eligible for participation in an RCT but could also receive either of the therapies offered in the RCT even if they refused to enroll. We conducted a systematic review of published randomized controlled trials to compare the clinical outcomes of randomized patients and nonrandomized patients who were eligible for the same trial, were cared for in the same clinical setting, and received the same agents available to trial participants.

Methods

Selection of Studies

We conducted a Medline search to identify studies that (1) included only patients who were eligible for trial participation, (2) included only patients who were cared for at the same institutions and at the same time in which the randomized trial was recruiting, (3) allowed non-participants access to the agents used in the trial, (4) provided outcome data for both trial participants and eligible non-participants, and (5) recruited all participants in a similar manner.

The Medline search employed 23 unique combinations of terms and strings of terms (see Protocol S1). We focused a significant portion of our Medline search on identifying studies that met our definition of comprehensive cohort study design (see Protocol S1 for terms and phrases). The comprehensive cohort study design, also called the partially randomized patient preference trial design, offers eligible research participants the chance to refuse randomization but receive either the study intervention or the control intervention per study protocol [23]. In addition, we used the references of relevant manuscripts, authors' own bibliographic libraries, and Web of Science to identify frequently cited researchers and papers.

The Medline search identified 1,505 studies; the Web of Science search identified 371 studies. Of these 1,876 studies, 1,555 were judged by two reviewers, on the basis of their titles, to be potentially appropriate for inclusion in the current analysis. The abstracts of these 1,555 studies were then assessed by two authors for appropriate content and relevant methodology. The full texts of 48 potentially suitable manuscripts were retrieved and assessed. Of these, 25 studies met the eligibility criteria.

Data Analysis

An explicit abstraction instrument was used to obtain baseline characteristics of the RCT participants and eligible non-participants and primary clinical outcomes. Outcomes were restricted to the primary outcome listed in each manuscript; if more than one primary outcome was specified, the first one listed was used. To compare outcomes across studies, all study outcomes were standardized to “adverse” outcomes, e.g., for studies that reported survival, we converted probability of survival to probability of death. Most of the studies had dichotomous outcomes that enabled the calculation of odds ratios; those that did not were analyzed separately. In the two studies in which outcomes were expressed only as rates rather than as frequency counts, the stated proportion of people in each group who experienced the study outcome was multiplied by the number at baseline to estimate the frequency [24, 25]. In one study, non-participants were able to select from three treatment options, only two of which were part of the RCT. For this study, we included data only from non-participants who received one of the two treatments that were part of the RCT [25].
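As an illustration only, the sketch below shows how the steps just described (standardizing a reported survival rate to an adverse outcome, converting a rate to an approximate frequency count, and computing a study-level odds ratio from the resulting two-by-two table) might be implemented; it is written in Python rather than the SAS the authors used, and all counts are hypothetical.

    # Illustrative sketch, not the authors' SAS code; all numbers are hypothetical.

    def rate_to_count(rate: float, n_baseline: int) -> int:
        """Estimate the number of patients with the outcome from a reported rate."""
        return round(rate * n_baseline)

    def odds_ratio(adverse_rct: int, n_rct: int, adverse_non: int, n_non: int) -> float:
        """Odds ratio of the adverse outcome for RCT participants vs. eligible non-participants."""
        a, b = adverse_rct, n_rct - adverse_rct    # randomized: events, non-events
        c, d = adverse_non, n_non - adverse_non    # nonrandomized: events, non-events
        return (a * d) / (b * c)

    # Hypothetical study reporting 82% survival among 200 randomized patients and
    # 85% survival among 300 eligible non-participants; survival is standardized
    # to the adverse outcome (death) before the odds ratio is computed.
    deaths_rct = rate_to_count(1 - 0.82, 200)
    deaths_non = rate_to_count(1 - 0.85, 300)
    print(odds_ratio(deaths_rct, 200, deaths_non, 300))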

Because the relation between trial participation and clinical outcomes might be confounded by differences in baseline health status, we categorized the studies into three mutually exclusive groups: those in which the RCT participants were, overall, less healthy than eligible non-participants at baseline, those in which there was no clear difference in baseline health status, and those in which RCT participants were, overall, healthier at baseline. Two clinicians, using an implicit schema involving examination of baseline clinical and demographic characteristics of randomized and nonrandomized patients, independently categorized each study according to whether there was a balance of important prognostic factors between groups. Disagreements were resolved by consensus.

The odds ratios of experiencing the primary clinical outcome for RCT participants versus eligible non-participants were calculated using SAS 8.1 [26]. A Breslow–Day chi-square statistic indicated that it would be inappropriate to aggregate the results of studies with dichotomous outcomes because of heterogeneity. Thus, the outcomes are presented simply by study, according to baseline differences.
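Because the heterogeneity check is what ruled out pooling, the following sketch illustrates the general idea with a Woolf-type chi-square test of homogeneity of study-level log odds ratios; this is offered only as a stand-in for the Breslow–Day statistic the authors computed in SAS, and the example counts are hypothetical.

    # Woolf-type homogeneity test on study-level log odds ratios; a stand-in for
    # the Breslow-Day statistic, shown only to illustrate the heterogeneity check.
    import math
    from scipy.stats import chi2

    def woolf_heterogeneity(tables):
        """tables: list of (a, b, c, d) per study, where a, b are adverse/non-adverse
        counts among randomized patients and c, d among eligible non-participants."""
        log_ors, weights = [], []
        for a, b, c, d in tables:
            a, b, c, d = (x + 0.5 for x in (a, b, c, d))   # continuity correction
            log_ors.append(math.log((a * d) / (b * c)))
            weights.append(1.0 / (1 / a + 1 / b + 1 / c + 1 / d))  # inverse-variance weight
        pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
        q = sum(w * (lo - pooled) ** 2 for w, lo in zip(weights, log_ors))
        df = len(tables) - 1
        return q, df, chi2.sf(q, df)   # a small p-value argues against pooling

    # Hypothetical two-by-two counts for three studies.
    print(woolf_heterogeneity([(36, 164, 45, 255), (12, 88, 10, 90), (60, 140, 30, 170)]))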

Results

A total of 25 articles met the inclusion criteria and were selected for data abstraction. The dates of publication ranged from 1984 to 2002; the majority (80%) were published in 1990 or later. There was a broad range of conditions under investigation and of study types, including surgical trials, drug trials, and trials of counseling. The most common specialties represented were oncology (six studies), cardiovascular disease (five studies), and obstetrics/gynecology (five studies). The total number of eligible patients across all studies was 17,934 (range: 79 to 3,610); the proportion of eligible patients who agreed to be randomized ranged from 29% to 89% (average: 45%; median: 47%). The primary outcomes of interest varied across studies; the most common were mortality (9/25), acceptability of treatment (5/25), and proportion of time or number of days with a given condition (2/25).

Baseline Characteristics

Table 1 shows the study intervention and enrollment data for all 25 studies, categorized according to baseline clinical and sociodemographic characteristics. There were no clear differences in baseline health status between RCT participants and eligible non-participants in 17 studies. In one study, RCT participants were healthier than eligible non-participants at baseline, and in seven studies RCT participants were less healthy at baseline than eligible non-participants. There was no significant relation between the proportion of eligible patients who agreed to be randomized and the occurrence of differences in baseline health status of randomized versus nonrandomized patients. The mean proportion of eligible patients who agreed to be randomized in the seven studies categorized as “RCT patients less healthy” was 48.9%, while the mean in the 17 studies with no baseline differences was 43.5% (p = 0.61).

Differences in clinical and sociodemographic characteristics between groups also varied in magnitude and significance. For instance, in the Bypass Angioplasty Revascularization Investigation of angioplasty versus coronary artery bypass graft, RCT participants were significantly more likely than non-participants to have a history of myocardial infarction (55% versus 51%), heart failure (9% versus 5%), or diabetes (19% versus 17%) [24]. Significant differences in race were found in two studies: the study by Marcus and colleagues included more non-whites in the eligible, nonrandomized group (10% versus 24%; p = 0.008), and the Bypass Angioplasty Revascularization Investigation included more non-whites in the RCT group (10% versus 6%, p < 0.001) [24, 27, 28].

Outcomes

In 22 of the 25 studies (88%), there were no significant differences in clinical outcomes between patients whose treatment was selected by randomized allocation and those whose treatment was selected on the basis of clinical judgment and/or patient preferences (Table 2; Figure 1). There were no significant differences in clinical outcomes between randomized and nonrandomized patients in 15 of the 17 studies (88%) in which there were no clear baseline differences in health or sociodemographic status. Similarly, there were no significant differences in clinical outcomes between randomized and nonrandomized patients in six of the seven studies in which RCT participants were sicker than non-participants at baseline (86%; chi-square test, p > 0.05 for comparison with the “no clear baseline differences” group).

Table 2. Clinical Outcome in Randomized and Nonrandomized Patients

https://doi.org/10.1371/journal.pmed.0030188.t002

Figure 1. Relative Risk of Experiencing Primary Outcome According to RCT Participation

Asterisks indicate statistical significance. The relevant references for the studies listed along the x-axis are as follows: AVID [50, 68], EAST [51], Cooper [52], BARI [24], Chilvers [53], Bain [54], CASS [55], Link [57], Blichert-Toft [30], Henshaw [58], Nicolaides [59], SMASH [63], Mosekilde [64], Kerry [67], Bijker [25], Melchart [29], and Antman [31].

https://doi.org/10.1371/journal.pmed.0030188.g001

In Feit et al.'s analysis of the data from the Bypass Angioplasty Revascularization Investigation [24], randomized patients were more likely to have risk factors for adverse outcomes at baseline: they were more likely to have congestive heart failure, prior myocardial infarction, or diabetes, and were more likely to be non-white and less educated. The 7-y mortality in the randomized group was 17.3%, compared with 14.5% in the nonrandomized group (relative risk: 1.19; 95% confidence interval [CI]: 1.03, 1.39) [24]. In Melchart et al.'s study of acupuncture versus midazolam as pretreatment for gastroscopy [29], there were no significant differences in baseline health status between randomized and nonrandomized groups. Randomized patients were more likely than nonrandomized patients to state that they would not undergo the same treatment again (34.6% versus 15.3%; relative risk: 2.27; 95% CI: 1.06, 4.84). Similarly, in Blichert-Toft's study of mastectomy versus breast-conserving surgery for breast cancer [30], randomized patients were more likely than nonrandomized patients to experience the outcome of cancer recurrence (13.7% versus 6.6%), although the difference was of borderline significance (relative risk: 2.08; 95% CI: 1.07, 4.02).
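To make the arithmetic behind figures such as these concrete, the short Python sketch below computes a relative risk and a Wald-type 95% confidence interval on the log scale; the event counts are hypothetical, since the exact group sizes are not reported in this text, so the interval will not reproduce those quoted above.

    # Relative risk with a Wald-type 95% CI on the log scale; counts are hypothetical.
    import math

    def relative_risk_ci(events_rct, n_rct, events_non, n_non, z=1.96):
        rr = (events_rct / n_rct) / (events_non / n_non)
        se_log_rr = math.sqrt(1 / events_rct - 1 / n_rct + 1 / events_non - 1 / n_non)
        lo, hi = (math.exp(math.log(rr) + s * z * se_log_rr) for s in (-1, 1))
        return rr, lo, hi

    # E.g., 173 deaths among 1,000 randomized patients versus 145 among 1,000
    # nonrandomized patients gives a relative risk near 1.19.
    print(relative_risk_ci(173, 1000, 145, 1000))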

In the single study in which randomized patients were categorized as having a better baseline health status than nonrandomized patients, there was a nonsignificant trend towards the randomized patients being less likely to experience disease recurrence or death (odds ratio for randomized versus nonrandomized: 0.35; 95% CI: 0.12, 1.01) [31].

Discussion

When there are several treatment options available, and there is uncertainty about which one is superior, it is assumed that individualized treatment assignment—in which clinicians consider the health status and preferences of each patient and incorporate them into a recommendation—is more likely to yield desirable outcomes. This is why doctors don't flip coins, and this is also why some may assume that randomization as part of a trial is harmful. In 22 of the 25 published clinical trials that met inclusion criteria, there were no significant differences in the likelihood of experiencing the primary study outcomes between patients whose treatment was determined by random allocation versus those whose treatment was selected on the basis of clinical judgment and/or patient preferences. More importantly, in 15 of the 17 studies in which randomized and nonrandomized patients were classified as having similar health status at baseline, there were no significant differences between these groups in clinical outcomes. These data contradict the perception that random treatment assignment as part of a clinical trial is harmful to research participants.

The finding that randomized research participants and non-participants tend to achieve similar clinical outcomes also contradicts prior studies suggesting that trial participation may be associated with superior clinical outcomes [9–14]. Many of the previous studies that reported such a difference failed to account for the numerous differences between clinical care and clinical research that may influence patient outcomes, including the fact that research participants are often younger, healthier, and treated by clinicians with more experience in treating patients with the condition of interest. In contrast, we restricted the present analysis to studies that included only patients who were eligible for RCT participation and had access to similar treatments whether or not they chose to enroll in the RCT. While our study sample was therefore restricted to a relatively small subset of RCTs, our findings suggest that the purported benefit of trial participation is probably due to baseline differences between participants and non-participants, or to differences in treatments received.

All of the studies included in the present analysis allowed access to the experimental therapies to patients who refused trial enrollment. It is unclear whether our results can be generalized to randomized trials that include newer, and potentially more efficacious, therapies that are not available outside the research setting. However, a recent analysis found that only 36% of trials presented at an annual meeting of the American Society of Clinical Oncology yielded “positive” results [32]. These findings contradict the widespread assumption that access to experimental therapies is beneficial [33–38]. Future work should explore whether participation in randomized trials of otherwise unavailable agents is associated with superior clinical outcomes.

While our comprehensive and systematic search identified far more manuscripts than any prior review of this topic that we are aware of, our final sample size is small relative to the number of RCTs conducted annually [39]. As a result, although our findings were consistent across disease entities and different types of intervention, they may not be generalizable. As noted in prior reviews, many of the primary studies did not control for differences in baseline health characteristics [16, 39]. We used an implicit, dual-review approach to account for this potential bias, stratifying manuscripts according to baseline differences between trial participants and non-participants. Ideally, future work employing primary data would enable multivariate analysis of patient-level information, to account for important patient characteristics that may affect patient outcomes. The increasing use of electronic medical records represents a tremendous opportunity for establishing longitudinal registry databases to facilitate follow-up of patients who are offered trial enrollment, yet decline.

Our results should be interpreted with several considerations in mind. We restricted our analysis to the primary outcomes assessed in the included studies. In particular, many studies assessed the outcome of mortality, and there may have been differences in the probability of other adverse events, satisfaction, or quality of life between RCT participants and non-participants. Similarly, clinical trials may include additional research procedures, such as blood draws and lumbar punctures, that do not affect patient outcomes but that pose burdens to participants. Additionally, random assignment refers only to the investigational agent. Even among RCT participants, clinician-investigators generally have some latitude regarding other aspects of care that are administered to their patients and can therefore provide individualized care that consists of interventions that are distinct from the investigational agent. Similarly, clinicians may halt existing treatment for patients who are offered a choice of enrolling in a study. In these instances, if a patient is provided one of the treatment interventions offered in the study—whether selected by randomization or by patient choice—it is possible that the initial treatment may have been superior to either of the treatments under investigation. Further, publication bias might have yielded underestimates of differences between RCT participants and eligible non-participants, as investigators may have been reluctant to report data from the non-participants in their registries if those data did not support the generalizability of their RCTs. Finally, there may have been important differences in health status between randomized and nonrandomized patients that were not reported by the investigators. However, given that the vast majority of the studies in our sample found no difference in health outcomes between groups, one would have to invoke a systematic over- or underestimation of health status in the randomized groups across multiple studies in order to introduce bias into this synthesis.

Numerous studies indicate that RCT participants often fail to understand that their treatments will be determined by random assignment [18, 40–42]. For example, a recent analysis found that half of parents who decided whether to enroll their children in a leukemia trial did not understand that treatment allocation would be determined by chance [18]. The failure to understand randomization is often regarded as part of a broader phenomenon, termed the “therapeutic misconception,” according to which individuals assume that research treatments are based on physicians' decisions regarding what is best for them [1, 43]. In this context, our findings have important implications for the informed consent process. In addition to explaining randomization, investigators should also explain that, in general, there is little evidence that participating in randomized trials is either helpful or harmful.

What do our findings say about the impact of clinical judgment and patient preferences on clinical outcomes? Although clinicians and patients may be reluctant to forego clinical decision-making, our data suggest that undergoing randomization, rather than receiving an individualized treatment recommendation from a clinician, is not harmful. This conclusion calls into question clinicians' ability to determine which therapy is superior for their patients in the setting of clinical equipoise, i.e., when there is uncertainty in the expert community about which treatment is superior for patients in general [44]. It has also been suggested that some patients who are not randomly assigned to a treatment may achieve a better outcome not because of an objective therapeutic effect, but because they were assigned to the treatment arm they preferred—a logical extension of the placebo effect [45]. To account for this possible “preference effect,” some have called for incorporating patient treatment preferences into the analysis phase of RCTs [45]. Our data provide preliminary evidence that this preference effect does not bias the outcomes of RCTs: patients who received a treatment preferred by themselves or their clinicians did not experience superior outcomes. These findings are consistent with the results of a recent review in which the authors stratified patients according to treatment received and then compared the outcomes of patients who were randomized versus those who selected each therapy [46].

A critical barrier to enrolling patients in research studies is the fact that many patients are not even asked to participate [47]. One reason physicians hesitate to recruit their own patients is their reluctance to forego individualized treatment decisions for their patients [7, 48]. This reluctance is especially important because physician recommendations are among the strongest predictors of trial enrollment [49]. The current findings suggest that, in the setting of clinical equipoise, randomized treatment allocation as part of an RCT is unlikely to be harmful. This does not imply that research is free of risk, as the risks and benefits of experimental treatment may vary substantially between studies. However, in the situation in which patients will have access to the treatments that are used in the study setting regardless of whether the patient enrolls, prospective participants and their referring physicians should be reassured: there is no evidence that random treatment assignment leads to worse clinical outcomes. Furthermore, patients who do participate in such research can contribute to the important objective of improving health and well-being for all patients.

Supporting Information

Protocol S1. Literature Search Keywords and Results

https://doi.org/10.1371/journal.pmed.0030188.sd001

(67 KB DOC)

Acknowledgments

The authors would like to acknowledge Drs. Frank Miller and Stephen Straus for their thoughtful comments. The views expressed are the authors' own. They do not represent the position or policy of the National Institutes of Health or the Department of Health and Human Services.

Author Contributions

CPG, EJE, and DW designed the study. GVW abstracted data from articles. CPG, GVW, EJE, and DW analyzed the data. CPG, HMK, GVW, EJE, and DW contributed to writing the paper.

References

  1. Appelbaum PS, Roth LH, Lidz CW, Benson P, Winslade W (1987) False hopes and best data: Consent to research and the therapeutic misconception. Hastings Cent Rep 17: 20–24.
  2. Taylor KM, Margolese RG, Soskolne CL (1984) Physicians' reasons for not entering eligible patients in a randomized clinical trial of surgery for breast cancer. N Engl J Med 310: 1363–1367.
  3. Feinstein AR (1984) Current problems and future challenges in randomized clinical trials. Circulation 70: 767–774.
  4. Abel U, Koch A (1999) The role of randomization in clinical studies: Myths and beliefs. J Clin Epidemiol 52: 487–497.
  5. Kemeny MM, Peterson BL, Kornblith AB, Muss HB, Wheeler J, et al. (2003) Barriers to clinical trial participation by older women with breast cancer. J Clin Oncol 21: 2268–2275.
  6. Jenkins V, Fallowfield L (2000) Reasons for accepting or declining to participate in randomized clinical trials for cancer therapy. Br J Cancer 82: 1783–1788.
  7. Fallowfield L, Ratcliffe D, Souhami R (1997) Clinicians' attitudes to clinical trials of cancer therapy. Eur J Cancer 33: 2221–2229.
  8. Taylor K, Feldstein M, Skeel R, Pandya K, Carbone P (1994) Fundamental dilemmas of the randomized clinical trial process: Results of a survey of 1737 Eastern Cooperative Oncology Group investigators. J Clin Oncol 12: 1796–1805.
  9. Daugherty C, Ratain MJ, Grochowski E, Stocking C, Kodish E, et al. (1995) Perceptions of cancer patients and their physicians involved in phase I trials. J Clin Oncol 13: 1062–1072.
  10. Joffe S, Weeks JC (2002) Views of American oncologists about the purposes of clinical trials. J Natl Cancer Inst 94: 1847–1853.
  11. Yuval R, Halon DA, Merdler A, Khader N, Karkabi B, et al. (2000) Patient comprehension and reaction to participating in a double-blind randomized clinical trial (ISIS-4) in acute myocardial infarction. Arch Intern Med 160: 1142–1146.
  12. Karjalainen S, Palva I (1989) Do treatment protocols improve end results? A study of survival of patients with multiple myeloma in Finland. BMJ 299: 1069–1072.
  13. Davis S, Wright P, Schulman S, Hill L, Pikham R, et al. (1985) Participants in prospective randomized clinical trials for resected non-small cell lung cancer have improved survival compared with non-participants in such trials. Cancer 56: 1710–1718.
  14. Marubini E, Mariani L, Salvadori B, Veronesi U, Saccozzi R, et al. (1996) Results of a breast-cancer-surgery trial compared with observational data from routine practice. Lancet 347: 1000–1003.
  15. National Comprehensive Cancer Network (2006) NCCN clinical practice guidelines in oncology. Jenkintown (Pennsylvania): National Comprehensive Cancer Network. Available: http://www.nccn.org/professionals/physician_gls/f_guidelines.asp. Accessed 4 April 2006.
  16. Peppercorn J, Weeks JC, Cook EF, Joffe S (2004) Comparison of outcomes in cancer patients treated within and outside clinical trials: Conceptual framework and structured review. Lancet 363: 263–270.
  17. Heiat A, Gross CP, Krumholz HM (2002) Representation of the elderly, women, and minorities in heart failure clinical trials. Arch Intern Med 162: 1682–1688.
  18. Kodish E, Eder M, Noll RB, Ruccione K, Lange B, et al. (2004) Communication of randomization in childhood leukemia trials. JAMA 291: 470–475.
  19. Tunis SR, Stryer DB, Clancy CM (2003) Practical clinical trials: Increasing the value of clinical research for decision making in clinical and health policy. JAMA 290: 1624–1632.
  20. ALLHAT Officers and Coordinators for the ALLHAT Collaborative Research Group (2002) Major outcomes in high-risk hypertensive patients randomized to angiotensin-converting enzyme inhibitor or calcium channel blocker vs diuretic: The Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT). JAMA 288: 2981–2997.
  21. Cannon CP, McCabe CH, Belder R, Breen J, Braunwald E (2002) Design of the Pravastatin or Atorvastatin Evaluation and Infection Therapy (PROVE IT)-TIMI 22 trial. Am J Cardiol 89: 860–861.
  22. Kolata G (2004) Medicare covering new treatments, but with a catch. New York Times; Sect A: 1.
  23. Olschewski M, Scheurlen H (1985) Comprehensive cohort study: An alternative to randomised consent. Methods Inf Med 24: 131–134.
  24. Feit F, Brooks M, Sopko G, Keller N, Rosen A, et al. (2000) Long-term clinical outcome in the Bypass Angioplasty Revascularization Investigation Registry. Circulation 101: 2795–2802.
  25. Bijker N, Peterse JL, Fentiman IS, Julien JP, Hart AA, et al. (2002) Effects of patient selection on the applicability of results from a randomised clinical trial (EORTC 10853) investigating breast-conserving therapy for DCIS. Br J Cancer 87: 615–620.
  26. SAS Institute (2000) SAS/STAT, version 8.1 [computer program]. Cary (North Carolina): SAS Institute.
  27. Marcus S (1997) Assessing non-consent bias with parallel randomized and nonrandomized clinical trials. J Clin Epidemiol 50: 823–828.
  28. Paradise JL, Bluestone CD, Rogers KD, Taylor FH, Colborn DK, et al. (1990) Efficacy of adenoidectomy for recurrent otitis media in children previously treated with tympanostomy-tube placement: Results of parallel randomized and nonrandomized trials. JAMA 263: 2066–2073.
  29. Melchart D, Steger HG, Linde K, Makarian K, Hatahet Z, et al. (2002) Integrating patient preferences in clinical trials: A pilot study of acupuncture versus midazolam for gastroscopy. J Altern Complement Med 8: 265–274.
  30. Blichert-Toft M, Brincker H, Andersen JA, Andersen KW, Axelsson CK, et al. (1988) A Danish randomized trial comparing breast-preserving therapy with mastectomy in mammary carcinoma: Preliminary results. Acta Oncol 27: 671–677.
  31. Antman K, Amato D, Wood W, Carson J, Suit H, et al. (1985) Selection bias in clinical trials. J Clin Oncol 3: 1142–1147.
  32. Krzyzanowska MK, Pintilie M, Tannock IF (2003) Factors associated with failure to publish large randomized trials presented at an oncology meeting. JAMA 290: 495–501.
  33. Weijer C (1999) Selecting subjects for participation in clinical research: One sphere of justice. J Med Ethics 25: 31–36.
  34. United States Public Law 103–43 (1993) NIH Revitalization Act of 1993, Subtitle B, Section 610: 131–133.
  35. National Cancer Institute (2003) Age alone should not prevent older patients from enrolling in clinical trials. Bethesda (Maryland): National Cancer Institute. Available: http://www.cancer.gov/clinicaltrials/developments/age-as-barrier1005. Accessed 20 April 2006.
  36. Stallings FL, Ford ME, Simpson NK, Fouad M, Jernigan JC, et al. (2000) Black participation in the Prostate, Lung, Colorectal and Ovarian (PLCO) Cancer Screening Trial. Control Clin Trials 21: 379S–389S.
  37. Kolata G, Eichenwald K (1999) Group of insurers to pay for experimental cancer therapy. New York Times; Sect C: 1, 9.
  38. ECRI (2002) Should I enter a clinical trial? A patient reference guide for adults with a serious or life-threatening illness. Plymouth Meeting (Pennsylvania): ECRI. Available: http://www.ecri.org/Patient_Information/Patient_Reference_Guide/prg.pdf. Accessed 4 April 2006.
  39. Braunholtz DA, Edwards SJ, Lilford RJ (2001) Are randomized clinical trials good for us (in the short term)? Evidence for a “trial effect”. J Clin Epidemiol 54: 217–224.
  40. Joffe S, Cook EF, Cleary PD, Clark JW, Weeks JC (2001) Quality of informed consent in cancer clinical trials: A cross-sectional survey. Lancet 358: 1772–1777.
  41. Taub HA, Baker MT, Sturr JF (1986) Informed consent for research: Effects of readability, patient age, and education. J Am Geriatr Soc 34: 601–606.
  42. Dunn LB, Lindamer LA, Palmer BW, Golshan S, Schneiderman LJ, et al. (2002) Improving understanding of research consent in middle-aged and elderly patients with psychotic disorders. Am J Geriatr Psychiatry 10: 142–150.
  43. Lidz CW, Appelbaum PS, Grisso T, Renaud M (2004) Therapeutic misconception and the appreciation of risks in clinical trials. Soc Sci Med 58: 1689–1697.
  44. Sreenivasan G (2003) Does informed consent to research require comprehension? Lancet 362: 2016–2018.
  45. McPherson K, Britton A (1999) The impact of patient preferences on the interpretation of randomised controlled trials. Eur J Cancer 35: 1598–1602.
  46. King M, Nazareth I, Lampe F, Bower P, Chandler M, et al. (2005) Impact of participant and physician intervention preferences on randomized trials: A systematic review. JAMA 293: 1089–1099.
  47. Wendler D, Kington R, Madans J, Wye GV, Christ-Schmidt H, et al. (2006) Are racial and ethnic minorities less willing to participate in health research? PLoS Med 3: e19.
  48. Fleming I (1990) Clinical trials for cancer patients: The community practicing physician's perspective. Cancer 65: 2388–2390.
  49. Foley J, Moertel C (1991) Improving accrual into cancer clinical trials. J Cancer Educ 6: 165–173.
  50. Hallstrom A, Friedman L, Denes P, Rizo-Patron C, Morris M (2003) Do arrhythmia patients improve survival by participating in randomized clinical trials? Observations from the Cardiac Arrhythmia Suppression Trial (CAST) and the Antiarrhythmics Versus Implantable Defibrillators Trial (AVID). Control Clin Trials 24: 341–352.
  51. King SB, Barnhart HX, Kosinski AS, Weintraub WS, Lembo NJ, et al. (1997) Angioplasty or surgery for multivessel coronary artery disease: Comparison of eligible registry and randomized patients in the EAST trial and influence of treatment selection on outcomes. Am J Cardiol 79: 1453–1459.
  52. Cooper KG, Grant AM, Garratt AM (1997) The impact of using a partially randomised patient preference design when evaluating alternative managements for heavy menstrual bleeding. Br J Obstet Gynaecol 104: 1367–1373.
  53. Chilvers C, Dewey M, Fielding K, Gretton V, Miller P, et al. (2001) Antidepressant drugs and generic counselling for treatment of major depression in primary care: Randomised trial with patient preference arms. BMJ 322: 772–775.
  54. Bain C, Cooper KG, Parkin DE (2001) A partially randomized patient preference trial of microwave endometrial ablation using local anaesthesia and intravenous sedation or general anaesthesia: A pilot study. Gynaecol Endosc 10: 223–228.
  55. CASS Principal Investigators and their associates (1984) Coronary Artery Surgery Study (CASS): A randomized trial of coronary artery bypass surgery. Comparability of entry characteristics and survival in randomized patients and nonrandomized patients meeting randomization criteria. J Am Coll Cardiol 3: 114–128.
  56. Paradise JL, Bluestone CD, Bachman RZ, Colborn DK, Bernard BS, et al. (1984) Efficacy of tonsillectomy for recurrent throat infection in severely affected children: Results of parallel randomized and nonrandomized clinical trials. N Engl J Med 310: 674–683.
  57. Link MP, Goorin AM, Miser AW, Green AA, Pratt CB, et al. (1986) The effect of adjuvant chemotherapy on relapse-free survival in patients with osteosarcoma of the extremity. N Engl J Med 314: 1600–1606.
  58. Henshaw RC, Naji SA, Russell IT, Templeton AA (1993) Comparison of medical abortion with surgical vacuum aspiration: Women's preferences and acceptability of treatment. BMJ 307: 714–717.
  59. Nicolaides K, Brizot Mde L, Patel F, Snijders R (1994) Comparison of chorionic villus sampling and amniocentesis for fetal karyotyping at 10–13 weeks' gestation. Lancet 344: 435–439.
  60. McKay JR, Alterman AI, McLellan AT, Snider EC, O'Brien CP (1995) Effect of random versus nonrandom assignment in a comparison of inpatient and day hospital rehabilitation for male alcoholics. J Consult Clin Psychol 63: 70–78.
  61. Schmoor C, Olschewski M, Schumacher M (1996) Randomized and non-randomized patients in clinical trials: Experiences with comprehensive cohort studies. Stat Med 15: 263–271.
  62. de C Williams AC, Nicholas MK, Richardson PH, Pither CE, Fernandes J (1999) Generalizing from a controlled trial: The effects of patient preference versus randomization on the outcome of inpatient versus outpatient chronic pain management. Pain 83: 57–65.
  63. Urban P, Stauffer JC, Bleed D, Khatchatrian N, Amann W, et al. (1999) A randomized evaluation of early revascularization to treat shock complicating acute myocardial infarction: The (Swiss) Multicenter Trial of Angioplasty for Shock—(S)MASH. Eur Heart J 20: 1030–1038.
  64. Mosekilde L, Beck-Nielsen H, Sorensen OH, Nielsen SP, Charles P, et al. (2000) Hormonal replacement therapy reduces forearm fracture incidence in recent postmenopausal women—Results of the Danish Osteoporosis Prevention Study. Maturitas 36: 181–193.
  65. Rovers MM, Straatman H, Ingels K, van der Wilt GJ, van den Broek P, et al. (2001) Generalizability of trial results based on randomized versus nonrandomized allocation of OME infants to ventilation tubes or watchful waiting. J Clin Epidemiol 54: 789–794.
  66. Wieringa-de Waard M, Vos J, Bonsel GJ, Bindels PJ, Ankum WM (2002) Management of miscarriage: A randomized controlled trial of expectant management versus surgical evacuation. Hum Reprod 17: 2445–2450.
  67. Kerry S, Hilton S, Dundas D, Rink E, Oakeshott P (2002) Radiography for low back pain: A randomised controlled trial and observational study in primary care. Br J Gen Pract 52: 469–474.
  68. Kim SG, Hallstrom A, Love JC, Rosenberg Y, Powell J, et al. (1997) Comparison of clinical characteristics and frequency of implantable defibrillator use between randomized patients in the Antiarrhythmics Vs Implantable Defibrillators (AVID) trial and nonrandomized registry patients. Am J Cardiol 80: 454–457.