Analysis

A pragmatic–explanatory continuum indicator summary (PRECIS): a tool to help trial designers

Kevin E. Thorpe, Merrick Zwarenstein, Andrew D. Oxman, Shaun Treweek, Curt D. Furberg, Douglas G. Altman, Sean Tunis, Eduardo Bergel, Ian Harvey, David J. Magid and Kalipso Chalkidou
CMAJ May 12, 2009 180 (10) E47-E57; DOI: https://doi.org/10.1503/cmaj.090523

Randomized trials have traditionally been broadly categorized as either effectiveness or efficacy trials, although we prefer the terms “pragmatic” and “explanatory.” Schwartz and Lellouch described these 2 approaches to clinical trials in 1967. 1 These authors coined the term “pragmatic” to describe trials that help users choose between options for care, and “explanatory” to describe trials that test causal research hypotheses (i.e., that a given intervention causes a particular benefit).

We take the view that, in general, pragmatic trials are primarily designed to determine the effects of an intervention under the usual conditions in which it will be applied, whereas explanatory trials are primarily designed to determine the effects of an intervention under ideal circumstances. 2 Thus, these terms refer to a trial’s purpose and, in turn, structure. The degree to which this purpose is met depends on decisions about how the trial is designed and, ultimately, conducted.

Very few trials are purely pragmatic or explanatory. For example, in an otherwise explanatory trial, there may be some aspect of the intervention that is beyond the investigator’s control. Similarly, the act of conducting an otherwise pragmatic trial may impose some control, so that the setting is no longer quite usual. For example, merely collecting data for a trial that would not otherwise be collected in usual practice could be enough to modify participant behaviour in unanticipated ways. Furthermore, several aspects of a trial are relevant, relating to the choice of trial participants, health care practitioners, interventions, adherence to protocol and analysis. Thus, we are left with a multidimensional continuum rather than a dichotomy, and a particular trial may display varying levels of pragmatism across these dimensions.

In this article, we describe an effort to develop a tool to assess and display the position of any given trial within the pragmatic–explanatory continuum. The primary aim of this tool is to help trialists assess the degree to which design decisions align with the trial’s stated purpose (decision-making v. explanation). Our tool differs, therefore, from that of Gartlehner and associates 3 in that it is intended to inform trial design rather than to provide a method of classifying trials for the purpose of systematic reviews. It can, however, also be used by research funders, ethics committees, trial registers and journal editors to make the same assessment, provided trialists declare their intended purpose and adequately report their design decisions. The reporting of pragmatic trials is addressed elsewhere. 4

Ten ways in which pragmatic and explanatory trials can differ

Trialists need to make design decisions in 10 domains that determine the extent to which a trial is pragmatic or explanatory. Explanatory randomized trials that seek to answer the question “Can this intervention work under ideal conditions?” address these 10 domains with a view to maximizing whatever favourable effects an intervention might possess. 2 Table 1 illustrates how an explanatory trial, in its most extreme form, might approach these 10 domains.


Table 1: PRECIS domains illustrating the extremes of explanatory and pragmatic approaches to each domain

Pragmatic randomized trials that seek to answer the question “Does this intervention work under usual conditions?” 5,6 address these 10 domains in different ways when there are important differences between usual and ideal conditions. Table 1 illustrates the most extreme pragmatic response to these domains.

The design choices for a trial intended to inform a research decision about the benefit of a new drug are likely to be more explanatory (reflecting ideal conditions). Those for a later trial of the same drug intended to inform practical decisions by clinicians or policy-makers are likely to be more pragmatic (reflecting usual conditions). When planning their trial, trialists should consider whether a trial’s design matches the needs of those who will use the results. A tool to locate trial design choices within the pragmatic–explanatory continuum could facilitate these design decisions, help to ensure that the choices that are made reflect the intended purpose of the trial, and help others to appraise the extent to which a trial is appropriately designed for its intended purpose.

Such a tool could, for example, expose potential inconsistencies, such as the use of intensive adherence monitoring and intervention (explanatory tactics) in a trial being designed to answer a more pragmatic question. Alternatively, a trial might include a wide range of participants and meaningfully assess the impact (pragmatic tactics) but evaluate an intervention that is enforced or tightly monitored (explanatory tactics) and thus not widely feasible. By supporting the identification of potential inconsistencies such as these, a pragmatic–explanatory indicator could improve the extent to which trial designs are fit for purpose by highlighting design choices that do not support the needs of the intended users of the trial’s results. In this article we introduce such a tool.

The pragmatic–explanatory distinction comprises a continuous spectrum, not an either/or dichotomy of the extremes, as illustrated in Table 1. Moreover, it is probably impossible ever to perform a “purely” explanatory or “purely” pragmatic trial. For example, no patients are perpetually compliant, and the hand of the most skilled surgeon occasionally slips, so there can never be a “pure” explanatory trial. Similarly, a “pure” pragmatic trial loses its purity as soon as its first eligible patient refuses to be randomized.

Development of the PRECIS tool

The proposal for the pragmatic–explanatory continuum indicator summary (PRECIS) was developed by an international group of interested trialists at 2 meetings in Toronto (2005 and 2008) and in the time between. The initiative grew from the Pragmatic Randomized Controlled Trials in HealthCare (Practihc) project (www.practihc.org), an initiative funded by Canada and the European Union to promote pragmatic trials in low- and middle-income countries.

The development of the PRECIS indicator began with the identification of key domains that distinguish pragmatic from explanatory trials. As illustrated in Table 1, they comprise:

  • The eligibility criteria for trial participants.

  • The flexibility with which the experimental intervention is applied.

  • The degree of practitioner expertise in applying and monitoring the experimental intervention.

  • The flexibility with which the comparison intervention is applied.

  • The degree of practitioner expertise in applying and monitoring the comparison intervention.

  • The intensity of follow-up of trial participants.

  • The nature of the trial’s primary outcome.

  • The intensity of measuring participants’ compliance with the prescribed intervention, and whether compliance-improving strategies are used.

  • The intensity of measuring practitioners’ adherence to the study protocol, and whether adherence-improving strategies are used.

  • The specification and scope of the analysis of the primary outcome.

During the 2005 meeting, 8 domains emerged from a brainstorming session, and 5 mutually exclusive definitions were used to assign the level of pragmatism in each domain. Attempts to use this initial tool on a number of published trials revealed some difficulties: the mutually exclusive categories were technically difficult to understand and use and, in some cases, contradictory among domains. The current approach, for the most part, is to consider a number of design tactics or restrictions consistent with an explanatory trial in each domain. The more such tactics are present, the more explanatory the trial. However, these design tactics and restrictions (see “The domains in detail” section for some examples) are not equally important, so placing a trial is not a simple matter of adding up tactics. Where exactly to place a trial on the pragmatic–explanatory continuum is, therefore, a judgment best made by trialists discussing these issues at the design stage of their trial and reaching consensus. Initially, the domains for intervention flexibility and practitioner expertise each addressed both the experimental and comparison interventions. Discussions at the 2008 meeting led to the separation of the experimental and comparison interventions into their own domains and to the replacement of a domain on trial duration with the domain related to the nature of the primary outcome.

At this point, a brief explanation of our use of some terminology may be helpful. In this paper, we view a trial participant as the recipient of the intervention. In many trials, the participants are patients. However, in a trial of a continuing education intervention, for example, the participants may be physicians. By practitioner we mean the person delivering the intervention. Again, for many trials the practitioners are physicians. For a continuing education intervention, the practitioners may be trained instructors.

We defined the purpose of a pragmatic trial as answering the question “Does an intervention work under usual conditions?”, where we take “usual conditions” to mean the same as, or very similar to, the usual-care setting. Characterizing the pragmatic extreme of each domain is less straightforward, since what is considered “usual care” may depend on context. For some interventions, what is usual for each domain may vary across different settings. For example, the responsiveness and compliance of patients, adherence of practitioners to guidelines, and the training and experience of practitioners may be different in different settings. Thus, characterizing the pragmatic extreme requires specifying the settings for which a trial is intended to provide an answer. Occasionally a pragmatic trial addresses a question in a single specific setting. For example, a randomized trial of interventions to improve the use of active sick leave was designed to answer a pragmatic question under usual conditions specific to the Norwegian context, where active sick leave was being promoted as a public sickness benefit scheme offered to promote early return to modified work for temporarily disabled workers. 7 More often pragmatic trials will address questions across specific types of settings or across a wide range of settings. Examples of specific types of settings include settings where chloroquine-resistant falciparum malaria is endemic, where hospital facilities are in close proximity, or where trained specialists are available.

Conversely, we defined the purpose of an explanatory trial as answering the question “Can an intervention work under ideal conditions?” Given this definition, characterizing the explanatory extreme of each domain is relatively straightforward and intuitive. It simply requires considering the design decisions one would make in order to maximize the chances of success. Thus, for example, one would select patients that are most likely to comply and respond to the intervention, ensure that the intervention is delivered in a way that optimizes its potential for beneficial effects, and ensure that it is delivered by well-trained and experienced practitioners.

Thus, we recommend that trialists or others assessing whether design decisions are fit for purpose do this in 4 steps:

  1. Declare whether the purpose of the trial is pragmatic or explanatory.

  2. Specify the settings or conditions for which the trial is intended to be applicable.

  3. Specify the design options at the pragmatic and explanatory extremes of each domain.

  4. Decide how pragmatic or explanatory a trial is in relation to those extremes for each domain.

For some trials, there may not be any important difference between the pragmatic and explanatory extremes for some domains. For example, delivering an intervention such as acetylsalicylic acid (ASA) therapy to someone with an acute myocardial infarction does not require practitioner expertise. As mentioned earlier, for domains where the extremes are clear, it should not be difficult to decide whether a design decision is at one extreme or the other. For design decisions that are somewhere in between the extremes, it can be more challenging to determine how pragmatic or explanatory a trial will be. For this reason we recommend that all the members of the trial design team rate each domain and compare their ratings.
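The comparison of ratings across team members can be summarized very simply. The following is a minimal sketch, not part of the PRECIS tool itself: it assumes an illustrative scale from 0 (explanatory extreme) to 1 (pragmatic extreme), hypothetical raters and scores, and an arbitrary threshold for flagging domains where agreement is poor and further discussion is warranted.

from statistics import mean

def summarize_ratings(ratings, disagreement=0.3):
    """ratings maps each PRECIS domain to {rater: score in [0, 1]},
    where 0 is the explanatory extreme and 1 the pragmatic extreme.
    Returns the mean position per domain and the domains whose ratings
    spread more widely than the (illustrative) disagreement threshold."""
    means = {domain: mean(scores.values()) for domain, scores in ratings.items()}
    to_discuss = [domain for domain, scores in ratings.items()
                  if max(scores.values()) - min(scores.values()) > disagreement]
    return means, to_discuss

# Hypothetical ratings from a 3-person design team for 2 of the 10 domains.
ratings = {
    "Participant eligibility": {"rater A": 0.90, "rater B": 0.80, "rater C": 0.85},
    "Participant compliance": {"rater A": 0.20, "rater B": 0.70, "rater C": 0.40},
}
means, to_discuss = summarize_ratings(ratings)
print(means)       # average position of each rated domain on the continuum
print(to_discuss)  # ['Participant compliance'] -- consensus needed before proceeding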

To facilitate steps 3 and 4, we have identified a number of design tactics that either add restrictions typical of explanatory trials or remove restrictions in keeping with a pragmatic approach. The tactics that we describe here are not intended to be prescriptive, exhaustive or ordered in any particular way, but rather illustrative. They are meant to aid trialists or others in assessing where, within the pragmatic–explanatory continuum, a domain lies, allowing them to put a “tick” on a line representing the continuum. To display the “results” of this assessment, the lines for each domain are arranged like the spokes of a wheel, with the explanatory pole near the hub and the pragmatic pole on the rim (Figure 1). The display is completed by joining the locations of all 10 indicators as we progress around the wheel.


Figure 1: The blank “wheel” of the pragmatic–explanatory continuum indicator summary (PRECIS) tool. “E” represents the “explanatory” end of the pragmatic–explanatory continuum.

The proposed scales seem to make sense intuitively and can be used without special training. Although we recognize that alternative graphical displays are possible, we feel that the proposed wheel plot is an appealing summary and is informative in at least 3 ways.

First, it depicts whether a trial is tending to take a broad view (as in a pragmatic trial asking whether an intervention does work, under usual conditions) or tending to be narrowly “focused” near the hub (as for an explanatory trial asking whether an intervention can work, under ideal conditions).

Second, the wheel plot highlights inconsistencies in how the 10 domains will be managed in a trial. For example, if a trial is to admit all patients and practitioners (extremely pragmatic) yet will intensely monitor compliance and intervene when it falters (extremely explanatory), a single glance at the wheel will immediately identify this inconsistency. This allows the researcher to make adjustments in the design, if possible and appropriate, to obtain greater consistency with their objective in conducting the trial.

Third, the wheel plot can help trialists better report any limitations in interpretation or generalization resulting from design inconsistencies. This could help users of the trial results make better decisions.
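Because the completed display is essentially a 10-spoke radar chart, it can be drawn with general-purpose plotting software. The sketch below is illustrative only and is not the authors’ software: it assumes scores on a 0 (hub, explanatory) to 1 (rim, pragmatic) scale, abbreviated domain labels and hypothetical values for a largely pragmatic trial, and it uses matplotlib’s polar axes in Python.

import numpy as np
import matplotlib.pyplot as plt

# The 10 PRECIS domains, abbreviated for labelling the spokes.
domains = [
    "Eligibility", "Experimental flexibility", "Experimental expertise",
    "Comparison flexibility", "Comparison expertise", "Follow-up intensity",
    "Primary outcome", "Participant compliance", "Practitioner adherence",
    "Primary analysis",
]
# Hypothetical scores: 0 = explanatory extreme (hub), 1 = pragmatic extreme (rim).
scores = [0.9, 0.8, 0.7, 0.9, 0.8, 0.6, 0.9, 1.0, 0.9, 1.0]

# One spoke per domain; repeat the first point so the plotted line closes.
angles = np.linspace(0, 2 * np.pi, len(domains), endpoint=False)
angles = np.concatenate([angles, angles[:1]])
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, marker="o")   # join the locations of the 10 indicators
ax.set_xticks(angles[:-1])
ax.set_xticklabels(domains, fontsize=7)
ax.set_ylim(0, 1)                     # hub ("E", explanatory) to rim (pragmatic)
ax.set_yticks([])
ax.set_title("PRECIS summary (hypothetical trial)")
plt.show()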

The domains in detail

Participant eligibility criteria

The most extremely pragmatic approach to eligibility would seek only to identify study participants with the condition of interest from as many sources (e.g., institutions) as possible. As one moves toward a more explanatory attitude, additional restrictions will be placed on the study population. These restrictions include the following:

  • Excluding participants not known or shown to be highly compliant to the interventions under study.

  • Excluding participants not known or shown to be at high risk for the primary trial outcome.

  • Excluding participants not expected to be highly responsive to the experimental intervention.

  • Using a small number of sources (or even 1) for participants.

The first 3 restrictions noted above are typically achieved by applying various exclusion criteria to filter out participants thought least likely to respond to the intervention. So, explanatory trials tend to have more exclusion criteria than pragmatic trials. Exclusion criteria for known safety issues would not necessarily count against a pragmatic trial, since such individuals would not be expected to get the intervention under usual practice.

Flexibility of experimental intervention

The pragmatic approach leaves the details of how to implement the experimental intervention up to the practitioners. For example, the details of how to perform a surgical procedure are left entirely to the surgeon. How to deliver an educational program is left to the discretion of the educator. In addition, the pragmatic approach would not dictate which co-interventions were permitted or how to deliver them. Several restrictions on the intervention’s flexibility are possible:

  • Specific direction could be given for administering the intervention (e.g., dose, dosing schedule, surgical tactics, educational material and delivery).

  • Timing of the delivery of the intervention could be designed to maximize the intervention effect.

  • The number and permitted types of co-interventions could be restricted, particularly if excluded co-interventions would dilute any intervention effect.

  • Specific direction could be given for applying permitted co-interventions.

  • Specific direction could be given for managing complications or side effects from the primary intervention.

Experimental intervention — practitioner expertise

A pragmatic approach would put the experimental intervention into the hands of all practitioners treating (or, for example, educating) the study participants. The choice of practitioner can be restricted in a number of ways:

  • Practitioners could be required to have some experience, defined by length of time, in working with the participants like the ones to be enrolled in the trial.

  • Some specialty certification appropriate to the intervention could be required.

  • For an intervention that has been in use (e.g., surgery) without a trial evaluation, experience with the intervention itself could be required.

  • Only practitioners who are deemed to have sufficient experience in the subjective opinion of the trial investigator would be invited to participate.

Flexibility of the comparison intervention

Specification of the flexibility of the comparison intervention complements that of the flexibility of the experimental intervention. A pragmatic trial would typically compare an intervention to “usual practice” or the best alternative management strategy available, whereas an explanatory trial would restrict the flexibility of the comparison intervention and might, in the case of early-phase drug development trials, use a placebo rather than the best alternative management strategy as the comparator.

Comparison intervention — practitioner expertise

Similar comments apply as for the specification of the flexibility of the comparison intervention. In both cases, the explanatory extreme would maximize the chances of detecting whatever benefits an intervention might have, whereas the pragmatic extreme would aim to find out the benefits and harms of the intervention in comparison with usual practice in the settings of interest.

Follow-up intensity

The pragmatic position would be not to seek follow-up contact with the study participants in excess of the usual practice for the practitioner. The most extreme position is to have no contact with study participants and instead obtain outcome data by other means (e.g., administrative databases to determine mortality). Various adjustments to follow-up intensity are possible. The extent to which these adjustments could lead to increased compliance or improved intervention response will determine whether follow-up intensity moves toward the explanatory end.

  • Follow-up visits (timing and frequency) are prespecified in the protocol.

  • Follow-up visits are more frequent than typically would occur outside the trial (i.e., under usual care).

  • Unscheduled follow-up visits are triggered by a primary outcome event.

  • Unscheduled follow-up visits are triggered by an intervening event that is likely to lead to the primary outcome event.

  • Participants are contacted if they fail to keep trial appointments.

  • More extensive data are collected, particularly intervention-related data, than would be typical outside the trial.

Often the required trial outcomes may be obtained only through contact with the participants. Even in the “no follow-up” approach, assessment of outcomes may be achieved with a single “end of study” follow-up. The end of study would need to be defined so that there is sufficient time for the desired study outcomes (see “Primary trial outcome” section) to be observed. When follow-up is done in this way, it is unlikely to have an impact on compliance or responsiveness. However, there may often be considerable tension between unobtrusive follow-up and the ability to collect the necessary outcomes. Often, although not always, explanatory trials are interested in the effect of an intervention during the intervention period, or shortly afterward. On the other hand, pragmatic trials may follow patients well beyond the intervention period in their quest to answer the “does this work?” question. Such longer term follow-up may well require more patient contact than usual care. However, it is not necessarily inconsistent with a pragmatic approach, provided it does not result in patient management that differs from usual conditions, since such differences could increase the chance of detecting an intervention effect beyond what would be expected under usual conditions.

Primary trial outcome

For primary trial outcome, it is more intuitive to begin from the explanatory pole and describe the progression to the pragmatic pole. The most explanatory approach would consider a primary outcome (possibly surrogate, as in dose-finding trials intended to demonstrate a biological response) that the experimental intervention is expected to have a direct effect on. Phase 3 and 4 trials often have patient-important outcomes and thus may be more pragmatic in this domain. There may well be central adjudication of the outcome, or assessment of the outcome may require special training or tests not normally used to apply outcome definition criteria. Two obvious relaxations of the strict outcome assessment present in explanatory trials are the absence of central outcome adjudication and the reliance on usual training and measurement to determine the outcome status. For some interventions, the issue may be whether to measure outcomes only during the intervention period or up to a “reasonable” time after the intervention is complete. For example, stroke could be a primary outcome for explanatory and pragmatic trials. However, time horizons may vary from short term following a one-time intervention (more explanatory) to long term (more pragmatic).

Participant compliance with “prescribed” intervention

The pragmatic approach recognizes that noncompliance with any intervention is a reality in routine medical practice. Because measurement of compliance may possibly alter subsequent compliance, the pragmatic approach in a trial would be not to measure or use compliance information in any way. The more rigorous a trial is in measuring and responding to noncompliance of the study participants, the more explanatory it becomes:

  • Compliance is measured (indirectly) purely for descriptive purposes at the conclusion of the trial.

  • Compliance data are measured and fed back to providers or participants during follow-up.

  • Uniform compliance-improving strategies are applied to all participants.

  • Compliance-improving strategies are applied to participants with documented poor compliance.

For some trials, the goal of an intervention may be to improve compliance with a treatment guideline. Provided the compliance measurement is not used, directly or indirectly, to influence subsequent compliance, a trial could still be “very pragmatic” in this domain. On the other hand, if measuring compliance is part of the intervention (e.g., audit and feedback), this domain would, appropriately, move toward a more explanatory approach if audit and feedback could not be similarly applied as part of the intervention under usual circumstances.

Practitioner adherence to study protocol

The pragmatic approach takes account of the fact that providers will vary in how they implement an intervention. A purely pragmatic approach, therefore, would not be concerned with how practitioners vary or “customize” a trial protocol to suit their setting. By monitoring and (especially) acting on protocol nonadherence, a trial shifts toward being more explanatory:

  • Adherence is measured (indirectly) purely for descriptive purposes at the conclusion of the trial.

  • Adherence data are measured and fed back to practitioners.

  • Uniform adherence-improving strategies are applied to all practitioners.

  • Adherence-improving strategies are applied to practitioners with documented poor adherence.

Analysis of the primary outcome

Recall that the pragmatic trial is concerned with the question “Does the intervention work under usual conditions?” Assuming other aspects of a trial have been treated in a pragmatic fashion, an analysis that makes no special allowance for noncompliance, nonadherence or practice variability, for example, is most appropriate for this question. So, the pragmatic approach to the primary analysis would typically be an intention-to-treat analysis of an outcome of direct relevance to the study participants and the population they represent. The intention-to-treat analysis is also the norm for explanatory trials, especially when regulatory approval for an intervention is being sought. However, there are various restrictions that may (additionally) be used to address the explanatory question “Can this intervention work under ideal conditions?”:

  • Exclude noncompliant participants.

  • Exclude patients found to be ineligible after randomization.

  • Exclude data from nonadherent practitioners.

  • Plan multiple subgroup analyses for groups thought to have the largest treatment effect.

For some explanatory trials (e.g., dose-finding trials), it may be appropriate to restrict the primary analysis in the ways mentioned; otherwise, such restricted analyses of the primary outcome would be preplanned as secondary analyses of the primary outcome. Note that, if all domains of the trial were designed in an explanatory fashion and the trial were conducted accordingly, the above restrictions should have very little impact. A purely pragmatic approach would not consider these restricted analyses.

Examples

To demonstrate the use of the PRECIS tool, we applied the instrument to 4 trials exhibiting varying degrees of pragmatic and explanatory approaches. Table 2 describes how these trials addressed the 10 domains previously described. As we have stated previously, the PRECIS tool is intended to be used at the design stage. We have applied it post-hoc to these examples for illustrative purposes only.


Table 2: A PRECIS assessment of 4 trials (part 1)


Table 2: A PRECIS assessment of 4 trials (part 2)


Table 2: A PRECIS assessment of 4 trials (part 3)

The first example uses the trial of self-supervised and directly observed treatment of tuberculosis (DOT). 8 The DOT trial asked the question: Among South African adults with newly diagnosed pulmonary tuberculosis, does direct observation of pill swallowing 5 times weekly by a nurse in the clinic, compared with self-administration, increase the probability that patients will take more than 80% of the doses within 7 months of starting treatment, with no interruptions of more than 2 weeks? In this example, the experimental intervention was self-administration and the comparison intervention was DOT, which was widely used (throughout South Africa and elsewhere) but not adequately evaluated.

The second example uses the North American Symptomatic Carotid Endarterectomy Trial (NASCET). 9 The NASCET trial asked the question: Among patients with symptomatic stenosis (70%–99%) of a carotid artery (and therefore at high risk of stroke), can the addition of carotid endarterectomy (performed by an expert vascular or neurosurgeon with an excellent track record) to best medical therapy, compared with best medical therapy alone, reduce the outcomes of major stroke or death over the next 2 years?

The third example uses the Collaborative Low-dose Aspirin Study in Pregnancy (CLASP) trial. 10 The placebo-controlled trial was designed to “provide reliable evidence about the overall safety of low-dose aspirin use in pregnancy and to find out whether treatment really produces worthwhile effects on morbidity and on fetal and neonatal mortality.”

The final example uses the trial by Caritis and colleagues. 11 This is another placebo-controlled trial of ASA designed to determine whether low-dose ASA therapy could reduce the incidence of pre-eclampsia among women at high risk for this condition.

Figure 1 shows a blank wheel plot for summarizing the 10 indicators. All that is left is to mark each spoke to represent the location on the explanatory (hub) to pragmatic (rim) continuum and connect the dots.

Given the tactics used in the DOT trial in each of these dimensions, if we link each of the dots to its immediate neighbour, we get a visual representation of the very broad pragmatic approach of this trial (Figure 2A). Similarly, given the tactics used in the NASCET trial in each of these domains, Figure 2B provides a visual representation of the mostly narrow explanatory approach of this trial. The final 2 examples are trials of the same intervention for the same condition. It can be seen from Figure 2C and Figure 2D that the CLASP trial tended to be more pragmatic than the trial by Caritis and colleagues.


Figure 2: (A) PRECIS summary of a randomized controlled trial of self-supervised and directly observed treatment of tuberculosis (DOT). 8 (B) PRECIS summary of the North American Symptomatic Carotid Endarterectomy Trial (NASCET) of carotid endarterectomy in symptomatic patients with high-grade carotid stenosis. 9 (C) PRECIS summary of a randomized trial of low-dose acetylsalicylic acid (ASA) therapy for the prevention and treatment of pre-eclampsia (CLASP). 10 (D) PRECIS summary of a randomized trial of low-dose ASA for the prevention of pre-eclampsia in women at high risk. 11 “E” represents the “explanatory” end of the pragmatic–explanatory continuum.

Comment

The PRECIS tool is an initial attempt to identify and quantify trial characteristics that distinguish between pragmatic and explanatory trials to assist researchers in designing trials. As such, we welcome suggestions for its further development. For example, the tool is applicable to individually randomized trials. It would probably apply to cluster randomized trials as well, but we have not tested it for those designs.

It is not hard to imagine that a judgment call is required to position the dots on the wheel diagram, especially for domains that are not at an extreme. Because trials are typically designed by a team of researchers, the PRECIS tool should be used by all involved in the design of the trial, leading to a consensus view on where the trial is situated within the pragmatic–explanatory continuum. The possible subjectiveness of dot placement should help focus the researcher’s attention on those domains that are not as pragmatic or explanatory as they would like. Clearly, domains where consensus is difficult to achieve warrant more attention.

There are other characteristics that may more often be present in pragmatic trials but, because they can also be found in explanatory trials, are not immediately helpful for discrimination. An appreciation of these characteristics helps round out the picture somewhat and assists with the interpretation of a given trial. For example, in a pragmatic trial, the comparison intervention is, by definition, standard care. So, one would be unlikely to use a placebo group in a pragmatic trial. Therefore, although the presence of a placebo group suggests an explanatory trial, absence of a placebo group does not necessarily suggest a pragmatic trial. Another example of this is blinding, whether it be blinded intervention delivery or outcome assessment blinded to treatment assignment. Blinding is desirable in all trials to the extent possible. Blinding may be less practical to achieve in some pragmatic trials, but that does not imply that blinding is inconsistent with a pragmatic trial.

Understanding the context for the applicability of the trial results is essential for all trials. For example, the intervention studied in a pragmatic trial should be one that is feasible to implement in the “real world” after the completion of the trial. However, feasibility is often context specific. For example, an intervention could be easy to implement in Ontario, Canada, but all but impossible to implement in a low-income country because of cost, different health care delivery systems and many other reasons.

Our initial experiences developing the PRECIS tool suggest that it has the potential to be useful for trial design, although we anticipate that some refinement of the scales will be required. The reporting of pragmatic trials is addressed elsewhere. 4 The simple graphical summary is a particularly appealing feature of this tool. We believe it has value for the planning of trials and the assessment of whether the design of a trial is fit for purpose. The tool can help ensure the right balance is struck to achieve the primary purpose of a trial, which may be to answer an “explanatory” question about whether an intervention can work under ideal conditions or to answer a “pragmatic” question about whether an intervention does work under usual conditions. The PRECIS tool highlights the multidimensional nature of the pragmatic–explanatory continuum. This multidimensional structure should be borne in mind by trial designers and end-users alike so that overly simplistic labelling of trials can be avoided.

We would also like to caution readers not to confound the structure of a trial with its usefulness to potential users. Schwartz and Lellouch clearly linked a trial’s ability to meet its purpose to decisions about how the trial is designed; taken together, these decisions determine where the trial sits on the explanatory–pragmatic continuum. 1 However, how useful a trial is depends not only on its design but also on the similarity between the user’s context and that of the trial. Although it is unreasonable to expect the results of a trial to apply in all contexts, trials should be designed and reported in such a way that users of the results can make meaningful judgments about applicability to their own context. 12

Finally, we stress that this article, building on earlier work from multiple investigators, describes a “work in progress.” We welcome suggestions from all who read it, especially those who wish to join us in its further development. The words with which Schwartz and Lellouch closed their 1967 paper continue to apply: “This article makes no pretention to originality, nor to the provision of solutions; we hope we have clarified certain issues to the extent of encouraging further discussion.”

See related commentaries by Zwarenstein and Treweek, page 998, and by Maclure, page 1001

Footnotes

  • Published at www.cmaj.ca on Apr. 16, 2009. An abridged version of this article appeared in the May 12 issue of CMAJ. This article was published simultaneously in the May 2009 issue of the Journal of Clinical Epidemiology (www.jclinepi.com).

    Competing interests: None declared.

    Contributors: All of the authors made significant contributions to the intellectual content of this paper, reviewed multiple drafts for important omissions and have approved the final manuscript.

    Acknowledgements: We are especially indebted to Dr. David L. Sackett for his encouragement and advice during the development of the tool and preparation of this manuscript. We would also like to acknowledge the contributions made by the numerous attendees at the Toronto workshops in 2005 and 2008.

    The Practihc group was supported by the European Commission’s 5th Framework INCO program (contract ICA4-CT-2001-10019). The 2005 Toronto meeting was supported by a Canadian Institutes of Health Research grant (no. FRN 63095). The 2008 Toronto meeting was supported by the UK Medical Research Council, the Centre for Health Services Sciences at Sunnybrook Health Sciences Centre, Toronto, Canada, the Center for Medical Technology Policy, Baltimore, USA, and the National Institute for Health and Clinical Excellence, London, UK.

REFERENCES

  1. Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutical trials. J Chronic Dis 1967;20:637–48. [Reprinted in J Clin Epidemiol 2009;62:499–505.]
  2. Sackett DL. Explanatory vs. management trials. In: Haynes RB, Sackett DL, Guyatt GH, et al., editors. Clinical epidemiology: how to do clinical practice research. Philadelphia (PA): Lippincott, Williams and Wilkins; 2006.
  3. Gartlehner G, Hansen RA, Nissman D, et al. A simple and valid tool distinguished efficacy from effectiveness studies. J Clin Epidemiol 2006;59:1040–8.
  4. Zwarenstein M, Treweek S, Gagnier J, et al.; CONSORT and Pragmatic Trials in Healthcare (Practihc) groups. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ 2008;337:a2390.
  5. Tunis SR, Stryer DB, Clancy CM. Practical clinical trials: increasing the value of clinical research for decision making in clinical and health policy. JAMA 2003;290:1624–32.
  6. Tunis SR. A clinical research strategy to support shared decision making. Health Aff 2005;24:180–4.
  7. Scheel IB, Hagen KB, Herrin J, et al. Blind faith? The effects of promoting active sick leave for back pain patients. A cluster-randomized trial. Spine 2002;27:2734–40.
  8. Zwarenstein M, Schoeman JH, Vundule C, et al. Randomised controlled trial of self-supervised and directly observed treatment of tuberculosis. Lancet 1998;352:1340–3.
  9. North American Symptomatic Carotid Endarterectomy Trial Collaborators. Beneficial effect of carotid endarterectomy in symptomatic patients with high-grade carotid stenosis. N Engl J Med 1991;325:445–53.
  10. CLASP (Collaborative Low-dose Aspirin Study in Pregnancy) Collaborative Group. CLASP: a randomized trial of low-dose aspirin for the prevention and treatment of pre-eclampsia among 9364 pregnant women. Lancet 1994;343:619–29.
  11. Caritis S, Sibai B, Hauth J, et al. Low-dose aspirin to prevent preeclampsia in women at high risk. N Engl J Med 1998;338:701–5.
  12. Rothwell PM. External validity of randomised controlled trials: “To whom do the results of this trial apply?” Lancet 2005;365:82–93.