
Education and debate: Getting research findings into practice

Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings

BMJ 1998; 317 doi: https://doi.org/10.1136/bmj.317.7156.465 (Published 15 August 1998) Cite this as: BMJ 1998;317:465
  1. Lisa A Bero, associate professor (a),
  2. Roberto Grilli, head (b),
  3. Jeremy M Grimshaw, programme director (c) (j.m.grimshaw@abdn.ac.uk),
  4. Emma Harvey, research fellow (d),
  5. Andrew D Oxman, director (e),
  6. Mary Ann Thomson, senior research fellow (c)
  a. Institute for Health Policy Studies, University of California at San Francisco, 1388 Sutter Street, 11th floor, San Francisco, CA 94109, USA
  b. Unit of Clinical Policy Analysis, Laboratory of Clinical Epidemiology, Istituto di Ricerche Farmacologiche Mario Negri, Via Eritrea 62, 20157 Milan, Italy
  c. Health Services Research Unit, Department of Public Health, Aberdeen AB25 2ZD
  d. Department of Health Sciences and Clinical Evaluation, University of York, York YO1 5DD
  e. Health Services Research Unit, National Institute of Public Health, PO Box 4404 Torshov, N-0462 Oslo, Norway

  Correspondence to: Dr Grimshaw (c)

    This is the seventh in a series of eight articles analysing the gap between research and practice

    Series editors: Andrew Haines and Anna Donald

    Despite the considerable amount of money spent on clinical research, relatively little attention has been paid to ensuring that the findings of research are implemented in routine clinical practice.1 Many different types of intervention can be used to promote behavioural change among healthcare professionals and the implementation of research findings. Disentangling the effects of an intervention from the influence of contextual factors is difficult when interpreting the results of individual trials of behavioural change.2 Nevertheless, systematic reviews of rigorous studies provide the best evidence of the effectiveness of different strategies for promoting behavioural change. 3 4 In this paper we examine systematic reviews of different strategies for the dissemination and implementation of research findings, both to identify evidence of the effectiveness of different strategies and to assess the quality of the systematic reviews.

    Summary points

    Systematic reviews of rigorous studies provide the best evidence on the effectiveness of different strategies to promote the implementation of research findings

    Passive dissemination of information is generally ineffective

    It seems necessary to use specific strategies to encourage implementation of research based recommendations and to ensure changes in practice

    Further research on the relative effectiveness and efficiency of different strategies is required

    Identification and inclusion of systematic reviews

    We searched Medline records dating from 1966 to June 1995 using a strategy developed in collaboration with the NHS Centre for Reviews and Dissemination. The search identified 1139 references. No reviews from the Cochrane Effective Practice and Organisation of Care Review Group4 had been published during this time. In addition, we searched the Database of Abstracts of Reviews of Effectiveness (DARE) (http://www.york.ac.uk/inst/crd) but did not identify any other review meeting the inclusion criteria.

    We searched for any review of interventions to improve professional performance that reported explicit selection criteria and in which the main outcomes considered were changes in performance or outcome. Reviews that did not report explicit selection criteria, systematic reviews focusing on the methodological quality of published studies, published bibliographies, bibliographic databases, and registers of projects on dissemination activities were excluded from our review. If systematic reviews had been updated we considered only the most recently published review. For example, the Effective Health Care bulletin on implementing clinical guidelines superseded the earlier review by Grimshaw and Russell. 5 6

    Two reviewers independently assessed the quality of the reviews and extracted data on the focus, inclusion criteria, main results, and conclusions of each review. A previously validated checklist (comprising nine criteria, each scored as done, partially done, or not done) was used to assess quality. 7 8 Each review was also given a summary score (out of seven) based on its overall scientific quality. Major disagreements between reviewers were resolved by discussion and consensus.
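
    As an illustration of how these checklist scores aggregate into the figures reported below, the following sketch (in Python, using entirely hypothetical review data; the real checklist criteria are those in references 7 and 8) tallies the nine-item scores across a set of reviews:

        # Tally nine-item checklist scores across reviews; the data are hypothetical.
        from collections import Counter

        # Each review is scored on nine criteria: "done", "partial", or "not done".
        reviews = [
            ["done"] * 4 + ["partial"] * 4 + ["not done"],  # hypothetical review 1
            ["done"] * 3 + ["partial"] * 5 + ["not done"],  # hypothetical review 2
        ]

        counts = Counter(score for review in reviews for score in review)
        total = sum(counts.values())  # number of reviews x nine criteria
        for score in ("done", "partial", "not done"):
            print(f"{score}: {counts[score]}/{total} ({100 * counts[score] / total:.0f}%)")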

    Results and assessment of systematic reviews

    We identified 18 reviews that met the inclusion criteria. They were categorised as focusing on broad strategies (such as the dissemination and implementation of guidelines 5 6 9-11), continuing medical education, 12 13 particular strategies (such as audit and feedback, 14 15 computerised decision support systems, 16 17 or multifaceted interventions18), particular target groups (for example, nurses19 or primary healthcare professionals20), and particular problem areas or types of behaviour (for example, diagnostic testing,15 prescribing,21 or aspects of preventive care 15 16 22-25). Most primary studies were included in more than one review, and some reviewers published more than one review. No systematic reviews published before 1988 were identified. None of the reviews explicitly addressed the cost effectiveness of different strategies for effecting changes in behaviour.

    The reviews did not adopt a common approach to categorising interventions and potentially confounding factors, and their inclusion criteria and methods varied considerably. The same intervention was frequently classified differently in different reviews.

    Common methodological problems included failure to report adequately the criteria used to select studies for inclusion, failure to avoid bias in the selection of studies, failure to report adequately the criteria used to assess validity, and failure to apply those criteria to the selected studies. Overall, 42% (68/162) of the checklist criteria were scored as done, 49% (80/162) as partially done, and 9% (14/162) as not done. The mean summary score was 4.13 (range 2 to 6, median 3.75, mode 3).

    Encouragingly, reviews published more recently seemed to be of better quality. For reviews published between 1988 and 1991 (n=6), only 20% (11/54) of criteria were scored as done (mean summary score 3.0); for reviews published after 1991 (n=12), 52% (56/108) of criteria were scored as done (mean summary score 4.7).



    Five reviews attempted formal meta-analyses of the results of the studies identified. 12 17 19 23 25 The appropriateness of meta-analysis in three of these reviews is uncertain, 12 17 19 and their results should be considered exploratory at best, given the broad focus of the reviews and the heterogeneity of the included studies with respect to the types of interventions, targeted behaviours, contextual factors, and other methodological factors.2
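
    To make the concern about heterogeneity concrete, the sketch below computes Cochran's Q, a standard test of whether study results are too heterogeneous to pool under a fixed effect model; the effect estimates and standard errors are hypothetical, not data from any of the reviews:

        # Cochran's Q test for heterogeneity; all numbers are hypothetical.
        effects = [0.10, 0.45, -0.05, 0.60]  # hypothetical study effect sizes
        ses = [0.12, 0.15, 0.10, 0.20]       # hypothetical standard errors

        weights = [1 / se ** 2 for se in ses]  # inverse variance weights
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
        df = len(effects) - 1

        # A Q value well above its degrees of freedom signals heterogeneity,
        # which argues against naive pooling of disparate interventions.
        print(f"pooled estimate = {pooled:.3f}, Q = {q:.2f} on {df} df")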

    A number of consistent themes were identified by the systematic reviews (box). (Further details about the systematic reviews are available on the BMJ's website.) Most of the reviews identified modest improvements in performance after interventions. However, the passive dissemination of information was generally ineffective in altering practices no matter how important the issue or how valid the assessment methods. 5 9 11 13 21 26 The use of computerised decision support systems has led to improvements in the performance of doctors in terms of decisions on drug dosage, the provision of preventive care, and the general clinical management of patients, but not in diagnosis.16 Educational outreach visits have resulted in improvements in prescribing decisions in North America. 5 13 Patient mediated interventions also seem to improve the provision of preventive care in North America (where baseline performance is often very low).13 Multifaceted interventions (that is, a combination of methods that includes two or more interventions such as participation in audit and a local consensus process) seem to be more effective than single interventions. 13 18 There is insufficient evidence to assess the effectiveness of some interventions—for example the identification and recruitment of local opinion leaders (practitioners nominated by their colleagues as influential).5

    Interventions to promote behavioural change among health professionals

    Consistently effective interventions

    • Reminders (manual or computerised)

    • Multifaceted interventions (a combination that includes two or more of the following: audit and feedback, reminders, local consensus processes, or marketing)

    • Interactive educational meetings (participation of healthcare providers in workshops that include discussion or practice)

    Interventions of variable effectiveness

    • Audit and feedback (or any summary of clinical performance)

    • The use of local opinion leaders (practitioners identified by their colleagues as influential)

    • Local consensus processes (inclusion of participating practitioners in discussions to ensure that they agree that the chosen clinical problem is important and the approach to managing the problem is appropriate)

    • Patient mediated interventions (any intervention aimed at changing the performance of healthcare providers for which specific information was sought from or given to patients)

    Interventions that have little or no effect

    • Educational materials (distribution of recommendations for clinical care, including clinical practice guidelines, audiovisual materials, and electronic publications)

    • Didactic educational meetings (such as lectures)

    Few reviews attempted explicitly to link their findings to theories of behavioural change. The difficulties associated with linking findings and theories are illustrated in the review by Davis et al, who found that the results of their overview supported several different theories of behavioural change.13

    Availability and quality of primary studies

    This overview also provided an opportunity to assess the availability and quality of primary research on dissemination and implementation. Identifying published studies of behavioural change is difficult because they are poorly indexed and scattered across generalist and specialist journals. Nevertheless, two reviews provide an indication of the extent of research in this area. Oxman et al identified 102 randomised or quasirandomised controlled trials involving 160 comparisons of interventions to improve professional practice.11 The Effective Health Care bulletin on implementing clinical guidelines identified 91 rigorous studies (including 63 randomised or quasirandomised controlled trials and 28 controlled before and after studies or time series analyses).5 Although the studies included in these two reviews fulfilled the minimum inclusion criteria, some were methodologically flawed, with potentially major threats to their validity. Many studies randomised health professionals or groups of professionals (cluster randomisation) but analysed the results by patient, resulting in a possible overestimation of the significance of the observed effects (unit of analysis error).27 Given the small to moderate size of the observed effects, this could lead to false conclusions about the significance of the effectiveness of interventions in both meta-analyses and qualitative analyses. Few studies attempted any form of economic analysis.
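
    The unit of analysis error can be illustrated with a small simulation (all parameters hypothetical): because patients in the same practice tend to be alike, analysing a cluster randomised trial by patient understates the true uncertainty, so the false positive rate exceeds the nominal 5% even when the intervention has no effect at all.

        # Simulate cluster randomised trials with no true intervention effect,
        # then analyse them (wrongly) by patient; all parameters are hypothetical.
        import random
        from scipy import stats

        random.seed(1)

        def simulate_trial(clusters_per_arm=5, patients_per_cluster=30, cluster_sd=0.5):
            arms = {}
            for arm in ("control", "intervention"):
                outcomes = []
                for _ in range(clusters_per_arm):
                    practice_effect = random.gauss(0, cluster_sd)  # shared within a practice
                    outcomes += [practice_effect + random.gauss(0, 1)
                                 for _ in range(patients_per_cluster)]
                arms[arm] = outcomes
            return arms

        false_positives = 0
        n_sims = 1000
        for _ in range(n_sims):
            arms = simulate_trial()
            _, p = stats.ttest_ind(arms["control"], arms["intervention"])  # ignores clustering
            false_positives += p < 0.05

        # Substantially above 5%, despite there being no real effect.
        print(f"False positive rate when analysing by patient: {false_positives / n_sims:.2f}")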

    Given the importance of implementing the results of sound research and the problems of generalisability across different healthcare settings, there are relatively few studies of individual interventions to effect behavioural change. The review by Oxman et al identified studies involving 12 comparisons of educational materials, 17 of conferences, four of outreach visits, six of local opinion leaders, 10 of patient mediated interventions, 33 of audit and feedback, 53 of reminders, two of marketing, eight of local consensus processes, and 15 of multifaceted interventions.11 Few studies compared the relative effectiveness of different strategies; only 22 out of 91 studies reviewed in the Effective Health Care bulletin allowed comparisons of different strategies.5 A further limitation of the evidence about different types of interventions is that the research is often conducted by limited numbers of researchers in specific settings. The generalisability of these findings to other settings is uncertain, especially because of the marked differences in undergraduate and postgraduate education, the organisation of healthcare systems, potential systemic incentives and barriers to change, and societal values and cultures. Most of the studies reviewed were conducted in North America; only 14 of the 91 studies reviewed in the Effective Health Care bulletin had been conducted in Europe.5

    The way forward

    This overview suggests that there is an increasing amount of primary and secondary research on dissemination and implementation. Even so, it is striking how little is known about the effectiveness and cost effectiveness of interventions that aim to change the practice or delivery of health care. The reviews that we examined suggest that the passive dissemination of information (for example, publication of consensus conferences in professional journals or the mailing of educational materials) is generally ineffective and, at best, results only in small changes in practice. Yet these passive approaches are probably the ones most commonly adopted by researchers, professional bodies, and healthcare organisations. Specific strategies to implement research based recommendations seem to be necessary to ensure that practices change, and studies suggest that more intensive efforts to alter practice are generally more successful.

    At a local level greater attention needs to be given to actively coordinating dissemination and implementation to ensure that research findings are implemented. The choice of intervention should be guided by the evidence on the effectiveness of dissemination and implementation strategies, the characteristics of the message,10 the recognition of external barriers to change,13 and the preparedness of the clinicians to change.28 Local policymakers with responsibility for professional education or quality assurance need to be aware of the results of implementation research, develop expertise in the principles of the management of change, and accept the need for local experimentation.

    Given the paucity of evidence, it is vital that dissemination and implementation activities are rigorously evaluated whenever possible. Studies evaluating a single intervention provide little new information about the relative effectiveness and cost effectiveness of different interventions in different settings. Greater emphasis should be given to studies that evaluate two or more interventions in a specific setting or that help clarify the circumstances likely to modify the effectiveness of an intervention. Economic evaluations should be considered an integral component of research. Researchers should have greater awareness of the issues related to cluster randomisation and should ensure that studies have adequate power and are analysed using appropriate methods.29
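
    On the question of adequate power under cluster randomisation, the standard correction is the design effect, 1 + (m - 1) x ICC, where m is the average cluster size and the ICC is the intracluster correlation coefficient; the following minimal sketch, with hypothetical numbers, shows how sharply clustering inflates the required sample size:

        # Inflate an individually randomised sample size to allow for clustering;
        # the cluster size and ICC below are hypothetical.
        def inflated_sample_size(n_individual, cluster_size, icc):
            design_effect = 1 + (cluster_size - 1) * icc
            return n_individual * design_effect

        # For example, 300 patients per arm in practices of 20 patients with an ICC of 0.05:
        print(inflated_sample_size(300, 20, 0.05))  # 585.0 patients per arm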

    The NHS research and development programme on evaluating methods to promote the implementation of research and development is an important initiative that will contribute to our knowledge of the dissemination of information and the implementation of research findings.30 However, these research issues cut across national and cultural differences in the practice and financing of health care. Moreover, the scope of these issues is such that no one country's health services research programme can examine them in a comprehensive way. This suggests that there are potential benefits of international collaboration and cooperation in research, as long as appropriate attention is paid to cultural factors that might influence the implementation process such as the beliefs and perceptions of the public, patients, healthcare professionals, and policymakers.

    The results of primary research should be systematically reviewed to identify promising implementation techniques and areas where more research is required.3 Undertaking reviews in this area is difficult because of the complexity inherent in the interventions, the variability in the methods used, and the difficulty of generalising study findings across healthcare settings. The Cochrane Effective Practice and Organisation of Care Review Group is helping to meet the need for systematic reviews of current best evidence on the effects of continuing medical education, quality assurance, and other interventions that affect professional practice. A growing number of these reviews are being published and updated in the Cochrane Database of Systematic Reviews. 4 31

    The articles in this series are adapted from Getting Research Findings into Practice, edited by Andrew Haines and Anna Donald and published by BMJ Books.

    Acknowledgments

    This paper is based on a briefing paper prepared by the authors for the Advisory Group on the NHS research and development programme on evaluating methods to promote the implementation of research and development. We thank Nick Freemantle for his contribution to this paper.

    Funding: This work was partly funded by the European Community funded Eur-Assess project. The Cochrane Effective Practice and Organisation of Care Review Group is funded by the Chief Scientist Office of the Scottish Office Home and Health Department; the NHS Welsh Office of Research and Development; the Northern Ireland Department of Health and Social Services; the research and development offices of the Anglia and Oxford, North Thames, North West, South and West, South Thames, Trent, and West Midlands regions; and by the Norwegian Research Council and Ministry of Health and Social Affairs in Norway. The Health Services Research Unit is funded by the Chief Scientist Office of the Scottish Office Home and Health Department. The views expressed are those of the authors and not necessarily the funding bodies.

    Conflict of interest: None.

    References