Effect of point-of-care computer reminders on physician behaviour: a systematic review
======================================================================================

* Kaveh G. Shojania
* Alison Jennings
* Alain Mayhew
* Craig Ramsay
* Martin Eccles
* Jeremy Grimshaw

## Abstract

**Background:** The opportunity to improve care using computer reminders is one of the main incentives for implementing sophisticated clinical information systems. We conducted a systematic review to quantify the expected magnitude of improvements in processes of care from computer reminders delivered to clinicians during their routine activities.

**Methods:** We searched the MEDLINE, Embase and CINAHL databases (to July 2008) and scanned the bibliographies of retrieved articles. We included studies in our review if they used a randomized or quasi-randomized design to evaluate improvements in processes or outcomes of care from computer reminders delivered to physicians during routine electronic ordering or charting activities.

**Results:** Among the 28 trials (reporting 32 comparisons) included in our study, we found that computer reminders improved adherence to processes of care by a median of 4.2% (interquartile range [IQR] 0.8%–18.8%). Using the best outcome from each study, we found that the median improvement was 5.6% (IQR 2.0%–19.2%). A minority of studies reported larger effects; however, no study characteristic or reminder feature significantly predicted the magnitude of effect except in one institution, where a well-developed, “homegrown” clinical information system achieved larger improvements than in all other studies (median 16.8% [IQR 8.7%–26.0%] v. 3.0% [IQR 0.5%–11.5%]; *p* = 0.04). A trend toward larger improvements was seen for reminders that required users to enter a response (median 12.9% [IQR 2.7%–22.8%] v. 2.7% [IQR 0.6%–5.6%]; *p* = 0.09).
**Interpretation:** Computer reminders produced much smaller improvements than those generally expected from the implementation of computerized order entry and electronic medical record systems. Further research is required to identify features of reminder systems consistently associated with clinically worthwhile improvements.

Computerized systems for entering orders and electronic medical records represent two of the most widely recommended improvements in health care. 1 These systems offer the opportunity to improve practice by delivering reminders to clinicians at the point of care. Such reminders range from simple prescribing alerts to more sophisticated support for decision-making.

Previous reviews have classified all computer reminders together, including computer-generated paper reminders and email alerts sent to providers, along with reminders generated at the point of care. 2–5 They have also typically reported the proportion of studies with results that were on balance “positive.” 2–4 We conducted a systematic review to quantify the expected magnitude of improvements in processes of care from computer reminders delivered to physicians during their routine electronic ordering or charting activities.

## Methods

### Data sources

We searched the MEDLINE database (1950 to July 2008) using relevant Medical Subject Headings and combinations of text words such as “computer” or “electronic” with terms such as “reminder,” “prompt,” “alert” and “support.” A methodologic filter identified all potential clinical trials. We similarly searched the Embase and CINAHL databases (both to July 2008). We also retrieved all articles that mentioned computers, reminder systems or decision support from the Cochrane Effective Practice and Organisation of Care registry ([www.epoc.cochrane.org/welcome](http://www.epoc.cochrane.org/welcome)), which covers multiple bibliographic databases. Finally, we scanned reference lists of all included studies and review articles.
For non-English-language articles, we screened English translations of titles and abstracts, pursuing a full-text translation as needed to determine inclusion or exclusion of the study.

### Study selection

Eligible studies evaluated the effects of computer reminders on processes or outcomes of care using a randomized or quasi-randomized controlled design (allocation on the basis of an arbitrary but not truly random process, such as even or odd patient identification numbers). We required that clinicians encounter the reminder during routine performance of the activities of interest, such as prescribing medications or documenting clinical information. Reminders that required clinicians to deviate from their usual activities (e.g., to use a special program without any prompt from the main clinical information system) were excluded because relying on users to remember to call up such resources undermined the core notion of a reminder.

### Outcomes

We focused primarily on improvements in processes of care rather than on clinical outcomes, because we wished to determine the degree to which computer reminders achieved their main goal, namely changing provider behaviour. The degree to which such changes ultimately improve patient outcomes will vary depending on the strength of the relation between targeted processes and clinical outcomes. Consequently, if computer reminders do not improve patient outcomes, this may reflect inadequate connections between the targeted processes and outcomes of care rather than a failure to change physician behaviour. Nonetheless, we did capture clinical outcomes, including intermediate outcomes such as control of blood pressure. We excluded outcomes primarily related to resource use, such as length of hospital stay. We standardized all outcomes so that increases always corresponded to improvements in care.
For instance, if a study reported the proportion of patients who received inappropriate medications, we would record the complementary proportion of patients who received appropriate care.

### Data extraction

For any given article, two of three investigators (K.S., A.J. or A.M.) independently screened the citation for inclusion. They abstracted the following data from included articles: clinical setting, number of participants, methodologic details, characteristics of the computer reminder, the presence of cointerventions, and the results for eligible outcomes. Discrepancies between the two reviewers were resolved by discussion, involving the third reviewer if necessary to achieve consensus.

### Statistical analysis

We anticipated that many studies would assign intervention status at the provider level but would not account for “cluster effects” when analyzing patient-level data. 6,7 Correcting for clustering effects can sometimes be achieved by estimating intraclass correlation coefficients, especially if the primary studies all report the same outcome and at least a minority provide relevant data upon which to base imputations. 8 In this case, however, few studies contained the necessary data, and studies tended to report multiple outcomes, which would have required the additional assumption that correlations within clusters do not vary across different outcomes. To preserve the goal of quantifying the effects of computer reminders without resorting to numerous assumptions and conveying a misleading degree of precision, we focused on the median and interquartile range (IQR) for improvements reported by eligible studies. This method, first used in a large review of strategies for implementing guidelines, 9 has since been applied in Cochrane reviews of interventions to improve practice 10–14 and other systematic reviews of quality improvement interventions.
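The two-stage median summary used in this statistical analysis can be sketched in a few lines of Python. This is an illustrative outline with made-up adherence data, not the authors' analysis code; the `quartiles` helper (median-of-halves convention) is a hypothetical name introduced here:

```python
# Sketch (not the review's actual code) of the two-stage median summary:
# Stage 1 - represent each study by the median absolute improvement across
# its eligible outcomes; Stage 2 - take the median and IQR across studies.
from statistics import median

def quartiles(values):
    """Return (Q1, median, Q3) using the median-of-halves convention."""
    s = sorted(values)
    mid = len(s) // 2
    lower = s[:mid]
    upper = s[mid + 1:] if len(s) % 2 else s[mid:]
    return median(lower), median(s), median(upper)

# Illustrative data: absolute improvements in adherence (percentage points)
# for each eligible outcome, grouped by (hypothetical) study.
studies = {
    "A": [1.2, 3.4, 0.8],
    "B": [18.8, 22.1],
    "C": [4.2],
    "D": [0.5, 2.0, 5.6, 7.1],
}

per_study = [median(v) for v in studies.values()]   # stage 1
q1, med, q3 = quartiles(per_study)                  # stage 2
print(f"median improvement {med:.1f}% (IQR {q1:.1f}%-{q3:.1f}%)")
# → median improvement 4.0% (IQR 2.5%-12.3%)
```

For the sensitivity analyses described above, stage 1 would simply swap `median(v)` for `max(v)` (the best outcome from each study).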
15–18 Quantifying the median improvement involves two distinct uses of “median.” First, to handle multiple outcomes within individual studies, we identified the median improvement across each study’s eligible outcomes. If a study reported 10 adherence-related outcomes, we calculated the median absolute difference in adherence between the intervention and control groups. With each study represented by its median outcome, we then calculated the median effect and IQR across all included studies. For the purposes of sensitivity analyses, we repeated this calculation using the best outcome from each study.

The median and IQR convey the magnitudes of improvement achieved in the majority of studies. This method avoids skewing by a few outlying studies with highly positive results and 95% confidence intervals inappropriately narrowed by ignoring important clustering effects. It also permits nonparametric analyses of potential associations between study features and effect size in order to examine subgroups of studies with larger or smaller magnitudes of effect. For instance, we looked for associations between magnitude of effect and study size, markers of methodologic quality, features of the study context (e.g., ambulatory v. inpatient) and characteristics of the reminders (e.g., requiring users to enter a response before continuing with their work). We performed all such comparisons using a nonparametric Mann–Whitney rank-sum test.

## Results

Of 2036 citations identified, we excluded 1662 at the initial stage of screening and an additional 374 after review of the full-text articles. A total of 28 articles (reporting 32 comparisons) met all of our inclusion criteria (Figure 1). 19–46 The full review has recently been published in *The Cochrane Library*. 47

![Figure1](https://www.cmaj.ca/content/cmaj/182/5/E216/F1.medium.gif)

[Figure1](http://www.cmaj.ca/content/182/5/E216/F1)

**Figure 1:** Results of literature search.
*Excluded topics included expert systems (e.g., artificial intelligence or neural network applications) for facilitating diagnosis or for estimating prognosis; decision support not directly related to patient care (e.g., coding medical records); and reminders directed primarily at nonphysicians.

Of the 32 comparisons, 19 were in the United States and 8 occurred in inpatient settings (Table 1, located at the end of the article). Only six comparisons involved a quasi-randomized design, typically allocating intervention status on the basis of even or odd provider identification numbers. Twenty-six comparisons allocated intervention status to providers or provider groups (cluster trials); 12 of these comparisons accounted for clustering effects in the analysis. Seventeen trials reported a power calculation that included a target effect size. Twelve trials reported a target improvement in adherence to processes of care; 10 of these trials specified an absolute increase of at least 10% (Table 1).
View this table: [Table1](http://www.cmaj.ca/content/182/5/E216/T1)

**Table 1:** Description of 28 studies (32 comparisons) included in a systematic review of the effects of point-of-care computer reminders on physician behaviour (part 1 of 3)

View this table: [Table2](http://www.cmaj.ca/content/182/5/E216/T2)

**Table 1:** Description of 28 studies (32 comparisons) included in a systematic review of the effects of point-of-care computer reminders on physician behaviour (part 2 of 3)

View this table: [Table3](http://www.cmaj.ca/content/182/5/E216/T3)

**Table 1:** Description of 28 studies (32 comparisons) included in a systematic review of the effects of point-of-care computer reminders on physician behaviour (part 3 of 3)

Figure 2 displays the median improvements in adherence to processes of care for each included study (for details about the results from each study, see Appendix 1, available at [www.cmaj.ca/cgi/content/full/cmaj.090578/DC1](http://www.cmaj.ca/cgi/content/full/cmaj.090578/DC1)). Pooling data across studies (Table 2), we found that the median improvement in adherence associated with computer reminders was 4.2% (IQR 0.8%–18.8%). Prescribing behaviours improved by a median of 3.3% (IQR 0.5%–10.6% [21 trials]), adherence to target vaccinations by 3.8% (IQR 0.5%–6.6% [6 trials]) and test-ordering behaviours by 3.8% (IQR 0.4%–16.3% [13 trials]). Table 2 also shows the results obtained when we used the best outcome from each study instead of the median improvement.

View this table: [Table4](http://www.cmaj.ca/content/182/5/E216/T4)

**Table 2:** Improvements in adherence to processes of care across the 28 studies (32 comparisons) included in the review

![Figure2](https://www.cmaj.ca/content/cmaj/182/5/E216/F2.medium.gif)

[Figure2](http://www.cmaj.ca/content/182/5/E216/F2)

**Figure 2:** Median absolute improvements in adherence to processes of care between intervention and control groups in each study.
Each study is represented by the median and interquartile range for its reported outcomes; studies with single data points reported only one eligible outcome.

Across eight comparisons that reported dichotomous clinical outcomes (e.g., achievement of target treatment goals), patients in the intervention groups experienced a median absolute improvement of 2.5% (IQR 1.3%–4.2%). For blood pressure control, the single most commonly reported outcome, patients in the intervention groups experienced a median reduction in systolic blood pressure of 1.0 mm Hg (IQR 2.3 mm Hg reduction to 2.0 mm Hg increase) and a median reduction in diastolic blood pressure of 0.2 mm Hg (IQR 0.8 mm Hg reduction to 1.0 mm Hg increase).

### Study features and effect size

We found no significant correlation between effect size and the following study features: publication year, country (United States v. other), study design (randomized v. quasi-randomized) or sample size (whether calculated on the basis of patients or providers) (Figure 3). We considered that studies with high adherence rates in control groups (a marker for baseline adherence) might achieve smaller improvements in care, because they had smaller opportunities for improvement. Surprisingly, studies with control-group adherence rates that were higher than the median across all studies showed larger effect sizes (Figure 3). When we analyzed the potential impact of baseline adherence in various other ways (e.g., focusing on the highest and lowest quartiles of baseline adherence), we found no evidence that small improvements reflected high baseline quality of care.

![Figure3](https://www.cmaj.ca/content/cmaj/182/5/E216/F3.medium.gif)

[Figure3](http://www.cmaj.ca/content/182/5/E216/F3)

**Figure 3:** Median effects for adherence to processes of care by study feature. *Kruskal–Wallis test; all other *p* values reflect Mann–Whitney test.
†Quasi-RCT refers to randomized controlled trials in which intervention status was assigned on the basis of an arbitrary but not truly random process, such as even or odd patient (or provider) identification numbers. ‡The total number of comparisons for the analysis of sample size is 31 because one study did not report the number of patients. §Studies classified as having no cointervention were those in which a computer reminder alone was compared with usual care; studies classified as having cointerventions were those in which the intervention group received a computer reminder plus one or more other quality improvement interventions, while the control group received those same quality improvement interventions but no computer reminder.

We observed a trend toward larger improvements with inpatient interventions than with outpatient interventions (median 8.7% [IQR 2.7%–22.7%] v. 3.0% [IQR 0.6%–11.5%]; *p* = 0.34). All inpatient interventions occurred at two institutions that had well-developed, “homegrown” computerized systems for order entry by providers. Moreover, the recipients of computer reminders at these institutions consisted primarily of physician trainees. Our grouping of studies on the basis of track records in clinical informatics did not result in significant differences, except that the studies from Brigham and Women’s Hospital in Boston, USA, reported a median improvement of 16.8% (IQR 8.7%–26.0%), 26,31,37,40,46 compared with 3.0% (IQR 0.5%–11.5%) for studies from the other institutions (*p* = 0.04).

### Features of computer reminders and effect size

We analyzed a number of reminder characteristics to look for associations with effect size (Figure 4). Only the requirement for providers to enter a response to the reminder showed a trend toward larger improvements (median 12.9% [IQR 2.7%–22.7%] v. 2.7% [IQR 0.6%–5.6%] for no response required; *p* = 0.09).
No trends toward larger effect sizes existed based on the type of targeted problem (underuse v. overuse of a targeted process of care), inclusion of patient-specific information, provision of an explanation for the alert, inclusion of a specific recommendation with the alert, development of the reminder by the study authors, or the type of system used to deliver the reminder (CPOE [computerized provider order entry] v. electronic medical records).

![Figure4](https://www.cmaj.ca/content/cmaj/182/5/E216/F4.medium.gif)

[Figure4](http://www.cmaj.ca/content/182/5/E216/F4)

**Figure 4:** Median effects for adherence to processes of care by reminder feature. *Underuse = targeting improvements to increase the percentage of patients who receive a targeted process of care (e.g., increasing the percentage of patients receiving the influenza vaccine); overuse = targeting improvements to reduce the percentage of patients receiving inappropriate care (e.g., reducing the percentage of patients who receive antibiotics for viral upper respiratory tract infections). †Reminders with no patient-specific information were those triggered on the basis of demographic characteristics (e.g., age) or the intent to order a medication or investigation, irrespective of any features of the patient involved or patient-specific laboratory results. The sample size is reduced because of the inability to accurately assess the presence or absence of the feature. ‡Active delivery refers to reminders that appeared automatically when triggering conditions were met, as opposed to passive reminders, where, for instance, users might be presented with the option to click on a link to receive decision support related to their current task. §CPOE = computerized provider order entry system; reminder systems without CPOE were typically electronic medical record systems.
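Subgroup contrasts of this kind reduce to a rank-sum test on two sets of per-study improvements. A minimal sketch using SciPy's `mannwhitneyu`; the two groups below are illustrative numbers, not data extracted from the review:

```python
# Sketch of the nonparametric subgroup comparison used in the review:
# a Mann-Whitney rank-sum test on per-study absolute improvements.
# The values below are illustrative only.
from scipy.stats import mannwhitneyu

# Per-study improvements (percentage points) in two hypothetical subgroups,
# e.g. reminders that required a user response v. those that did not.
response_required = [12.9, 22.7, 2.7, 16.8, 8.7]
no_response = [2.7, 0.6, 5.6, 3.0, 0.5, 1.2]

stat, p = mannwhitneyu(response_required, no_response, alternative="two-sided")
print(f"U = {stat}, two-sided p = {p:.3f}")
```

Because the test ranks studies rather than weighting them by size, it matches the review's equal-weight, median-based summary of effects.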
Reminders that were “pushed” onto users (i.e., users automatically received the reminder) did not achieve larger effects than reminders that required users to perform some action to receive them (i.e., users had to “pull” the reminders); only 4 of the 32 comparisons involved “pull” reminders. A three-armed cluster randomized controlled trial of reminders for screening and treatment of hyperlipidemia 45 directly compared these two modes of delivering reminders. Patients cared for at practices randomly assigned to deliver automatic alerts were more likely to undergo testing for hyperlipidemia and receive treatment than were patients at clinics where reminders were delivered to clinicians only “on demand.”

### Sensitivity analyses

We re-analyzed the potential predictors of effect size (study features and characteristics of reminders) using a variety of choices for the representative outcome from each study, including the outcome with the middle value (rather than a calculated median) and the best outcome (the outcome associated with the largest improvement in adherence to the process). None of these analyses substantially altered the main findings.

## Interpretation

Across the 32 comparisons, computer reminders achieved small to modest improvements in care, with a median improvement of 4.2% (IQR 0.8%–18.8%). Even using the best outcome from each trial, the median improvement was only 5.6% (IQR 2.0%–19.2%). These changes fall below the thresholds for clinically significant improvements specified in most trials, and they are certainly smaller than the improvements generally expected from computerized order entry and electronic medical record systems. Interestingly, these improvements are also no larger than those observed for paper-based reminders. 5,48 With the upper quartile of reported improvements beginning at an almost 20% increase in adherence to processes of care, some studies in our review clearly did show larger effects.
However, we were unable to identify any study characteristic or reminder feature that predicted larger effect sizes, except for a statistically significant increase in magnitude of effect seen in studies involving a well-developed, homegrown computerized order entry system at Brigham and Women’s Hospital. 26,31,37,40,46 A trend toward larger effects was also seen for reminders that required users to enter a response in order to proceed; however, this finding may have been confounded by the uneven distribution of studies from Brigham and Women’s Hospital. Thus, we do not know whether the success of computer reminders at this institution reflects the design of reminders requiring user responses, other features of the computer system or perhaps institutional culture.

Included studies often provided limited descriptions of key features of the reminders and the systems through which they were delivered. We attempted to overcome this problem by abstracting basic features, such as whether user responses were required and whether the reminder displayed a justification for its content. But heterogeneity within even these apparently straightforward categories could mask important differences in effect. Important differences in effect may also reflect characteristics that we found difficult to operationalize (e.g., the “complexity” of the reminder) or that were inadequately reported. This problem of limited descriptive detail of complex interventions, and the resulting potential for heterogeneity among included interventions in systematic reviews, has been consistently encountered in the quality-improvement literature. 49,50

Conventional meta-analyses estimate mean effects and 95% confidence intervals by calculating weighted averages across study results. The individual weights derive from study precision, such that larger studies contribute greater weight to the meta-analytic result.
However, more than half of the studies included in our review reported spuriously high precision, and most of the studies did not report the data required to adjust for this problem. For example, of the 26 clustered trials, only 9 provided a single value for the intracluster correlation coefficient, and only 3 reported values for all outcomes. Because we could not accurately weight studies based on precision, we focused on the median and interquartile range for study effects, a method that has found increasing application in systematic reviews of interventions for quality improvement. 9,13–15,17,18,51

The main potential drawback of this method is that we assigned equal weight to all of the studies. However, for our results to have substantially misrepresented the true impact of computer reminders, the minority of studies with large magnitudes of effect would also have to be the larger studies (and thus deserving of greater weight in a meta-analysis). Not only is this unlikely in general, but we also specifically showed that study size bore no relation to effect size, using various definitions of study and effect size.

### Conclusion

Computer reminders typically increased adherence to target processes of care by amounts below thresholds for clinically significant improvements. A minority of studies showed more substantial improvements, consistent with the expectations of those who advocate widespread adoption of computerized order entry and electronic medical record systems. However, until further research identifies study design and reminder features that reliably predict clinically worthwhile improvements in care, implementing these technologies will constitute an expensive exercise in trial and error.

## Footnotes

* Previously published at [www.cmaj.ca](http://www.cmaj.ca)

* See also research article by Villeneuve and colleagues

* This article has been peer reviewed.

**Competing interests:** None declared.
**Contributors:** Kaveh Shojania and Jeremy Grimshaw conceived the study. All of the authors contributed to refinements of the study design and to the analysis and interpretation of the data. Kaveh Shojania drafted the initial manuscript, and all of the other authors provided critical revisions. All of the authors approved the final manuscript submitted for publication. Kaveh Shojania is the guarantor for this paper.

**Funding:** Kaveh Shojania and Jeremy Grimshaw received salary support from the Government of Canada Research Chairs Program. Craig Ramsay’s position in the Health Services Research Unit is funded in part by the Chief Scientist Office of the Scottish Government Health Department. Alain Mayhew receives salary support from the Canadian Institutes of Health Research. The views expressed are those of the authors and not the funding agencies.

## REFERENCES

1. Aspden P, Wolcott JA, Bootman JL, et al.; Committee on Identifying and Preventing Medication Errors. *Preventing medication errors: quality chasm series*. Washington (DC): The National Academies Press; 2006.
2. Garg AX, Adhikari NK, McDonald H, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 2005;293:1223–38.
3. Hunt DL, Haynes RB, Hanna SE, et al. Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review. JAMA 1998;280:1339–46.
4. Kawamoto K, Houlihan CA, Balas EA, et al. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 2005;330:765.
5. Dexheimer JW, Talbot TR, Sanders DL, et al. Prompting clinicians about preventive care measures: a systematic review of randomized controlled trials. J Am Med Inform Assoc 2008;15:311–20.
6. Whiting-O’Keefe QE, Henke C, Simborg DW. Choosing the correct unit of analysis in medical care experiments. Med Care 1984;22:1101–14.
7. Donner A, Donald A. Analysis of data arising from a stratified design with the cluster as unit of randomization. Stat Med 1987;6:43–52.
8. Shojania KG, Ranji SR, McDonald KM, et al. Effects of quality improvement strategies for type 2 diabetes on glycemic control: a meta-regression analysis. JAMA 2006;296:427–40.
9. Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. *Health Technol Assess* 2004;8:iii–iv, 1–72.
10. Doumit G, Gattellari M, Grimshaw J, et al. Local opinion leaders: effects on professional practice and health care outcomes [review]. *Cochrane Database Syst Rev* 2007;(1):CD000125.
11. Farmer AP, Legare F, Turcot L, et al. Printed educational materials: effects on professional practice and health care outcomes [review]. *Cochrane Database Syst Rev* 2008;(3):CD004398.
12. Forsetlund L, Bjorndal A, Rashidian A, et al. Continuing education meetings and workshops: effects on professional practice and health care outcomes [review]. *Cochrane Database Syst Rev* 2009;(2):CD003030.
13. Jamtvedt G, Young JM, Kristoffersen DT, et al. Audit and feedback: effects on professional practice and health care outcomes [review]. *Cochrane Database Syst Rev* 2006;(2):CD000259.
14. O’Brien MA, Rogers S, Jamtvedt G, et al. Educational outreach visits: effects on professional practice and health care outcomes [review]. *Cochrane Database Syst Rev* 2007;(4):CD000409.
15. Ranji SR, Steinman MA, Shojania KG, et al. Interventions to reduce unnecessary antibiotic prescribing: a systematic review and quantitative analysis. Med Care 2008;46:847–62.
16. Shojania KG, McDonald KM, Wachter RM, et al., editors. *Closing the quality gap: a critical analysis of quality improvement strategies. Volume 1 — series overview and methodology* [technical review 9; AHRQ publication no. 04-0051-1]. Rockville (MD): Agency for Healthcare Research and Quality; 2004. Available: [www.ahrq.gov/downloads/pub/evidence/pdf/qualgap1/qualgap1.pdf](http://www.ahrq.gov/downloads/pub/evidence/pdf/qualgap1/qualgap1.pdf) (accessed 2009 Nov. 26).
17. Steinman MA, Ranji SR, Shojania KG, et al. Improving antibiotic selection: a systematic review and quantitative analysis of quality improvement strategies. Med Care 2006;44:617–28.
18. Walsh JM, McDonald KM, Shojania KG, et al. Quality improvement strategies for hypertension management: a systematic review. Med Care 2006;44:646–57.
19. Bates DW, Kuperman GJ, Rittenberg E, et al. A randomized trial of a computer-based intervention to reduce utilization of redundant laboratory tests. Am J Med 1999;106:144–50.
20. Christakis DA, Zimmerman FJ, Wright JA, et al. A randomized controlled trial of point-of-care evidence to improve the antibiotic prescribing practices for otitis media in children. Pediatrics 2001;107:E15.
21. Dexter PR, Perkins S, Overhage JM, et al. A computerized reminder system to increase the use of preventive care for hospitalized patients. N Engl J Med 2001;345:965–70.
22. Eccles M, McColl E, Steen N, et al. Effect of computerised evidence based guidelines on management of asthma and angina in adults in primary care: cluster randomised controlled trial. BMJ 2002;325:941.
23. Filippi A, Sabatini A, Badioli L, et al. Effects of an automated electronic reminder in changing the antiplatelet drug-prescribing behavior among Italian general practitioners in diabetic patients: an intervention trial. Diabetes Care 2003;26:1497–500.
24. Flottorp S, Oxman AD, Havelsrud K, et al. Cluster randomised controlled trial of tailored interventions to improve the management of urinary tract infections in women and sore throat. BMJ 2002;325:367.
25. Frank O, Litt J, Beilby J. Opportunistic electronic reminders. Improving performance of preventive care in general practice. Aust Fam Physician 2004;33:87–90.
26. Hicks LS, Sequist TD, Ayanian JZ, et al. Impact of computerized decision support on blood pressure management and control: a randomized controlled trial. J Gen Intern Med 2008;23:429–41.
27. Judge J, Field TS, DeFlorio M, et al. Prescribers’ responses to alerts during medication ordering in the long term care setting. J Am Med Inform Assoc 2006;13:385–90.
28. Kenealy T, Arroll B, Petrie KJ. Patients and computers as reminders to screen for diabetes in family practice. Randomized-controlled trial. J Gen Intern Med 2005;20:916–21.
29. Kralj B, Iverson D, Hotz K, et al. The impact of computerized clinical reminders on physician prescribing behavior: evidence from community oncology practice. Am J Med Qual 2003;18:197–203.
30. Krall MA, Traunweiser K, Towery W. Effectiveness of an electronic medical record clinical quality alert prepared by off-line data analysis. Stud Health Technol Inform 2004;107:135–9.
31. Kucher N, Koo S, Quiroz R, et al. Electronic alerts to prevent venous thromboembolism among hospitalized patients. N Engl J Med 2005;352:969–77.
[CrossRef](http://www.cmaj.ca/lookup/external-ref?access_num=10.1056/NEJMoa041533&link_type=DOI) [PubMed](http://www.cmaj.ca/lookup/external-ref?access_num=15758007&link_type=MED&atom=%2Fcmaj%2F182%2F5%2FE216.atom) [Web of Science](http://www.cmaj.ca/lookup/external-ref?access_num=000227491200005&link_type=ISI) 32. 32. McCowan C, Neville RG, Ricketts IW, et al. Lessons from a randomized controlled trial designed to evaluate computer decision support software to improve the management of asthma. Med Inform Internet Med 2001;26:191–201. [CrossRef](http://www.cmaj.ca/lookup/external-ref?access_num=10.1080/14639230110067890&link_type=DOI) [PubMed](http://www.cmaj.ca/lookup/external-ref?access_num=11706929&link_type=MED&atom=%2Fcmaj%2F182%2F5%2FE216.atom) [Web of Science](http://www.cmaj.ca/lookup/external-ref?access_num=000171878100004&link_type=ISI) 33. 33. Meigs JB, Cagliero E, Dubey A, et al. A controlled trial of web-based diabetes disease management: the MGH diabetes primary care improvement project. Diabetes Care 2003;26:750–7. [Abstract/FREE Full Text](http://www.cmaj.ca/lookup/ijlink/YTozOntzOjQ6InBhdGgiO3M6MTQ6Ii9sb29rdXAvaWpsaW5rIjtzOjU6InF1ZXJ5IjthOjQ6e3M6ODoibGlua1R5cGUiO3M6NDoiQUJTVCI7czoxMToiam91cm5hbENvZGUiO3M6NzoiZGlhY2FyZSI7czo1OiJyZXNpZCI7czo4OiIyNi8zLzc1MCI7czo0OiJhdG9tIjtzOjIxOiIvY21hai8xODIvNS9FMjE2LmF0b20iO31zOjg6ImZyYWdtZW50IjtzOjA6IiI7fQ==) 34. 34. Overhage JM, Tierney WM, McDonald CJ. Computer reminders to implement preventive care guidelines for hospitalized patients. Arch Intern Med 1996;156:1551–6. [CrossRef](http://www.cmaj.ca/lookup/external-ref?access_num=10.1001/archinte.1996.00440130095010&link_type=DOI) [PubMed](http://www.cmaj.ca/lookup/external-ref?access_num=8687263&link_type=MED&atom=%2Fcmaj%2F182%2F5%2FE216.atom) [Web of Science](http://www.cmaj.ca/lookup/external-ref?access_num=A1996UY33200008&link_type=ISI) 35. 35. Overhage JM, Tierney WM, Zhou XH, et al. A randomized trial of “corollary orders” to prevent errors of omission. 
J Am Med Inform Assoc 1997;4:364–75. [Abstract/FREE Full Text](http://www.cmaj.ca/lookup/ijlink/YTozOntzOjQ6InBhdGgiO3M6MTQ6Ii9sb29rdXAvaWpsaW5rIjtzOjU6InF1ZXJ5IjthOjQ6e3M6ODoibGlua1R5cGUiO3M6NDoiQUJTVCI7czoxMToiam91cm5hbENvZGUiO3M6NzoiamFtaW5mbyI7czo1OiJyZXNpZCI7czo3OiI0LzUvMzY0IjtzOjQ6ImF0b20iO3M6MjE6Ii9jbWFqLzE4Mi81L0UyMTYuYXRvbSI7fXM6ODoiZnJhZ21lbnQiO3M6MDoiIjt9) 36. 36. Peterson JF, Rosenbaum BP, Waitman LR, et al. Physicians’ response to guided geriatric dosing: initial results from a randomized trial. Stud Health Technol Inform 2007;129:1037–40. [PubMed](http://www.cmaj.ca/lookup/external-ref?access_num=17911873&link_type=MED&atom=%2Fcmaj%2F182%2F5%2FE216.atom) 37. 37. Rothschild JM, McGurk S, Honour M, et al. Assessment of education and computerized decision support interventions for improving transfusion practice. Transfusion 2007;47:228–39. [CrossRef](http://www.cmaj.ca/lookup/external-ref?access_num=10.1111/j.1537-2995.2007.01093.x&link_type=DOI) [PubMed](http://www.cmaj.ca/lookup/external-ref?access_num=17302768&link_type=MED&atom=%2Fcmaj%2F182%2F5%2FE216.atom) [Web of Science](http://www.cmaj.ca/lookup/external-ref?access_num=000243638400011&link_type=ISI) 38. 38. Roumie CL, Elasy TA, Greevy R, et al. Improving blood pressure control through provider education, provider alerts, and patient education: a cluster randomized trial. Ann Intern Med 2006;145:165–75. [CrossRef](http://www.cmaj.ca/lookup/external-ref?access_num=10.7326/0003-4819-145-3-200608010-00004&link_type=DOI) [PubMed](http://www.cmaj.ca/lookup/external-ref?access_num=16880458&link_type=MED&atom=%2Fcmaj%2F182%2F5%2FE216.atom) [Web of Science](http://www.cmaj.ca/lookup/external-ref?access_num=000239526800002&link_type=ISI) 39. 39. Safran C, Rind DM, Davis RB, et al. A clinical trial of a knowledge-based medical record. Medinfo 1995;8:1076–80. [PubMed](http://www.cmaj.ca/lookup/external-ref?access_num=8591371&link_type=MED&atom=%2Fcmaj%2F182%2F5%2FE216.atom) 40. 40. 
Sequist TD, Gandhi TK, Karson AS, et al. A randomized trial of electronic clinical reminders to improve quality of care for diabetes and coronary artery disease. J Am Med Inform Assoc 2005;12:431–7. [Abstract/FREE Full Text](http://www.cmaj.ca/lookup/ijlink/YTozOntzOjQ6InBhdGgiO3M6MTQ6Ii9sb29rdXAvaWpsaW5rIjtzOjU6InF1ZXJ5IjthOjQ6e3M6ODoibGlua1R5cGUiO3M6NDoiQUJTVCI7czoxMToiam91cm5hbENvZGUiO3M6NzoiamFtaW5mbyI7czo1OiJyZXNpZCI7czo4OiIxMi80LzQzMSI7czo0OiJhdG9tIjtzOjIxOiIvY21hai8xODIvNS9FMjE2LmF0b20iO31zOjg6ImZyYWdtZW50IjtzOjA6IiI7fQ==) 41. 41. Tamblyn R, Huang A, Perreault R, et al. The medical office of the 21st century (MOXXI): effectiveness of computerized decision-making support in reducing inappropriate prescribing in primary care. CMAJ 2003;169:549–56. [Abstract/FREE Full Text](http://www.cmaj.ca/lookup/ijlink/YTozOntzOjQ6InBhdGgiO3M6MTQ6Ii9sb29rdXAvaWpsaW5rIjtzOjU6InF1ZXJ5IjthOjQ6e3M6ODoibGlua1R5cGUiO3M6NDoiQUJTVCI7czoxMToiam91cm5hbENvZGUiO3M6NDoiY21haiI7czo1OiJyZXNpZCI7czo5OiIxNjkvNi81NDkiO3M6NDoiYXRvbSI7czoyMToiL2NtYWovMTgyLzUvRTIxNi5hdG9tIjt9czo4OiJmcmFnbWVudCI7czowOiIiO30=) 42. 42. Tape TG, Campbell JR. Computerized medical records and preventive health care: success depends on many factors. Am J Med 1993;94:619–25. [CrossRef](http://www.cmaj.ca/lookup/external-ref?access_num=10.1016/0002-9343(93)90214-A&link_type=DOI) [PubMed](http://www.cmaj.ca/lookup/external-ref?access_num=8506888&link_type=MED&atom=%2Fcmaj%2F182%2F5%2FE216.atom) [Web of Science](http://www.cmaj.ca/lookup/external-ref?access_num=A1993LG67600010&link_type=ISI) 43. 43. Tierney WM, Overhage JM, Murray MD, et al. Effects of computerized guidelines for managing heart disease in primary care. J Gen Intern Med 2003;18:967–76. 
[CrossRef](http://www.cmaj.ca/lookup/external-ref?access_num=10.1111/j.1525-1497.2003.30635.x&link_type=DOI) [PubMed](http://www.cmaj.ca/lookup/external-ref?access_num=14687254&link_type=MED&atom=%2Fcmaj%2F182%2F5%2FE216.atom) [Web of Science](http://www.cmaj.ca/lookup/external-ref?access_num=000187450400001&link_type=ISI) 44. 44. Tierney WM, Overhage JM, Murray MD, et al. Can computer-generated evidence-based care suggestions enhance evidence-based management of asthma and chronic obstructive pulmonary disease? A randomized, controlled trial. Health Serv Res 2005;40:477–97. [CrossRef](http://www.cmaj.ca/lookup/external-ref?access_num=10.1111/j.1475-6773.2005.0t369.x&link_type=DOI) [PubMed](http://www.cmaj.ca/lookup/external-ref?access_num=15762903&link_type=MED&atom=%2Fcmaj%2F182%2F5%2FE216.atom) [Web of Science](http://www.cmaj.ca/lookup/external-ref?access_num=000235590000012&link_type=ISI) 45. 45. van Wyk JT, van Wijk MA, Sturkenboom MC, et al. Electronic alerts versus on-demand decision support to improve dyslipidemia treatment: a cluster randomized controlled trial. Circulation 2008;117:371–8. [Abstract/FREE Full Text](http://www.cmaj.ca/lookup/ijlink/YTozOntzOjQ6InBhdGgiO3M6MTQ6Ii9sb29rdXAvaWpsaW5rIjtzOjU6InF1ZXJ5IjthOjQ6e3M6ODoibGlua1R5cGUiO3M6NDoiQUJTVCI7czoxMToiam91cm5hbENvZGUiO3M6MTQ6ImNpcmN1bGF0aW9uYWhhIjtzOjU6InJlc2lkIjtzOjk6IjExNy8zLzM3MSI7czo0OiJhdG9tIjtzOjIxOiIvY21hai8xODIvNS9FMjE2LmF0b20iO31zOjg6ImZyYWdtZW50IjtzOjA6IiI7fQ==) 46. 46. Zanetti G, Flanagan HL Jr, Cohn LH, et al. Improvement of intraoperative antibiotic prophylaxis in prolonged cardiac surgery by automated alerts in the operating room. Infect Control Hosp Epidemiol 2003;24:13–6. [CrossRef](http://www.cmaj.ca/lookup/external-ref?access_num=10.1086/502109&link_type=DOI) [PubMed](http://www.cmaj.ca/lookup/external-ref?access_num=12558230&link_type=MED&atom=%2Fcmaj%2F182%2F5%2FE216.atom) [Web of Science](http://www.cmaj.ca/lookup/external-ref?access_num=000180447900004&link_type=ISI) 47. 
47. Shojania KG, Jennings A, Mayhew A, et al. The effects of on-screen, point of care computer reminders on processes and outcomes of care [review]. *Cochrane Database Syst Rev* 2009;(3)CD001096. 48. 48. Balas EA, Weingarten S, Garb CT, et al. Improving preventive care by prompting physicians. Arch Intern Med 2000;160:301–8. [CrossRef](http://www.cmaj.ca/lookup/external-ref?access_num=10.1001/archinte.160.3.301&link_type=DOI) [PubMed](http://www.cmaj.ca/lookup/external-ref?access_num=10668831&link_type=MED&atom=%2Fcmaj%2F182%2F5%2FE216.atom) [Web of Science](http://www.cmaj.ca/lookup/external-ref?access_num=000085134000004&link_type=ISI) 49. 49. Shojania KG, Grimshaw JM. Evidence-based quality improvement: the state of the science. *Health Aff (Millwood)* 2005;24:138–50. 50. 50. Grimshaw J, McAuley LM, Bero LA, et al. Systematic reviews of the effectiveness of quality improvement strategies and programmes. Qual Saf Health Care 2003; 12:298–303. [Abstract/FREE Full Text](http://www.cmaj.ca/lookup/ijlink/YTozOntzOjQ6InBhdGgiO3M6MTQ6Ii9sb29rdXAvaWpsaW5rIjtzOjU6InF1ZXJ5IjthOjQ6e3M6ODoibGlua1R5cGUiO3M6NDoiQUJTVCI7czoxMToiam91cm5hbENvZGUiO3M6MzoicWhjIjtzOjU6InJlc2lkIjtzOjg6IjEyLzQvMjk4IjtzOjQ6ImF0b20iO3M6MjE6Ii9jbWFqLzE4Mi81L0UyMTYuYXRvbSI7fXM6ODoiZnJhZ21lbnQiO3M6MDoiIjt9) 51. 51. Shojania KG, Ranji SR, Shaw LK, et al. *Diabetes mellitus care*. Vol. 2 of Shojania KG, McDonald KM, Wachter RM, et al., editors. *Closing the quality gap: a critical analysis of quality improvement strategies* [technical review 9; AHRQ publication no. 04-0051-2]. Rockville (MD): Agency for Healthcare Research and Quality; 2004. Available: [www.ahrq.gov/downloads/pub/evidence/pdf/qualgap2/qualgap2.pdf](http://www.ahrq.gov/downloads/pub/evidence/pdf/qualgap2/qualgap2.pdf) (accessed 2009 Nov. 26).