Review

Monitoring use of knowledge and evaluating outcomes

Sharon E. Straus, Jacqueline Tetroe, Ian D. Graham, Merrick Zwarenstein, Onil Bhattacharyya and Sasha Shepperd
CMAJ February 09, 2010 182 (2) E94-E98; DOI: https://doi.org/10.1503/cmaj.081335

Monitoring use of knowledge

In the knowledge-to-action cycle, after the intervention related to knowledge translation has been implemented, uptake of knowledge should be monitored.1 This step is necessary to determine how and to what extent the knowledge is used by decision-makers.1 How we measure uptake of knowledge depends on how we define knowledge and its use, and on the perspective of the knowledge user. In this paper, we discuss approaches to monitoring use of knowledge and evaluating its impact, based on a systematic review of the literature.

Several classifications of knowledge use exist.2–6 We find it useful to consider conceptual, instrumental and persuasive use of knowledge.1 Conceptual use of knowledge implies changes in knowledge, understanding or attitudes: research may change thinking and inform decision-making without changing practice. For example, based on knowledge that self-monitoring of blood glucose in newly diagnosed patients with type 2 diabetes mellitus is not cost-effective and is associated with lower quality of life,7,8 we understand a newly diagnosed patient's concern about self-monitoring.

Instrumental use of knowledge is the concrete application of knowledge and describes changes in behaviour or practice.1 Knowledge can be translated into a usable form, such as a care pathway, and used in making a specific decision. For example, we could measure how often a clinician orders prophylaxis for deep venous thrombosis in appropriate patients admitted to the intensive care unit.

Persuasive use of knowledge, also called strategic or symbolic use, refers to research being used as a political or persuasive tool: knowledge deployed to attain specific power or profit (i.e., knowledge as ammunition).1 For example, we might use our knowledge of adverse events associated with mechanical restraints on agitated inpatients to persuade the nursing manager on the medical ward to develop a ward protocol governing their use.

How can use of knowledge be measured?

Many tools exist for assessing use of knowledge. Dunn3 completed an inventory of tools for conducting research on use of knowledge and identified 65 strategies, but most have unknown validity or reliability. Tools for measuring use of knowledge most frequently capture instrumental use,9 and they often rely on self-report, which is subject to recall bias. For example, a case study described the adoption by call-centre nurses of a protocol for decision-making support.10 Eleven of 25 nurses who were surveyed said they used the tool in practice. Potential limitations of this study include recall bias and a short period of follow-up (i.e., one month) without repeated observation.10 In a more valid assessment of instrumental use of knowledge, participants underwent a quality-based assessment of their coaching skills during simulated calls to determine how often the protocol for decision-making support was used.11

Instrumental use of knowledge can also be assessed by measuring adherence to recommendations or quality indicators. Grol12 completed a series of studies involving family physicians in the Netherlands who recorded their adherence to 30 national guidelines. Three hundred forty-two indicators of adherence were constructed, and physicians received educational sessions on how to record their performance on these indicators. Computer software was developed to relate performance to clinical conditions to assess adherence.12 More simply, we could determine, through a chart-based audit, how often we prescribe β-blockers in appropriate patients with heart failure, as in the sketch below.
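
The arithmetic behind such an audit is simple: among the patients to whom the recommendation applies, count how many received the recommended care. The following minimal sketch, in Python, illustrates the calculation; the records and field names are entirely hypothetical and are not drawn from the studies cited above.

    # Minimal sketch of a chart-based adherence audit.
    # All records and field names are hypothetical.
    def adherence_rate(charts, eligible_key, action_key):
        """Proportion of eligible patients who received the recommended care."""
        eligible = [c for c in charts if c[eligible_key]]
        if not eligible:
            return None  # no eligible patients in the audit sample
        return sum(1 for c in eligible if c[action_key]) / len(eligible)

    # Example: beta-blocker prescribing in appropriate patients with heart failure.
    audit = [
        {"hf_appropriate": True, "beta_blocker_prescribed": True},
        {"hf_appropriate": True, "beta_blocker_prescribed": False},
        {"hf_appropriate": False, "beta_blocker_prescribed": False},
        {"hf_appropriate": True, "beta_blocker_prescribed": True},
    ]
    print(adherence_rate(audit, "hf_appropriate", "beta_blocker_prescribed"))
    # 0.666... (2 of 3 appropriate patients received a beta-blocker)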

We also need to consider who the targets of knowledge use are (i.e., the public, health care professionals, policy-makers), because different targets may require different strategies for monitoring use of knowledge. Assessing use of knowledge by policy-makers may require strategies such as interviews and analysis of documents (e.g., reviewing policies to assess use of evidence).13 When assessing use of knowledge by physicians, we could measure use of care pathways or ordering of relevant medications, often through administrative or clinical databases. When measuring use of knowledge by the public, we could measure patients' attitudes through surveys or their use of resources through administrative databases.

What is the target level of use of knowledge that we are aiming for? This target is based on discussions with stakeholders and includes consideration of what is acceptable and feasible and whether a ceiling effect may exist.14 If the degree of use of knowledge is found to be adequate, strategies for monitoring sustained use of knowledge should be considered. If it is less than expected or desired, barriers to uptake may need to be reassessed.

When should we measure use of knowledge rather than the impact of its use? If the implementation-related intervention targets a behaviour for which strong evidence of benefit exists, it may be appropriate to measure whether the behaviour occurred rather than whether clinical outcomes changed.15 A strategy to implement the guidelines of Osteoporosis Canada in a community setting was recently studied.16 The primary outcome of this randomized trial was appropriate use of medications for osteoporosis (i.e., instrumental use of knowledge) rather than fractures (i.e., a clinical outcome). The researchers reasoned that, because sufficient evidence supports the use of osteoporosis medication to prevent fragility fractures, they did not need to measure fractures as the primary outcome. In such instances, measuring outcomes at the patient level could be prohibitively expensive, but forgoing patient-level measurement leaves unanswered whether the intervention improves relevant clinical outcomes.

Evaluating the impact of use of knowledge

The next phase of the knowledge-to-action cycle is to determine the impact of use of knowledge on health, provider and system outcomes.1 Although assessing use of knowledge is important, such use is of particular interest if it influences important clinical measures such as quality indicators.

Evaluation should start with formulating the question. We find the PICO framework17 useful here. In this framework, "P" refers to the population of interest, which could be the public, health care providers or policy-makers; "I" refers to the intervention that was implemented, which might be compared against a comparison group ("C"); and "O" refers to the outcome of interest, which could be a health-related, provider-related or organizational outcome. A worked example follows.
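
As a worked example, consider the osteoporosis guideline trial described earlier, framed with PICO. The following minimal sketch, in Python, is our own illustration; the dataclass and its entries are hypothetical and not part of any published framework.

    # Minimal sketch: structuring an evaluation question with PICO.
    # The class and its entries are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class PICOQuestion:
        population: str    # "P": the public, providers or policy-makers
        intervention: str  # "I": the knowledge translation intervention
        comparison: str    # "C": the comparison group, if any
        outcome: str       # "O": health-, provider- or organization-related

    question = PICOQuestion(
        population="community-dwelling patients eligible for osteoporosis therapy",
        intervention="strategy to implement the Osteoporosis Canada guidelines",
        comparison="usual care",
        outcome="appropriate use of osteoporosis medications (instrumental use)",
    )
    print(question)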

The strategies described above for considering use of knowledge can be used to frame outcomes. Donabedian18 proposed a framework for considering quality of care that separates quality into structure (i.e., the characteristics of the setting that have an impact on care), process (i.e., what is done to the patient) and outcome (i.e., the status of the patient after the care-related intervention). A framework for differentiating use of knowledge from outcomes is provided in Table 1.18 Structural indicators focus on organizational aspects of service provision, which can be analogous to instrumental use of knowledge. Process indicators focus on the care delivered to patients, including instances when evidence is communicated to patients and caregivers (i.e., instrumental use of knowledge).


Table 1: Measures and impact of use of knowledge

Outcome-related indicators refer to the ultimate goal of care, such as the quality of life of patients or admission to hospital. An example is the issue of prophylaxis for deep venous thrombosis in patients admitted to the intensive care unit. Structural measures include the availability of prophylaxis for deep venous thrombosis (e.g., low-molecular-weight heparin and intermittent pneumatic compression) at the institution (i.e., instrumental use of knowledge). Process-related measures include whether prophylaxis for deep venous thrombosis, such as low-molecular-weight heparin, is prescribed in appropriate patients in the intensive care unit (i.e., instrumental use of knowledge). Outcome-related measures include the proportion of patients in the intensive care unit who develop a deep venous thrombosis.
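
Expressed as a calculation over patient records, the three levels of this example might look like the following minimal sketch, in Python; the data and field names are invented for illustration.

    # Minimal sketch: Donabedian indicators for DVT prophylaxis in the ICU.
    # Records and field names are hypothetical.
    icu_patients = [
        {"prophylaxis_indicated": True, "prophylaxis_given": True, "developed_dvt": False},
        {"prophylaxis_indicated": True, "prophylaxis_given": False, "developed_dvt": True},
        {"prophylaxis_indicated": False, "prophylaxis_given": False, "developed_dvt": False},
    ]

    # Structure: is prophylaxis (e.g., low-molecular-weight heparin) available at all?
    structure_available = True  # assumed from a formulary check

    # Process: proportion of appropriate patients who received prophylaxis.
    indicated = [p for p in icu_patients if p["prophylaxis_indicated"]]
    process_rate = sum(p["prophylaxis_given"] for p in indicated) / len(indicated)

    # Outcome: proportion of ICU patients who developed deep venous thrombosis.
    outcome_rate = sum(p["developed_dvt"] for p in icu_patients) / len(icu_patients)

    print(structure_available, process_rate, outcome_rate)  # True 0.5 0.333...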

Implementation of interventions designed to improve predetermined outcomes may also have unintended consequences (i.e., impacts that were not anticipated), so monitoring outcomes over the long term is wise. For example, computerized order-entry systems for prescribers have been found to reduce medication errors but have also been associated with unexpected adverse events.19

Methods for evaluating interventions

The evaluation question should be matched to an appropriate study design. In developing an evaluation, we need to consider both rigour and feasibility. By rigour, we mean that the evaluation should use explicit and valid methods, which may be qualitative, quantitative or both. By feasibility, we mean that the evaluation should be realistic and appropriate to the setting.

Selection of a strategy for evaluation also depends on whether we want to enhance local knowledge or to provide generalizable information on the validity of the intervention related to knowledge translation. Those interested in local applicability of knowledge (i.e., whether an intervention worked in the context in which it was implemented) should use the most rigorous study designs feasible; these may include observational evaluations, in which the researcher does not control allocation of participants to the intervention or a comparable control. Those interested in generalizable knowledge (i.e., whether an intervention is likely to work in comparable settings) should use the most rigorous design for evaluation research that they can afford, such as randomized trials or experimental evaluation. A third form to consider is process evaluation, which may involve determining the extent to which decision-makers were exposed to the intervention, describing the experience of those exposed, and identifying potential barriers to the intervention.

For example, a study evaluating the effectiveness of an educational intervention on the use of radiography for diagnosing acute ankle injuries showed that dissemination of the Ottawa ankle rules had no impact.20 However, fewer than a third of those receiving the intervention were physicians with the authority to order x-rays, which raises the question of whether the intervention was ineffective or simply not directed at the appropriate decision-makers. This type of evaluation is also useful because it allows corrections to be made to the intervention.

Qualitative methods of evaluation can help to uncover the "active ingredients" of an intervention related to knowledge translation, making them particularly useful in process evaluation. In a randomized trial of a comprehensive, multifaceted strategy for implementing guidelines among family physicians, no changes in cholesterol testing were noted after a one-year intervention.21 This finding led to interviews with family physicians, who expressed concern about the extra workload associated with implementing the guidelines and suggested revisions to the diagnostic algorithm.22

Quantitative evaluation methods include randomized and quasi-experimental studies. Randomized trials are more logistically demanding but provide more reliable results than non-randomized studies. Non-randomized studies can often be implemented more easily and are appropriate when randomization is not possible.

Framework for evaluating complex interventions

Mixed methods can be used to evaluate complex interventions. To some extent, all interventions can be seen as complex: even the relatively simple act of prescribing a pill is accompanied by a series of steps to ensure adherence and to check for adverse effects and drug interactions. In that case, however, the key active ingredient, the pill, is readily identified. For more complex interventions, identifying the precise mechanism that contributes to the outcome is difficult because these interventions contain a number of elements that act independently or interdependently.23 An example is systems of care to optimize health outcomes for patients recovering from stroke. Stroke units, compared with less organized forms of inpatient care, improve survival and reduce dependency in patients who have had a stroke.24 Yet the elements of a stroke unit that are associated with these benefits are not obvious from the trials included in this systematic review.

Complex interventions have recently been a focus of debate because evidence has shown a beneficial effect for some complex interventions but not others. This discrepancy has led decision-makers to ask which elements of an intervention are essential and, when a trial shows no effect, whether the cause lies in the design or conduct of the study. One of the most influential initiatives to address this challenge is the Medical Research Council framework for the evaluation of complex interventions,25 which provides researchers with an iterative, step-wise approach to evaluating a complex intervention.

The first step in this framework is defining the intervention: identifying the existing evidence and any theoretical basis for the intervention so that its components can be described. The second step is an exploratory phase, in which the acceptability and feasibility of delivering the intervention and the comparison intervention are assessed and the study design is piloted. The third step is an explanatory phase, during which the final trial design is implemented in a relevant setting with appropriate eligibility criteria, taking into account statistical power and relevant outcome measures. The fourth step is a pragmatic phase, in which implementation of the intervention is examined with attention to the fidelity of the intervention, the participants eligible for it and any possible adverse effects.23

Knowledge translation, complex interventions and the iterative loop

The framework of the Medical Research Council can be used to facilitate the translation of evidence by providing a mechanism for integrating additional forms of evidence relevant to decision-makers, such as qualitative or survey-derived data. In a survey of trialists who contributed data to the systematic review of stroke units,25 stroke units appeared to act as a focal point for the organization and coordination of services rather than as centres for intensive rehabilitation. A common feature of the stroke units in the survey was that care was organized and coordinated by a multidisciplinary team of staff who were interested in or knowledgeable about stroke. The stroke units also encouraged the involvement of caregivers.26

A qualitative study27 conducted in parallel with a trial of intensive case management for people with severe mental illness investigated the active ingredients of the intervention, with attention to staff roles, practices and organizational features. Providing a comprehensive assessment and a needs-led service were regarded as the key mechanisms of the intervention, whereas organizational features, such as an absence of team management, limited the impact that case managers could make. Finally, the degree to which an intervention is sustained outside the trial can be explored, for example by assessing the volume and type of patients using an admission-avoidance hospital-at-home program after completion of a randomized trial.28

At each phase of research on interventions for knowledge translation, input should be obtained from policy-makers, clinicians and managers in health care. Involving decision-makers in shaping the question and defining the intervention helps to ensure the relevance of the research and has the potential to strengthen its generalizability. Local applicability is a key factor influencing the use of evidence, and identifying the variables that define the context of research findings can help decision-makers address this factor.29

The generalizability of complex interventions has recently received attention, with the development of standards to improve the quality and relevance of research.30,31 These standards focus on the contextual variables affecting the delivery of an intervention. The link between knowledge translation and generalizability should be explored further to ensure that attributes identified as important by decision-makers in health care, including accessibility, the risk of adverse events,32 cost-effectiveness and sustainability, are considered by researchers. Relatively little attention has been paid to the sustainability of interventions compared with the initial implementation of a strategy for knowledge translation.

What are the gaps in knowledge in this area?

Several areas for potential research exist, including the development and evaluation of tools for measuring use of knowledge beyond instrumental use. Enhanced methods for exploring and assessing sustained use of knowledge should also be developed.

    Key points

  • Use of knowledge can be instrumental (i.e., concrete application), conceptual (i.e., changes in understanding or attitude) or persuasive (i.e., as ammunition).

  • Although use of knowledge is important, the impact of its use on outcomes related to patients, providers and systems is of greatest interest.

  • Strategies for evaluating implementation of knowledge should use explicit and rigorous methods and consider both qualitative and quantitative methodologies.

Articles to date in this series

  • Straus SE, Tetroe J, Graham ID. Defining knowledge translation. www.cmaj.ca/cgi/doi/10.1503/cmaj.081229

  • Brouwers M, Stacey D, O’Connor A. Knowledge creation: synthesis, tools and products. www.cmaj.ca/cgi/doi/10.1503/cmaj.081230

  • Kitson A, Straus SE. The knowledge-to-action cycle: identifying the gaps. www.cmaj.ca/cgi/doi/10.1503/cmaj.081231

  • Harrison MB, Légaré F. Adapting clinical practice guidelines to local context and assessing barriers to their use. www.cmaj.ca/cgi/doi/10.1503/cmaj.081232

  • Wensing M, Bosch M, Grol R. Developing and selecting interventions for translating knowledge to action. www.cmaj.ca/cgi/doi/10.1503/cmaj.081233

  • Davis D, Davis N. Selecting educational interventions for knowledge translation. www.cmaj.ca/cgi/doi/10.1503/cmaj.081335

Footnotes

  • This article has been peer reviewed.

    Competing interests: Sharon Straus is an associate editor for ACP Journal Club and Evidence-Based Medicine and is on the advisory board of BMJ Group. None declared for Jacqueline Tetroe, Ian Graham, Merrick Zwarenstein, Onil Bhattacharyya or Sasha Shepperd.

    Sharon Straus is section editor of Reviews at CMAJ and was not involved in the editorial decision-making process for this article.

    Contributors: All of the authors were involved in the development of the concepts in the manuscript and the drafting of the manuscript, and all of them approved the final version submitted for publication.

    The book Knowledge Translation in Health Care: Moving from Evidence to Practice, edited by Sharon Straus, Jacqueline Tetroe and Ian D. Graham and published by Wiley-Blackwell in 2009, includes the topics addressed in this series.

REFERENCES

1. Graham ID, Logan J, Harrison MB, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof 2006;26:13–24.
2. Larsen J. Knowledge utilization. What is it? Knowledge: Creation, Diffusion, Utilization 1980;1:421–42.
3. Dunn WN. Measuring use of knowledge. Knowledge: Creation, Diffusion, Utilization 1983;5:120–33.
4. Weiss CH. The many meanings of research utilization. Public Adm Rev 1979;39:426–31.
5. Beyer JM, Trice HM. The utilization process: a conceptual framework and synthesis of empirical findings. Adm Sci Q 1982;27:591–622.
6. Estabrooks CA. The conceptual structure of research utilization. Res Nurs Health 1999;22:203–16.
7. Simon J, Gray A, Clarke P, et al. Cost effectiveness of self monitoring of blood glucose in patients with non-insulin treated type 2 diabetes: economic evaluation of data from the DiGEM trial. BMJ 2008;336:1177–80.
8. O'Kane MJ, Bunting B, Copeland M, et al. Efficacy of self monitoring of blood glucose in patients with newly diagnosed type 2 diabetes (ESMON study): randomised controlled trial. BMJ 2008;336:1174–7.
9. Estabrooks CA, Floyd J, Scott-Findlay S, et al. Individual determinants of research utilization: a systematic review. J Adv Nurs 2003;43:506–20.
10. Stacey D, Pomey MP, O'Connor AM, et al. Adoption and sustainability of decision support for patients facing health decisions: an implementation case study in nursing. Implement Sci 2006;1:17.
11. Stacey D, O'Connor AM, Graham ID, et al. Randomized controlled trial of the effectiveness of an intervention to implement evidence-based patient decision support in a nursing call centre. J Telemed Telecare 2006;12:410–5.
12. Grol R. Successes and failures in the implementation of evidence-based guidelines for clinical practice. Med Care 2001;39(8 Suppl 2):II46–54.
13. Hanney SR, Gonzalez-Block MA, Buxton MJ, et al. The utilization of health research in policy-making: concepts, examples and methods of assessment. Health Res Policy Syst 2002;1:2.
14. Kitson A, Straus SE. The knowledge-to-action cycle: identifying the gaps. CMAJ 2009 Nov. 30 [Epub ahead of print].
15. Hakkennes S, Green S. Measures for assessing practice change in medical practitioners. Implement Sci 2006;1:29.
16. Ciaschini PM, Straus SE, Dolovich LR, et al. Community-based intervention to optimise falls risk management: a randomised controlled trial. Age Ageing 2009;38:724–30.
17. Brouwers M, Stacey D, O'Connor A. Knowledge creation: synthesis, tools and products. CMAJ 2009 Nov. 2 [Epub ahead of print].
18. Donabedian A. The quality of care. How can it be assessed? JAMA 1988;260:1743–8.
19. Han YY, Carcillo JA, Venkataraman ST, et al. Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics 2005;116:1506–12.
20. Cameron C, Naylor CD. No impact from active dissemination of the Ottawa Ankle Rules: further evidence of the need for local implementation of practice guidelines. CMAJ 1999;160:1165–8.
21. Grol R, Dalhuijsen J, Thomas S, et al. Attributes of clinical guidelines that influence use of guidelines in general practice. BMJ 1998;317:858–61.
22. Van der Weijden T, Grol R, Schouten B, et al. Barriers to working according to cholesterol guidelines. Eur J Public Health 1998;8:113–8.
23. Dopson S, Locock L, Chambers D, et al. Implementation of evidence-based medicine: evaluation of the Promoting Action on Clinical Effectiveness programme. J Health Serv Res Policy 2001;6:23–31.
24. Stroke Unit Trialists' Collaboration. Organised inpatient (stroke unit) care for stroke [review]. Cochrane Database Syst Rev 2007;(4):CD000197.
25. Craig P, Dieppe P, Macintyre S, et al. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ 2008;337:a1655.
26. Langhorne P, Pollock A; Stroke Unit Trialists' Collaboration. What are the components of effective stroke unit care? Age Ageing 2002;31:365–71.
27. Weaver T, Tyrer P, Ritchie J, Renton A. Assessing the value of assertive outreach. Br J Psychiatry 2003;183:437–45.
28. Wilson A, Parker H, Wynn A, et al. Performance of hospital at home after a randomised controlled trial. J Health Serv Res Policy 2003;8:160–4.
29. Lavis J, Davies J, Oxman A, et al. Towards systematic reviews that inform health care management and policy-making. J Health Serv Res Policy 2005;10:35–48.
30. Green LW, Glasgow RE. Evaluating the relevance, generalization and applicability of research: issues in external validation and translation methodology. Eval Health Prof 2006;29:126–53.
31. Glasgow RE, Green LW, Klesges LM, et al. External validity: we need to do more. Ann Behav Med 2006;31:105–8.
32. Glenton C, Underland V, Kho M, et al. Summaries of findings, descriptions of interventions, and information about adverse effects would make reviews more informative. J Clin Epidemiol 2006;59:770–8.