Monitoring use of knowledge
In the knowledge-to-action cycle, after the intervention related to knowledge translation has been implemented, uptake of knowledge should be monitored. 1 This step is necessary to determine how and to what extent the knowledge is used by the decision-makers. 1 How we measure uptake of knowledge depends on our definitions of knowledge and use of knowledge and on the perspective of the user of knowledge. In this paper, we discuss approaches to monitoring use of knowledge and evaluating its impact, based on a systematic review of the literature.
Several classifications exist for use of knowledge. 2–6 We find it useful to consider conceptual, instrumental and persuasive use of knowledge. 1 Conceptual use of knowledge implies changes in knowledge, understanding or attitudes: research may change thinking and inform decision-making without changing practice. For example, knowing that self-monitoring of blood glucose in newly diagnosed patients with type 2 diabetes mellitus is not cost-effective and is associated with lower quality of life, 7,8 we can better understand a newly diagnosed patient’s concerns about self-monitoring.
Instrumental use of knowledge is the concrete application of knowledge and describes changes in behaviour or practice. 1 Knowledge can be translated into a usable form, such as a pathway for care, and is used in making a specific decision. For example, we could measure how often a clinician orders prophylaxis for deep venous thrombosis in appropriate patients admitted to the intensive care unit.
Persuasive use of knowledge is also called strategic or symbolic use of knowledge and refers to research being used as a political or persuasive tool. It relates to the use of knowledge to attain specific power or profit (i.e., knowledge as ammunition). 1 For example, we use our knowledge of adverse events associated with use of mechanical restraints on agitated inpatients to persuade the nursing manager on the medical ward to develop a ward protocol about their use.
How can use of knowledge be measured?
Many tools exist for assessing use of knowledge. Dunn 3 completed an inventory of tools for conducting research on use of knowledge and identified 65 strategies, but most have unknown validity or reliability. Most tools measure instrumental use of knowledge. 9 These measures often rely on self-report and are therefore subject to recall bias. For example, a case study described the adoption by call centre nurses of a protocol for decision-making support. 10 Eleven of the 25 nurses surveyed said they used the tool in practice. Potential limitations of this study include recall bias and a short period of follow-up (i.e., one month) without repeated observation. 10 A more valid assessment of instrumental use of knowledge had participants undergo a quality-based assessment of their coaching skills during simulated calls to determine how often the protocol for decision-making support was used. 11
Assessing instrumental use of knowledge can also be done by measuring adherence to recommendations or quality indicators. Grol 12 completed a series of studies involving family physicians in the Netherlands who recorded their adherence to 30 national guidelines. Three hundred forty-two indicators of adherence were constructed and physicians received educational sessions on how to record their performance on these indicators. Computer software was developed to relate performance to clinical conditions to assess adherence. 12 More simply, we could look at how often we prescribe β-blockers in appropriate patients with heart failure through a chart-based audit.
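To illustrate the arithmetic of such a chart-based audit, a minimal sketch follows. It is illustrative only: the data, field names and eligibility rule are hypothetical and are not drawn from the studies cited above. The adherence indicator is simply the number of eligible patients with the behaviour documented divided by the number of eligible patients.

```python
# Minimal sketch of a chart-based adherence audit (hypothetical data and
# field names; not the instrument used in the cited studies).

from dataclasses import dataclass

@dataclass
class ChartRecord:
    patient_id: str
    has_heart_failure: bool
    contraindication_to_beta_blocker: bool
    beta_blocker_prescribed: bool

def adherence_rate(charts: list[ChartRecord]) -> float:
    """Proportion of eligible patients (heart failure, no contraindication)
    with a beta-blocker prescription documented in the chart."""
    eligible = [c for c in charts
                if c.has_heart_failure and not c.contraindication_to_beta_blocker]
    if not eligible:
        return float("nan")
    prescribed = sum(c.beta_blocker_prescribed for c in eligible)
    return prescribed / len(eligible)

# Illustrative audit of four charts: two of the three eligible patients are treated.
audit = [
    ChartRecord("A", True, False, True),
    ChartRecord("B", True, False, False),
    ChartRecord("C", True, True, False),   # contraindicated, so excluded
    ChartRecord("D", True, False, True),
]
print(f"Adherence: {adherence_rate(audit):.0%}")  # Adherence: 67%
```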
We also need to consider who the targets are for use of knowledge (i.e., the public, health care professionals, policy-makers), because different targets may require different strategies for monitoring use of knowledge. Assessing use of knowledge by policy-makers may require strategies such as interviews and analysis of documents (e.g., reviewing policies to assess use of evidence). 13 When assessing use of knowledge by physicians, we could measure use of care pathways or ordering of relevant medications, which are often captured in administrative or clinical databases. When measuring use of knowledge by the public, we could measure patients’ attitudes through surveys or their use of resources through administrative databases.
What is the target level of use of knowledge that we are aiming for? This target is based on discussions with stakeholders and includes consideration of what is acceptable and feasible and whether a ceiling effect may exist. 14 If the degree of use of knowledge is found to be adequate, strategies for monitoring sustained use of knowledge should be considered. If the degree of use of knowledge is less than expected or desired, reassessment of barriers to uptake may be necessary.
When should we measure use of knowledge versus the impact of use of knowledge? If the implementation intervention targets a behaviour for which strong evidence of benefit exists, it may be appropriate to measure the impact of the intervention in terms of whether the behaviour has occurred rather than whether clinical outcomes have changed. 15 A strategy to implement the guidelines of Osteoporosis Canada in a community setting was recently studied. 16 The primary outcome of this randomized trial was appropriate use of medications for osteoporosis (i.e., instrumental knowledge) rather than fractures in patients (i.e., clinical outcome). The researchers felt that, because sufficient evidence exists that medication for osteoporosis prevents fragility fractures, they did not need to measure fractures as the primary outcome. In such instances, measurement of outcomes at the patient level could be prohibitively expensive, but the trade-off is that failure to measure at the patient level leaves unanswered whether the intervention improves relevant clinical outcomes.
Evaluating the impact of use of knowledge
The next phase of the knowledge-to-action cycle is to determine the impact of use of knowledge on health, provider and system outcomes. 1 Although assessing use of knowledge is important, its use is of particular interest when it influences important measures such as quality indicators.
Evaluation should start with formulating the question, and we find the PICO framework 17 useful for this. In this framework, the “P” refers to the population of interest, which could be the public, health care providers or policy-makers. The “I” refers to the intervention that was implemented, which may be compared with an alternative (the “C”, or comparison). The “O” refers to the outcome of interest, which could be a health-related, provider-related or organizational outcome.
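As an illustration only, a PICO question can be recorded as a simple structured object so that the population, intervention, comparison and outcome are explicit before an evaluation design is chosen. The field names and example content below are ours, not drawn from the cited framework.

```python
# Illustrative only: a PICO question recorded as a simple structure.
# The example content is hypothetical and the field names are our own.

from dataclasses import dataclass

@dataclass
class PicoQuestion:
    population: str    # P: the public, providers or policy-makers of interest
    intervention: str  # I: the knowledge translation intervention implemented
    comparison: str    # C: the alternative (e.g., usual practice)
    outcome: str       # O: health-, provider- or organization-level outcome

question = PicoQuestion(
    population="family physicians caring for adults with type 2 diabetes",
    intervention="guideline reminder integrated into the electronic record",
    comparison="usual practice without reminders",
    outcome="proportion of eligible patients with a documented foot examination",
)
print(question)
```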
The above strategies for considering use of knowledge can be used to frame outcomes. Donabedian 18 proposed a framework for considering quality of care that separates quality into structure (i.e., the characteristics of the setting that have an impact on care), process (i.e., the action that is done to the patient) and outcome (i.e., the status of the patient after the care-related intervention). A framework for differentiating use of knowledge from outcomes is provided in Table 1. 18 Structural indicators focus on organizational aspects of provision of service, which could be analogous to instrumental use of knowledge. Process-related indicators focus on care delivered to patients and include instances when evidence is communicated to patients and caregivers (i.e., instrumental knowledge).
Table 1: Measures and impact of use of knowledge
Outcome-related indicators refer to the ultimate goal of care, such as the quality of life of patients or admission to hospital. An example is the issue of prophylaxis for deep venous thrombosis in patients admitted to the intensive care unit. Structural measures include the availability of prophylaxis for deep venous thrombosis (e.g., low-molecular-weight heparin and intermittent pneumatic compression) at the institution (i.e., instrumental use of knowledge). Process-related measures include whether prophylaxis for deep venous thrombosis, such as low-molecular-weight heparin, is prescribed in appropriate patients in the intensive care unit (i.e., instrumental use of knowledge). Outcome-related measures include the proportion of patients in the intensive care unit who develop a deep venous thrombosis.
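A hedged sketch of how this example could be organized for measurement follows; the indicator wording and data structure are illustrative and our own, not taken from Table 1 or the cited literature.

```python
# Illustrative mapping of the intensive care DVT prophylaxis example onto
# Donabedian's categories. Indicator wording and structure are our own.

from typing import NamedTuple

class Indicator(NamedTuple):
    category: str       # structure, process or outcome
    knowledge_use: str  # which kind of knowledge use (if any) it reflects
    definition: str

dvt_indicators = [
    Indicator("structure", "instrumental",
              "Low-molecular-weight heparin and intermittent pneumatic "
              "compression are available at the institution"),
    Indicator("process", "instrumental",
              "Proportion of appropriate ICU patients prescribed DVT prophylaxis"),
    Indicator("outcome", "clinical outcome",
              "Proportion of ICU patients who develop a deep venous thrombosis"),
]

for ind in dvt_indicators:
    print(f"{ind.category:>9}: {ind.definition}")
```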
Implementation of interventions designed to improve predetermined outcomes may also have unintended consequences, so monitoring outcomes over the long term is wise. For example, computerized systems for prescriber order entry have been found to reduce medication errors but have also been associated with new adverse events. 19
Methods for evaluating interventions
The question should be matched to the appropriate study design. When developing an evaluation, we need to consider rigour and feasibility. By rigour, we mean that the evaluation strategy should use explicit and valid methods; both qualitative and quantitative methods can be used. By feasibility, we mean that the evaluation strategy should be realistic and appropriate given the setting.
Selection of a strategy for evaluation also depends on whether we want to enhance local knowledge or provide generalizable information on the validity of the intervention related to knowledge translation. Those interested in local applicability of knowledge (i.e., whether an intervention worked in the context in which it was implemented) should use the most rigorous study designs feasible. These designs may include observational evaluations, in which the researcher does not control allocation of study participants to the intervention or a comparable control. Those interested in generalizable knowledge (i.e., whether an intervention is likely to work in comparable settings) should use the most rigorous evaluation design they can afford, such as a randomized trial or other experimental evaluation. A third form of evaluation to consider is process evaluation, which may involve determining the extent to which decision-makers were exposed to the intervention, describing the experience of those exposed to it, and identifying potential barriers to the intervention.
For example, a study evaluating the effectiveness of an educational intervention on the use of radiography for diagnosis of acute ankle injuries showed that dissemination of the Ottawa ankle rules had no impact. However, fewer than a third of those receiving the intervention were physicians with the authority to order x-rays, which raises the question of whether the intervention was ineffective or simply not directed at the appropriate decision-makers. 20 This type of evaluation is also useful because it allows corrections to be made to the intervention.
Qualitative methods of evaluation can be helpful in exploring the “active ingredients” of an intervention related to knowledge translation, and thus they are particularly useful in process evaluation. In a randomized trial of a comprehensive, multifaceted strategy for implementation of guidelines for family physicians, no changes in cholesterol testing were noted after a one-year intervention. 21 This finding led to interviews with family physicians, who expressed concern about the extra workload associated with implementation of the guidelines and suggested revisions to the diagnostic algorithm. 22
Quantitative evaluation methods include randomized and quasi-experimental studies. Randomized trials are more logistically demanding but provide more reliable results than nonrandomized studies; nonrandomized studies can often be implemented more easily and are appropriate when randomization is not possible.
Framework for evaluating complex interventions
Mixed methods can be used to evaluate complex interventions. To some extent, all interventions can be seen as complex: even the relatively simple act of prescribing a pill is accompanied by a series of steps to ensure adherence and to check for adverse effects and drug interactions, although the key active ingredient, the pill, is readily identified. For more complex interventions, identifying the precise mechanisms that contribute to the outcome is difficult because these interventions contain a number of elements that act independently or interdependently. 23 An example is systems of care to optimize health outcomes for patients recovering from a stroke. Stroke units, compared with less organized forms of inpatient care, improve survival and reduce dependency among patients who have had a stroke. 24 However, the elements of a stroke unit that are associated with these benefits are not obvious from the trials included in the systematic review.
Recently, complex interventions have been a focus of debate because evidence has shown a beneficial effect for some complex interventions and not others. This discrepancy has led decision-makers to question which elements of an intervention are essential, and whether, when a trial has shown no effect, the cause is related to problems with the design or conduct of the study. One of the most influential initiatives to address this challenge is the Medical Research Council framework for the evaluation of complex interventions. 25 This framework provides researchers with an iterative, step-wise approach to evaluating a complex intervention.
The first step in this framework is defining the intervention, which involves identifying the existing evidence and any theoretical basis for the intervention so that the components of the intervention can be described. The second step is an exploratory phase in which the acceptability and feasibility of delivering the intervention and the comparison intervention are assessed and the study design is piloted. The third step is an explanatory phase, during which the final design of the trial is implemented in a relevant setting with appropriate criteria for eligibility, taking into account statistical power and relevant measures of outcome. Finally, the fourth step is a pragmatic phase in which the implementation of the intervention is examined with attention to the fidelity of the intervention, participants eligible for the intervention and any possible adverse effects. 23
Knowledge translation, complex interventions and the iterative loop
The framework of the Medical Research Council can be used to facilitate the translation of evidence by providing a mechanism for integrating additional forms of evidence relevant to decision-makers, such as qualitative or survey-derived data. In a survey of trialists contributing data to the systematic review of stroke units, 25 stroke units appeared to act as a focal point for the organization and coordination of services rather than as a centre for intensive rehabilitation. A common feature of stroke units in the survey was that care was organized and coordinated by a multidisciplinary team of staff who were interested in or knowledgeable about stroke. The stroke units also encouraged the involvement of caregivers. 26
A qualitative study 27 was conducted in parallel with a trial of intensive case management for people with severe mental illness. The study investigated the active ingredients of the intervention with attention to the roles of staff, practices and organizational features. Providing a comprehensive assessment and a needs-led service were regarded as the key mechanisms of this intervention. Organizational features, such as the absence of team management, limited the extent to which case managers could make an impact. Finally, the degree to which an intervention is sustained outside the trial can be explored, for example by assessing the volume and type of patients using an admission-avoidance hospital-at-home program after the completion of a randomized trial. 28
At each phase of research on interventions for knowledge translation, input should be obtained from policy-makers, clinicians and managers in health care. Involving decision-makers in shaping the question and defining the intervention can help to ensure the relevance of research. Input from decision-makers has the potential to strengthen the generalizability of the research. Local applicability is a key factor influencing the use of evidence, and identifying the variables that define the context of the findings of research can help decision-makers address this factor. 29
The importance of the generalizability of complex interventions has recently received attention, with the development of standards to improve the quality and relevance of research. 30,31 These standards focus on the contextual variables affecting the delivery of an intervention. The link between knowledge translation and generalizability should be further explored to ensure that attributes identified as important by decision-makers in health care are considered by researchers. These factors include data on accessibility, the risk of adverse events, 32 cost-effectiveness and the sustainability of interventions. Compared with the initial implementation of a strategy for knowledge translation, the sustainability of interventions has received relatively little attention.
What are the gaps in knowledge in this area?
Several areas for potential research exist, including the development and evaluation of tools for measuring use of knowledge outside of instrumental use of knowledge. Enhanced methods for exploring and assessing sustained use of knowledge should also be developed.
Key points
- Use of knowledge can be instrumental (i.e., concrete application), conceptual (i.e., changes in understanding or attitude) or persuasive (i.e., as ammunition).
- Although use of knowledge is important, the impact of its use on outcomes related to patients, providers and systems is of greatest interest.
- Strategies for evaluating implementation of knowledge should use explicit and rigorous methods and consider both qualitative and quantitative methodologies.
Articles to date in this series
- Straus SE, Tetroe J, Graham ID. Defining knowledge translation. www.cmaj.ca/cgi/doi/10.1503/cmaj.081229
- Brouwers M, Stacey D, O’Connor A. Knowledge creation: synthesis, tools and products. www.cmaj.ca/cgi/doi/10.1503/cmaj.081230
- Kitson A, Straus SE. The knowledge-to-action cycle: identifying the gaps. www.cmaj.ca/cgi/doi/10.1503/cmaj.081231
- Harrison MB, Légaré F. Adapting clinical practice guidelines to local context and assessing barriers to their use. www.cmaj.ca/cgi/doi/10.1503/cmaj.081232
- Wensing M, Bosch M, Grol R. Developing and selecting interventions for translating knowledge to action. www.cmaj.ca/cgi/doi/10.1503/cmaj.081233
- Davis D, Davis N. Selecting educational interventions for knowledge translation. www.cmaj.ca/cgi/doi/10.1503/cmaj.081335
Footnotes
This article has been peer reviewed.
Competing interests: Sharon Straus is an associate editor for ACP Journal Club and Evidence-Based Medicine and is on the advisory board of BMJ Group. None declared for Jacqueline Tetroe, Ian Graham, Merrick Zwarenstein, Onil Bhattacharyya or Sasha Shepperd.
Sharon Straus is section editor of Reviews at CMAJ and was not involved in the editorial decision-making process for this article.
Contributors: All of the authors were involved in the development of the concepts in the manuscript and the drafting of the manuscript, and all of them approved the final version submitted for publication.
The book Knowledge Translation in Health Care: Moving from Evidence to Practice, edited by Sharon Straus, Jacqueline Tetroe and Ian D. Graham and published by Wiley-Blackwell in 2009, includes the topics addressed in this series.