With accountability having become the latest bureaucratic and political buzzword in Ottawa, research agencies are under increasing pressure to develop measures that ensure tax dollars are spent wisely, while simultaneously demonstrating the value of a tax dollar invested in research, or in one discipline as opposed to another.
Are there more dividends in investing in home care research or directly in home care?
Is a tax dollar invested in health research more likely to yield economic and social dividends than a dollar invested in astronomy? Or could it even prove more costly to government coffers because research is a “cost-driver” of the health care system?
How should a department or agency determine priority areas of research investment?
The answers are as complex as the questions, the Canadian Academy of Health Sciences said while releasing the findings of its first major study since being founded in 2004 as a nonprofit organization providing independent advice to governments and interested parties. The $500 000 study was sponsored by 23 organizations.
The study's final report, Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research, urges that governments, industry and national organizations adopt a new framework for evaluating and prioritizing research investments (www.cahs-acss.ca/e/pdfs/ROI_FullReport.pdf).
The framework uses a “systems approach” to measure return on investment and is based on a “payback model,” developed by Martin Buxton and the Health Economics Research Group at Brunel University in the United Kingdom, under which research investments or programs are evaluated in 5 categories: “knowledge, benefits to future research, political and administrative benefits, health sector benefits, and broader economic benefits.”
The framework was crafted by an international panel chaired by Dr. Cyril Frank, chief of the division of orthopaedics at the University of Calgary in Calgary, Alberta.
It essentially proposes a pick-and-choose approach in which decision-makers select indicators (from a menu of 66) in response to their specific inquiries about the circumstances or types of projects that would yield the greatest return. The indicators range from quantitative measures, such as citation impact and mortality rates, to more subjective ones, such as patient satisfaction and even such concepts as “happiness” and “loneliness.”
In essence, the framework could serve as an aid to prioritization or a means of quantifying health outcomes relative to dollars invested. But arguably, it is so versatile that, depending on the indicators chosen, it could be used to prove or disprove virtually any position. As a consequence, the built-in flexibility in the selection of indicators could make comparisons between different evaluations problematic.
But Frank, Canadian Academy of Health Sciences President Dr. Martin Schechter, past-president Dr. Paul Armstrong and the report itself argue that the comparability problems and flexibility of the evaluation framework point to the need for health-research funders and decision-makers to begin collaborating on standardization of the nomenclature, methodologies, data collection and indicators.
“Canada should immediately initiate a national collaborative effort to begin to measure the impacts of Canadian health research,” the report stated, proposing that government and other organizations fund the creation of a “national council to lead strategic planning and execution of the framework, with a formal secretariat and commissioned data collectors to begin this work.”
Comparability could be a problem without a consensus on the methodologies and indicators by which return on investment should be measured, Frank told reporters at a Jan. 21 press conference. “If everybody picks different questions and different indicators, you would have a hodgepodge of potential answers.”