Podcast: Cost-effectiveness analysis and the science of value
Transcript
Advertisement: This episode is brought to you by Audi Canada. The Canadian Medical Association has partnered with Audi Canada to offer CMA members preferred incentives on select vehicle models. Purchase any new qualifying Audi model and receive an additional cash incentive based on the purchase type. Details of the incentive program can be found at audiprofessional.ca. Explore the full line of vehicles available to suit your lifestyle. The Audi driving experience is like no other.
Kirsten Patrick: Cost-effectiveness and optimal use of resources in health care are very often top of mind for governments and policymakers. But many people are under the false impression that cost-effectiveness analysis is unscientific and has limited application. I'm Dr. Kirsten Patrick, deputy editor for the Canadian Medical Association Journal. Today I'm speaking with one of the authors of an analysis article published in CMAJ on the science of cost-effectiveness and value in health. Dr. Murray Krahn is a general internist at the University Health Network in Toronto, and a Canada Research Chair in Health Technology Assessment. He leads THETA, a research group devoted to health economics and technology assessment. Welcome, Murray.
Murray Krahn: Thank you.
Kirsten Patrick: So what made you and your co-authors want to write about cost-effectiveness analysis in CMAJ?
Murray Krahn: Well, first, I had recently completed a term on the guidelines working groups for both the Canadian and the US guidelines on cost-effectiveness analysis, and in one way we wanted to bring the results of those deliberations and recommendations to the attention of a broader audience. In particular, we had just completed the Canadian guidelines, and we felt this was something we wanted people to know about. We also felt that the fact that we were producing these guidelines meant that the discipline was maturing. Cost-effectiveness analysis is about 50 years old, and the methods have become more standardized. As we argue in the analysis piece, we think they're becoming more scientific, and therefore more useful for people who develop clinical and health policy. I guess the last thing that led me specifically to be interested in writing this is my experience trying to present cost-effectiveness in the policy domain. When we present this information, it isn't always taken seriously; it's regarded as being relatively unscientific. We had a recent experience in which we produced what we thought was quite a good cost-effectiveness analysis informing a broad public policy question, and it was somewhat at odds with the conventional methods of evidence review. It was that contrast, between a paradigm in which all the evidence is entered into a cost-effectiveness model and the more conventional evidence review paradigms, that led me to say, maybe we need to bring this to the attention of a broader audience again.
Kirsten Patrick: Lots of our listeners are probably scratching their heads, going, I've never known exactly what cost-effectiveness analysis is. So would you like to tell us, in simple terms, what it is exactly?
Murray Krahn: Sure. I'll first talk briefly about how you do it, and then I'll try to explain, again in simple terms, what it means. Typically, in a cost-effectiveness analysis, you're building a computer model of a particular question. The question could be, what is the right drug for a particular patient? Or it could be, should the province of Ontario, or should we in Canada, pay for some new fancy technology? There's usually a question. Within that question, we have to think about the relative clinical benefits of A versus B, but we also have to think about the resource implications of adopting A versus adopting B. Cost-effectiveness is a framework for looking at the marginal clinical benefits of some new intervention relative to its cost. We have many ways of thinking about what the right decision should be in health. If we're trying to be scientific, evidence-informed decision makers, we often think about evidence review, and we often think of clinical evidence as the foundation for decision-making in health. But this is a method that brings another dimension to decision-making in health, one that is particularly relevant to broader policy questions: the resource implications of our decisions. It's a formal method of thinking about the resource implications of our decisions in the context of health gains. Slightly more formally, it's a way of thinking about opportunity cost. What that means is that we do not have unlimited resources, and if we do one thing, we cannot do another. So if we adopt the latest fancy technology, quite often we can't do something else.
It may be that housekeeping at the hospital gets less money, it may be that home care gets less money, it may be that long-term care beds get less money, but somebody is going to get fewer resources if we impose a new cost on the health system. Cost-effectiveness analysis is a way of trying to formally understand what the opportunity costs of health decisions are.
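As an editorial aside: the comparison of marginal benefit to marginal cost that Dr. Krahn describes is conventionally summarized as an incremental cost-effectiveness ratio (ICER). A minimal Python sketch; the function name and all figures here are ours, purely for illustration, not from the interview or the guidelines:

```python
# Incremental cost-effectiveness ratio (ICER): the extra cost of a new
# intervention per extra unit of health gained, versus a comparator.
# Health gains are conventionally measured in quality-adjusted life years (QALYs).

def icer(cost_new: float, cost_old: float, qaly_new: float, qaly_old: float) -> float:
    """Incremental cost per QALY gained for a new intervention vs. a comparator."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical numbers: a new drug costs $60,000 and yields 4.5 QALYs;
# standard care costs $20,000 and yields 4.0 QALYs.
print(icer(60_000, 20_000, 4.5, 4.0))  # prints 80000.0, i.e. $80,000 per QALY gained
```

A decision maker would then weigh that ratio against what the system can afford to pay for one QALY, which is exactly the opportunity-cost framing above.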
Kirsten Patrick: I think most people are not aware of how person-focused that is, because to learn how patients value things, you actually ask people, and your estimates are based on what people say they value. Is that right?
Murray Krahn: It is. There are two parts to cost-effectiveness analysis: the difference in costs and the difference in health outcomes. For the difference in costs, we typically think about costs borne by public payers. Of course, in the Canadian health system, about a third of costs are actually borne by people other than public health systems, but we typically do this to inform health systems. The other part is thinking about health. In cost-effectiveness analysis, we typically decompose health into two things. One is how long you live, which most people think is pretty important. The other piece is your quality of life. This is a critical piece of thinking about how we value health: we think about longevity, but we also think about quality of life in a particular way. We assign what are called health state utility values to all forms of health, ranging from health states that are worse than dead (and there are some things that most people think are worse than death) all the way up to full health, which is assigned a value of one. This method is not perfect; it doesn't cover every contingency. Health is a very complex, multifaceted domain that no one would believe can be fully reduced to a zero-to-one continuous scale, but in cost-effectiveness analysis we do that, and we think we capture many of the important dimensions of health, although not all. So yes, it's not just a mechanical calculation, and it's not just about cost. It is about cost in the context of health outcomes, expressed in a way that is related to patient and social values.
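As an editorial aside: the two-part decomposition Dr. Krahn describes, longevity weighted by quality of life, is what the quality-adjusted life year (QALY) captures. A minimal Python sketch with hypothetical numbers; the function name and trajectory are ours, for illustration only:

```python
# Quality-adjusted life years (QALYs): years lived, weighted by a health state
# utility between 0.0 (dead) and 1.0 (full health). States judged worse than
# dead can carry negative utilities.

def qalys(trajectory):
    """Sum of (years in state * utility of state) over a patient's trajectory."""
    return sum(years * utility for years, utility in trajectory)

# Hypothetical trajectory: 2 years in full health, then 3 years at utility 0.5.
print(qalys([(2, 1.0), (3, 0.5)]))  # prints 3.5, i.e. 3.5 QALYs from 5 life-years
```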
Kirsten Patrick: Initially, in our conversation, Murray, you suggested that historically, cost-effectiveness has had a reputation amongst some of being less scientific, kind of being a lower quality of evidence than clinical effectiveness evidence. What are some common criticisms that people have of cost-effectiveness science?
Murray Krahn: Well, these fall broadly into two categories. One is scientific, and the other, broadly speaking, is ethical. The scientific criticisms largely come from people trained in epidemiologic methods or clinical epidemiology. The process of constructing a cost-effectiveness model requires integrating a very large amount of very heterogeneous data. You have to take data about clinical effectiveness from randomized trials or meta-analyses. You might take data about natural history from large-scale cohort studies, which may come from a completely different jurisdiction. You may take data about costs from randomized trials, special studies of costs, or population-based studies of cost, but those again may be unrelated to the other types of data, and your data about health-related quality of life may come from different sources again. These data differ in geographical location, in the study design that produced them, and in quality, and for many people trained in epidemiologic methods this is tough to accept, because the data are quite variable. The other component is that, relative to other types of clinical research, cost-effectiveness models are typically more susceptible to differences in analysts' judgment. Somebody has to make a cost-effectiveness model, and that means they make a whole series of choices along the way: choices about the structure of the model, the type of data they're going to put in the model, and the type of analysis they're going to do.
The time horizon, too. There's a whole set of analyst choices, and that has led some people to think of cost-effectiveness analysis as inherently more variable, and also potentially more subject to bias, given that cost-effectiveness analyses are often done for those who obviously have something to gain. They're often done for reimbursement decisions for new drugs or new devices, and those who commission cost-effectiveness analyses in that context stand potentially to gain or lose. So given the variability, there has been concern about potential bias. Those, I would say, are the main concerns about scientific quality. The other set of concerns has to do with ethical considerations, and there has been an enormous literature on the ethical problems of the quality-adjusted life year, which is one of the foundations of cost-effectiveness analysis. Quality-adjusted life years essentially assign value to individuals' health, and therefore their lives, based on their remaining life expectancy and their potential to improve their health. So inherently there is a greater weight toward children and young people than toward old people, because children have many years of life to gain. If you save the life of a young person, inherently there's going to be more gain, and that rubs some people the wrong way who would like to think of all lives as having equal value, irrespective of age. There have also been ethical criticisms from the disability community, who have said there are inbuilt biases against patients who are disabled or have ongoing disabilities, because the potential for improvement is limited, given preexisting disabilities. Those are some of the standard criticisms; there are many others.
I think most of these criticisms, particularly the ethical ones, are leveled under what I would call the misapprehension that cost-effectiveness is the normative solution for decision-making in health. What I mean by that is that people think cost-effectiveness analysis is going to be making the decision, or is the primary consideration in decision-making. As I try to argue toward the end of the analysis piece, that just isn't so. In the Canadian context, we don't generally regard cost-effectiveness analysis, useful though it is, as the final arbiter of how health decisions should be made. We think about the clinical evidence, and we think about efficiency, which is what cost-effectiveness tells us about: whether our resources, on balance, are being used wisely. There are some health economists who do think this way, and there are some jurisdictions internationally that do think this way, but we in Canada generally do not think of cost-effectiveness analysis as the integrative method that takes all factors into account and tells us what to do in decision-making. We think about clinical evidence, we think about efficiency, we think, broadly speaking, about the ethical and distributional considerations involved in health decisions, and then we also think about legal and implementation issues, and sometimes environmental factors. There is a range of considerations that fall outside cost-effectiveness analysis, considerations that it doesn't handle particularly well. And it's often those who think that people are using cost-effectiveness analysis alone, as the normative solution, who get exercised about the way in which health in particular is represented in cost-effectiveness analysis.
Kirsten Patrick: So that's the misconception behind the idea of a kind of rationing panel?
Murray Krahn: I think so yes.
Kirsten Patrick: I want to come back to the scientific piece, and to what you were saying about common criticisms from epidemiologists, for example. We were all taught to conduct and report research in a particular way, in a standard format, and cost-effectiveness analysis is comparing apples, oranges and pears, as I think an earlier version of your analysis article might have said. So that seems a little bit, well, we can't be comparing these things. As a medical editor, looking at a cost-effectiveness analysis, we always use the term "the black box" and have the authors explain every single assumption they have made. Yet now you're saying in your article that we should embrace cost-effectiveness analysis as the science of value in health care. What do you mean when you say you think it has become more scientific?
Murray Krahn: We've just been doing this for a long time now, so the fact that there are guidelines means that there are expectations. When I started my career in the early 1990s, there was a lot of freedom, slash, variability in how to actually do these things. There was no agreement on what the time horizon for analysis should be, or what the discount rate should be, or how to think about comparators, or what the perspective should be. These were choices left up to the analyst. Probably the most influential early set of guidelines was that put forth in 1996 by the US Panel on Cost-Effectiveness in Health and Medicine. I don't know if I've got that name exactly right, but it was the US guidelines, sometimes called the gold book, that created the idea of the reference case. It said, we are not going to resolve all of the theoretical underpinnings of cost-effectiveness analysis, because there are many and complex considerations that go into this framework, but we are going to arrive at some agreement about how to do these things in a standardized way, because comparability is really important for validity and for transferability across jurisdictions. So the idea of the reference case was first introduced in 1996, and that idea has spread. There are guidelines around the world on how to do cost-effectiveness analysis. They are to some extent jurisdiction-specific, because there are both local and rather universal scientific considerations in thinking about cost-effectiveness, but there is much more harmonization in methods. The Canadian guidelines, for example, now say that you should in general adopt a lifetime time horizon.
That means that if you've got a trial that goes out to two years and you show a survival difference, it's not okay to just build a model that goes out to two years; you have to project those survival gains over the remaining lifetime of the patients in the trial. You have to use a discount rate, which has recently been changed to 1.5% for both costs and benefits. Here, the idea is that both costs and health benefits that accrue in the future have less value, when regarded from the perspective of the present, than things occurring closer to us. We generally recommend in Canada that the analysis be carried out from a public payer perspective, by which we mean that we are mostly interested in the costs borne by our publicly funded health systems. We are interested in costs that patients pay out of pocket, and costs that insurers pay, but in Canada we're primarily interested in costs borne by public health systems. So there is a lot more agreement on how to do it. But our methods have also gotten better in some ways. For example, when I started doing this, it was not at all standard to validate the projections of your model. You would produce a model to the best of your ability and then just put it out there. Now the expectation is that whatever your model predicts, you should compare that to an independent set of data and to other models' projections, and that validation exercise should be reported as part of the economic evaluation. There are groups around the world now that are not just doing one-off models but are building policy models. CISNET in the United States has groups of modelers working on different types of cancers. In prostate cancer, for example, I think there are five separate US-funded groups building models of the natural history of prostate cancer, and they come at this in slightly different ways.
There are different conceptual foundations, but the goal is to work over many years to develop robust, comparable models, to really improve the transparency, reproducibility and reliability of these model-based estimates. So there have been improved methods and a move toward setting the projections of these models on a more scientific foundation. We've also had much better data. We now have standardized instruments for health-related quality of life measurement. In economic evaluation, people are now using instruments like the EQ-5D, and there is a lot of experience with those instruments: we know their operating characteristics, we know the good things they can do, we know their limitations, and we know what they generally leave out. Increasingly, we are using very large population data sets. In Canada, we are enormously wealthy in terms of our linked datasets at the provincial level, and these allow us to see pretty much any time a patient touches the health system. We are increasingly using those data sets to provide really scientifically credible estimates of resource use over time, which is something we never had before. The costing parts of these models have often been quite weak, and there has been a lot of scientific activity in the last 10 years to figure out ways of using these linked administrative data to improve the scientific credibility of the resource use parts of these models. So inherently, this is a more complex set of methods than almost any other in health research, and it will probably always be somewhat more variable than simpler methods like randomized trials or observational studies, but it is also inherently more powerful, because it integrates a lot of policy-relevant data in a common framework.
So I would argue, and I think most people in this field would agree, that the whole field has moved in the direction of becoming more scientific, over the last decade or two in particular.
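As an editorial aside: the discounting Dr. Krahn describes, at the Canadian guidelines' 1.5% rate, simply down-weights costs and benefits the further in the future they accrue. A minimal Python sketch; the function name and the cost stream are ours, for illustration only:

```python
# Discounting: an amount accruing t years from now is divided by (1 + r)**t,
# so nearer-term costs and health gains count for more than distant ones.
# The default rate of 1.5% is the one the Canadian guidelines recommend.

def present_value(yearly_amounts, rate=0.015):
    """Present value of a stream of yearly amounts, with year 0 being now."""
    return sum(a / (1 + rate) ** t for t, a in enumerate(yearly_amounts))

# Hypothetical: a therapy costing $10,000 a year for 3 years (years 0, 1, 2)
# is worth slightly less than $30,000 in present-value terms.
print(round(present_value([10_000, 10_000, 10_000]), 2))
```

The same function applies unchanged to a stream of yearly QALY gains, since the guidelines discount costs and benefits at the same rate.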
Kirsten Patrick: Yeah, it certainly sounds like it's a fast developing field, enabled, as you seem to say, by big data, and more widely available data and data sharing. So I mean, I'm curious, are there plans or is there currently any activity to use AI in cost-effectiveness analysis?
Murray Krahn: There probably is. I can tell you what we are doing. One of the challenges we have in using big data for health economic models is that big data, at least in the Canadian context, is typically not super high-fidelity data. We can see, broadly speaking, when patients die, when they get rehospitalized, and so on, but we often don't have a lot of granular clinical detail, because the clinical data sets live elsewhere and, in the Canadian context, are usually not that well integrated with provincial administrative claims. So one of the potential applications is using AI to try to figure out where patients are. These health economic models typically model individual patient trajectories through a variety of what are called health states, and the challenge with big data is that you often don't know where the patients are; you know what happens to them ultimately, but tracking their progress through a variety of health states is often a bit difficult. So we are exploring machine learning methods for assigning patients to specific health states. There are probably many other applications; that's the only one I'm aware of at the moment.
Kirsten Patrick: So it seems like your article is sort of a case to convince skeptics that economic analysis is real and happening, is going to make a difference in health care, and is more scientific than it used to be. How would you like to see cost-effectiveness analysis embraced as the science of value, and positively affect decision-making in our health care system, in future?
Murray Krahn: Well, first I'll tell you where I think it is used, and then I'll tell you where I think it could potentially see broader application. Cost-effectiveness has been used in Canada since about 1992 to inform decisions about drug reimbursement, so we were one of the earliest jurisdictions internationally to adopt this. We have been using it for a fair amount of time to decide which drugs get on provincial drug formularies, and over time we've actually started to take this stuff quite seriously. It's also being used at provincial and national levels to inform decisions about non-drug technologies: devices, IT platforms, and that kind of thing. It's a little more difficult for non-drug interventions, because there isn't a single purchaser; the purchasers are distributed quite widely across the health system. But drugs and devices represent a relatively small fraction of the overall health spend in Canada. Most of the money we spend has to do with making routine clinical decisions about health care: deciding when to hospitalize patients, when to send them home, when to send them to the ICU, when to send them to long-term care, paying for things like routine blood tests. We spend a lot of money in Canada doing things that fall outside the remit of any kind of cost-effectiveness evaluation. My sense is that we could improve the scientific quality of decision-making in a couple of areas. One is clinical practice guidelines. Clinical practice guidelines, historically, have been rooted in evidence-based medicine, and they have primarily been driven by concerns about the benefits and potential harms of therapy, which is probably what they should be doing.
But with the possible exception of the UK, in most places clinical practice guidelines are agnostic about the resource implications of the health interventions they recommend. If a clinical practice guideline recommends some expensive new drug, there's no way, except the informal judgment of the people putting the guidelines together, to actually think about that in a serious way. Some of the guideline methodologies, like GRADE, do have a very limited consideration of costs, but they don't consider costs in relation to benefits; it's very much a post hoc, second-order kind of consideration. Cost-effectiveness analysis is quite a powerful and, as I've argued, increasingly well established method for thinking about benefits and costs at the same time. I think clinical practice guideline developers are gradually opening the door to considering broader classes of evidence. People are now thinking about patient preferences, patient values and patient experiences as important inputs to clinical practice guidelines. I think that's largely a good, adaptive thing, and I think thinking about resources is probably the next frontier for clinical practice guidelines. There are challenges there too: doing a cost-effectiveness analysis is quite resource intensive, and clinical practice guidelines often consider many decisions within them, so it's not totally straightforward. Where would the resources for this come from? Those are the kinds of questions one might ask. But the idea that clinical practice guidelines should not consider resources at all seems to me a bit difficult to defend, since a lot of clinical practice these days is actually driven by clinical practice guidelines. So I see that as the next frontier.
Public health is obviously another area of potential growth. In the US, the CDC is one of the few American public institutions that actually does consider cost-effectiveness analysis. Here in Canada, public health agencies are increasingly using cost-effectiveness analysis to inform decisions about vaccine recommendations, and I think that could and should grow. I just see this as a useful way of broadening the scope of policy-level decision-making in areas where resource implications are quite important.
Kirsten Patrick: I'm curious, Murray, how is your team composed when you work on a big cost-effectiveness analysis? Do you work alongside epidemiologists and policymakers to make these decisions all together, the way you're describing? Or is it probably not really there yet?
Murray Krahn: Well, it totally depends on who is doing the analysis and for what purpose. For a cost-effectiveness analysis done by CADTH or OHTAC, for example, there would usually be a health economist and an epidemiologist assigned to the same question, and the process of evidence review and the construction of a health economic model would proceed in parallel. Ideally, the teams would talk to each other about the assumptions they're making and how the evidence is being generated, because ultimately the evidence generated by the epidemiologist has to go into the cost-effectiveness model. The other component is that it's essential to have clinical expertise: to look at the way you've structured the problem, first of all, to make sure you've got the problem right, and to look at the model to make sure the way you are conceptualizing the natural history of the disease is correct. So you ideally need a group of experts, both on the policy side, to make sure you're understanding the policy question correctly, and on the clinical side, to make sure you have a good and correct understanding of the disease and that your model faithfully reproduces how experts think about it. You need all of those things: policy input, clinical expertise, someone who knows how to build a model, and an epidemiologist to review. And if you do a really super fancy model, you might need some biostatisticians as well.
Or, depending on the extent to which you use administrative data, you may do some additional analyses, working with health services researchers to produce output that also informs your model. But the standard, plain vanilla cost-effectiveness analysis done by a health technology assessment agency in Canada would typically involve those things: an epidemiologist, an economist, and then a variety of expert policy and clinical advisors.
Kirsten Patrick: Thank you for this fascinating discussion. I've really enjoyed talking to you today.
Murray Krahn: You're very welcome. It's been a pleasure.
Kirsten Patrick: I've been speaking with Dr. Murray Krahn, a general internist at the University Health Network in Toronto and Canada Research Chair in Health Technology Assessment. He leads THETA, a research group devoted to health economics and technology assessment. To read the analysis article he co-authored, visit cmaj.ca. Also, don't forget to subscribe to CMAJ Podcasts on SoundCloud or your podcast app, and let us know how we're doing by leaving a rating. I'm Dr. Kirsten Patrick, deputy editor of CMAJ. Thank you for listening.