
Editorials

Industry sponsored bias in cost effectiveness analyses

BMJ 2010; 341 doi: https://doi.org/10.1136/bmj.c5350 (Published 13 October 2010) Cite this as: BMJ 2010;341:c5350
  1. Ava John-Baptiste, fellow1,
  2. Chaim Bell, clinician scientist2
  1. 1Women’s College Research Institute, Women’s College Hospital, Toronto, ON, Canada
  2. 2St Michael’s Hospital, Toronto, ON, Canada M5B 1W8
  1. Correspondence to: bellc@smh.ca

Evidence is growing that the involvement of industry in cost effectiveness analyses can affect the findings. A systematic review of published cost-utility analyses found that industry funded studies were more than twice as likely to report a cost-utility ratio below $20 000 (£12 700; €14 850) per quality adjusted life year (QALY) as studies sponsored by non-industry sources.1 The National Institute for Health and Clinical Excellence (NICE) in the United Kingdom found that cost effectiveness analyses submitted by manufacturers produced significantly lower ratios than those derived by assessors at academic centres.2
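For readers less familiar with the metric, the expression below is a minimal sketch of how such a cost-utility ratio is calculated, comparing a new treatment with its comparator; the costs and QALY values shown are hypothetical, chosen only to illustrate a ratio that falls below the $20 000 per QALY figure cited above.

\[
\text{Cost-utility ratio} = \frac{C_{\text{new}} - C_{\text{comparator}}}{\text{QALY}_{\text{new}} - \text{QALY}_{\text{comparator}}} = \frac{\$30\,000 - \$21\,000}{4.1 - 3.5} = \$15\,000 \text{ per QALY}
\]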

A study recently published in the International Journal of Technology Assessment in Health Care builds on these findings.3 Garattini and colleagues assessed the relation between industry sponsorship and the findings of pharmacoeconomic analyses performed on single drug treatments from 2004 to 2009. About 95% of the 138 analyses sponsored by industry reported favourable results, compared with only 50% of those without industry sponsorship. Favourable findings were also more likely when an author was affiliated with a consultancy. Some of the investigators’ methodological choices are unclear (for example, how affiliation with a consultancy was ascertained), and the potential explanations for the findings receive only limited discussion. Nevertheless, the results agree with other evidence indicating that industry sponsored pharmacoeconomic analyses should be interpreted with caution.

Although the overall findings are not surprising, they do not imply nefarious practices on the part of industry. Cost effectiveness analysis plays a strategic role early in the product development cycle by alerting companies to economically unattractive compounds and focusing investment on potentially cost effective products.4 5 Therefore, favourable findings from industry sponsored analyses may result, at least in part, from internal selection of products with favourable cost effectiveness profiles. Also, early economic analyses often inform the design of clinical trials, and subsequent published analyses may have favourable findings because the choice of comparator was optimised.5 6 It could be argued that these analytical strategies are pragmatic responses to incentives for producing drugs with favourable cost effectiveness profiles.

Many government agencies that review economic evidence when deciding whether to list new drugs on public formularies and when setting reimbursement policies are already sceptical of industry sponsored studies. Organisations in Canada and the UK commission independent expert review of industry sponsored analyses and require that manufacturers submit the actual models for scrutiny. NICE commissions an additional assessment from independent academic affiliates. This model of independent review is widely regarded as optimal. However, jurisdictions with limited resources find it difficult to establish and maintain high level independent review. For them, the implications of bias in industry sponsored analyses are more serious, because policy makers may have to rely on the published literature or on analyses submitted by the manufacturer, with limited capacity for extensive review.

Perhaps the most novel concept proposed by Garattini and colleagues is that health technology assessment agencies with dual roles as consultants for industry and government may produce biased analyses. It is unclear to what extent consulting for industry compromises an agency’s ability to remain unbiased, but the question merits further investigation. Governments could reasonably require health technology assessment agencies to disclose the nature and extent of industry involvement in their activities on a regular basis.

Journals can play an important role in minimising bias in published economic analyses. Disclosure of financial conflicts of interest is already required, but disclosure alone cannot eliminate bias, even when the most stringent limits are applied.7 Alternatively, journals could increase the rigour of peer review by requiring that authors submit models along with their manuscript. This may be challenging because reviewers with sufficient technical expertise are scarce. Journals can also encourage greater transparency by requiring authors to provide appendices with technical details to be published on the web to overcome space limitations.

Guidelines for the conduct of cost effectiveness analyses, which currently focus on technical matters, could be expanded to give more direction on process.8 9 For example, an independent advisory board could be required for industry sponsored studies at all stages of the analysis. As with the requirements for publication of clinical trials, registration of cost effectiveness analyses could serve the dual purpose of enforcing expanded process guidelines and counteracting publication bias.10 Standardising methods and promoting transparency in industry sponsored cost effectiveness analyses are attractive in theory, but it is unclear who would be responsible for overseeing such a policy.

Cost effectiveness analyses are helpful for those making health policy decisions. However, it has become clear that industry involvement in this discipline produces biased results for a variety of reasons. Scientific journals, insurers, and, most importantly, the scientists themselves must take clear and decisive steps to bring more rigour and transparency to the analytical process. This may help uncouple industry’s strategic use of cost effectiveness analysis early in product development from its later use in policy evaluation. Until then, government agencies and other healthcare funders will continue to view industry sponsored analyses with mistrust.

Notes

Cite this as: BMJ 2010;341:c5350

Footnotes

  • Competing interests: All authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; both authors have consulted for the Ontario government and federal government of Canada on issues of drug coverage and health technology assessment.

  • Provenance and peer review: Commissioned; not externally peer reviewed.

References