
Research

Case reports of suspected adverse drug reactions—systematic literature survey of follow-up

BMJ 2006; 332 doi: https://doi.org/10.1136/bmj.38701.399942.63 (Published 09 February 2006) Cite this as: BMJ 2006;332:335
  1. Yoon Kong Loke, senior lecturer in clinical pharmacology (y.loke@uea.ac.uk)1,
  2. Deirdre Price, research assistant2,
  3. Sheena Derry, research assistant2,
  4. Jeffrey K Aronson, reader in clinical pharmacology2
  1. School of Medicine, Health Policy, and Practice, University of East Anglia, Norwich NR4 7TJ
  2. Department of Clinical Pharmacology, University of Oxford, Radcliffe Infirmary, Oxford OX2 6HE
  Correspondence to: Y Loke
  • Accepted 4 October 2005

Abstract

Objective To determine whether anecdotal reports of suspected adverse drug reactions are valuable early warning signals.

Design Systematic literature survey.

Data sources We evaluated all case reports of adverse drug reactions published in 1997 in five medical journals. Reports were excluded if the adverse reaction had previously been described in earlier publications and was already listed in the product information of the drug reference source (the British National Formulary (BNF) or the Medicines Compendium). We used the Web of Knowledge Citation Index and Medline for 2003 to identify follow-up studies.

Main outcome measures Primary: the number of suspected adverse reactions subjected to formal validation studies and the findings of these studies. Secondary: the number of instances in which the warning from the case report was incorporated into the product information.

Results We evaluated 63 suspected adverse reactions and found that most (52/63, 83%) had not yet been subjected to further detailed evaluation. Data from controlled studies that supported the postulated link between the drug and the adverse event were available in only three cases. Of the 48 agents listed in the drug reference sources, details of the suspected reaction were subsequently added to the Medicines Compendium in 15 instances, and to the BNF in seven instances. In each case, only one of the added reactions had been confirmed by a formal study.

Conclusions Published case reports of suspected adverse reactions are of limited value as suspicions are seldom subjected to confirmatory investigation. Furthermore, these alerts are not incorporated into drug reference sources in a systematic manner.

Introduction

Case reports of suspected adverse drug reactions are common in the medical literature—for example, more than a thousand anecdotes were cited in the Side Effects of Drugs Annual (2000) in one year alone.1 While information on drug safety is of unquestionable importance, the profusion of case reports and the marked variation in their quality2 3 create a challenging conundrum. Should physicians and patients alter their treatment plans in response to every fresh report of a suspected adverse reaction?

In this instance, opinion is divided. Hoffman, in his role as the editor of the Western Journal of Medicine, argues that case reports are of extremely limited value and that it would be foolhardy to translate the information into clinical practice without stronger evidence.4 The derailment of the measles, mumps, and rubella immunisation programme by uncorroborated anecdotes lends weight to Hoffman's view that such reports may do “more harm than good.” In his defence of case reports, however, Vandenbroucke argues that these anecdotes are vital early warnings that raise suspicions and spark further confirmatory investigations.5 Research carried out by Venning in the 1980s6 is sometimes cited as an example of the “amazingly good” predictive accuracy of case reports,5 in that “more than half of suspected adverse drug reactions were confirmed by subsequent, more detailed research.”7 Venning's findings, however, have not been replicated, and authorities in evidence based medicine have expressed disappointment at this lack of further research.7

How then can we be reassured that case reports of adverse drug reactions are genuinely valuable information resources? For a start, we need to be certain that the suspicions raised in such anecdotes are consistently validated by further research. Moreover, an early warning alert is of limited value if the information comes to the attention of only the restricted readership of learned medical journals. Are the safety concerns from such reports communicated to clinicians and patients via the commonly used drug information sources?

Methods

We retrieved published case reports of suspected adverse drug reactions and established whether each case report had been followed by more definitive studies. We determined the results of any follow-up studies and whether the warning signal from the report had been incorporated into subsequent versions of published drug information.

Sources of case reports

From our previous work we were aware that case reports of adverse drug reactions in general medicine, neurology, and psychiatry are often cited in the Side Effects of Drugs Annual.1 We therefore chose four high impact journals that regularly publish case reports on these specific issues: two general medical journals (BMJ and Lancet) and two specialist journals (Neurology and American Journal of Psychiatry). To achieve greater diversity, we also included a haematology journal that publishes articles on adverse drug reactions (American Journal of Hematology) as there are fewer anecdotal reports of adverse reactions in haematology.1

We examined all case reports of adverse drug reactions from the selected journals over one year. We chose 1997 because we estimated that a lapse of five years or more from the date of publication would allow sufficient time for the impact, if any, of these reports to have filtered through to generate additional detailed studies or updating of product information and advice, or both.

Identification and selection of case reports

We searched Medline using the following search string: (“journal title” in SO) and (case-report in TG) and (py = 1997). We evaluated the titles and abstracts (when available) of the retrieved articles and excluded those that were clearly not case reports of adverse drug reactions. We then checked the full texts of the remaining articles for relevance based on previously published criteria (the stated purpose of the article was to provide a case report of a suspected adverse drug reaction and the format was consistent with that of a suspected adverse drug reaction report to the Committee on Safety of Medicines, United Kingdom (yellow card system)).8
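
Readers who wish to rerun this retrieval step today could do so against PubMed rather than the original Medline interface. The sketch below is only an approximation: the [Journal], [Publication Type], and [pdat] field tags stand in for the original SO/TG/py syntax, the journal name strings and contact email are placeholders, and the result counts will not exactly reproduce the study's 696 records.

```python
# Approximate present-day PubMed reconstruction of the 1997 case-report search.
# Field tags and journal name strings are assumptions, not the study's exact syntax.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI requires a contact address

JOURNALS = [
    "BMJ",
    "Lancet",
    "Neurology",
    "The American journal of psychiatry",
    "American journal of hematology",
]

def search_case_reports(journal, year="1997", retmax=500):
    """Return PubMed IDs of case reports published in the given journal and year."""
    query = f'"{journal}"[Journal] AND case reports[Publication Type] AND {year}[pdat]'
    handle = Entrez.esearch(db="pubmed", term=query, retmax=retmax)
    record = Entrez.read(handle)
    handle.close()
    return record["IdList"]

for journal in JOURNALS:
    ids = search_case_reports(journal)
    print(f"{journal}: {len(ids)} candidate case reports")
```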

We then compiled a dataset of “suspected” adverse reactions by excluding those cases for which previous reports of the adverse reaction were found in a Medline search (using the adverse event term and drug name) and the adverse reaction was already listed in the product datasheet of the 1996-7 Medicines Compendium9 or the September 1996 issue of the British National Formulary (BNF).10
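
To make the selection logic explicit, the sketch below encodes this exclusion rule over a hypothetical record structure; the field names are invented for illustration and are not taken from the study's dataset.

```python
# Hypothetical record structure illustrating the exclusion step described above.
from dataclasses import dataclass

@dataclass
class CandidateReport:
    drug: str
    reaction: str
    previously_reported: bool         # earlier reports found on Medline (drug + event term)
    listed_in_compendium_1996: bool   # 1996-7 Medicines Compendium datasheet
    listed_in_bnf_1996: bool          # BNF No 32 (September 1996)

def is_suspected_new_reaction(report: CandidateReport) -> bool:
    """A report is excluded only when the reaction had both been described
    before on Medline and was already listed in the Compendium or the BNF."""
    already_listed = report.listed_in_compendium_1996 or report.listed_in_bnf_1996
    excluded = report.previously_reported and already_listed
    return not excluded
```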

Primary outcome measure—validation of the suspected adverse drug reaction

To determine whether individual case reports fulfilled their role as early warning signals that stimulated more detailed studies, we used two methods to establish whether such additional validation studies had been carried out.

Firstly, we carried out a “cited reference” search of the Web of Knowledge Citation Index (April 2003). One reviewer (YKL) checked to see if each case report had been cited by another published article. We believed that a follow-up study to investigate newly reported adverse drug reactions would usually cite the original reports in its reference list.

We examined the citing articles to determine which studies were carried out for the specific purpose of validating the suspected adverse reaction. Accepted designs for validation studies included studies in which the rate of the suspected adverse reaction was specifically assessed (ranging from observational studies on cohorts of patients to controlled clinical trials) and hypothesis testing research into the role of a putative mechanism in the development of the adverse effect (including in vitro laboratory studies and in vivo tests in patients exposed to the drug).

The second method allowed for the possibility that there might be studies in which the suspected adverse reaction had been evaluated without the original case report being cited. We checked Medline 1998-2003 using the adverse drug reaction term and drug name to identify any additional validation studies.
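
A rough sketch of this supplementary, date-limited search is shown below. The query construction and the pairing of drug name with adverse event term are assumptions for illustration, not the authors' exact strategy, and the drug-event pair used in the example is one of those discussed later in the paper.

```python
# Date-limited PubMed search for follow-up studies published 1998-2003.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder contact address

def find_followup_studies(drug, adverse_event, mindate="1998", maxdate="2003"):
    """Return PubMed IDs of articles mentioning both the drug and the adverse
    event, restricted to the five-year follow-up window."""
    query = f"{drug} AND {adverse_event}"
    handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                            mindate=mindate, maxdate=maxdate, retmax=200)
    record = Entrez.read(handle)
    handle.close()
    return record["IdList"]

# Example: the vigabatrin report discussed in the results
print(find_followup_studies("vigabatrin", "visual field defect"))
```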

Secondary outcome measure

We also wanted to determine whether the publication of an adverse drug reaction case report in a learned journal contributes to the information used by clinicians and patients. We compared all product listings in the subsequent years against those of 1996 to determine whether the suspected adverse effect had been added to the product information after the publication of the anecdotal report. The drug reference sources were issues of the Medicines Compendium published from 1996 to 2002 and its electronic version from 20039 and issues of the BNF No 32 (September 1996) to No 45 (September 2003).10
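
The comparison itself was done by hand against successive printed and electronic editions, but the underlying logic can be summarised as a simple set difference. In the sketch below the edition data are hypothetical sets of adverse-effect terms per drug; only the acarbose-hepatotoxicity example is taken from the results reported later.

```python
# Hypothetical representation of product information: for each drug, the set of
# adverse effects listed in a given edition of a reference source.
baseline_1996 = {"acarbose": {"flatulence", "diarrhoea"}}
edition_2003 = {"acarbose": {"flatulence", "diarrhoea", "hepatotoxicity"}}

def newly_added_effects(baseline, later):
    """Adverse effects listed in the later edition but absent from the 1996 baseline."""
    return {drug: later[drug] - baseline.get(drug, set())
            for drug in later
            if later[drug] - baseline.get(drug, set())}

print(newly_added_effects(baseline_1996, edition_2003))
# {'acarbose': {'hepatotoxicity'}}
```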

As unlicensed and experimental agents are not listed in either of these two reference sources, we limited the analysis of our secondary outcome measure to drugs with a valid UK product licence from 1996 onwards. We also excluded adverse events that had already been listed in the Medicines Compendium and the BNF before the publication of the case report.

Results

We identified 696 case reports from the Medline search. Subsequent detailed evaluation showed that 63 met the criteria for inclusion as reports of suspected new adverse drug reactions (figure).w1-w63

Figure 1

Flow chart of selection of reports and assessment of outcomes

Primary outcome measure—studies validating adverse drug reaction reports

From the citation index, we found that 56 of the 63 case reports had been cited at least once. However, for only nine of the 56 reports were the citing articles considered to fulfil the criteria for validation studies.

Table 1 gives details of the nine reports and a synopsis of the further research. There were only three instances in which the follow-up studies provided controlled data that supported the hypothesised link between the drug and the adverse event: clarithromycin-disopyramide interaction; indinavir and lipomatosis; and vigabatrin and visual field defects. In contrast, detailed studies on acarbose repeatedly failed to confirm a risk of hepatotoxicity. This leaves five suspected adverse reactions for which the validation studies did not provide any controlled data on which to base conclusions.

Table 1

Case reports of suspected adverse reactions that had been subjected to further evaluation: case reports that had been cited by validation studies (identified through the Web of Knowledge Citation Index)


By searching Medline after 1997, we identified validation studies that evaluated the postulated link between drug and adverse event in two of the 1997 case reports (table 2). The 1997 case reports, however, were not cited by these validation studies, and it is possible that the later investigations may have been instigated by other factors.

Table 2

Case reports of suspected adverse reactions that had been subjected to further evaluation: two case reports were subjects of validation studies (from Medline search), although original case report was not cited


It is worth noting that all of the 11 suspected adverse reactions that had been further evaluated were published in general medical journals—that is, the BMJ and the Lancet.
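
The headline figures reported above fit together as a simple tally; the short check below uses only numbers already given in the results and abstract, not additional data.

```python
# Consistency check on the reported counts (all figures taken from the text).
total_reports = 63
validated_via_citation_index = 9   # Table 1
validated_via_medline_only = 2     # Table 2

followed_up = validated_via_citation_index + validated_via_medline_only  # 11
not_followed_up = total_reports - followed_up                            # 52
print(f"{not_followed_up}/{total_reports} = {not_followed_up / total_reports:.0%}")
# 52/63 = 83%, the proportion reported in the abstract
```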

Secondary outcome measure—changes in published product information

We evaluated 48 datasheets and monographs to see whether they had been updated with the information from the case report (figure). By October 2003, 15 product data sheets in the Medicines Compendium had been amended to include details of the suspected adverse reaction. Only two of the 15 adverse drug reactions had been subjected to follow-up evaluation. By September 2003 (No 45) seven monographs in the BNF had been revised. We identified follow-up studies for three of these adverse drug reactions.

There were five products for which the information on adverse effects had been revised in both the Medicines Compendium and the BNF. Of these five, we found only two that had follow-up studies. In one instance, the clarithromycin-disopyramide interaction was supported by controlled data from a laboratory study. In contrast, the link between acarbose and liver toxicity was not confirmed. Nevertheless, both reference sources have added hepatotoxicity to the list of adverse effects for acarbose, even though the published evidence suggests otherwise.

Discussion

Case reports of suspected adverse reactions are common in medical journals, but the value of such anecdotes remains far from certain. From a broader perspective, anecdotal reports should serve to initiate further research.5 Anecdotes need to be confirmed or refuted, rather than being lost or adopted into medical mythology without additional evaluation. In our study, however, 83% of reports of suspected new adverse drug reactions from 1997 had not been subjected to any further validation. This finding contrasts sharply with the findings of Venning, who thought that only 26% of new adverse reactions had been left unverified.6 This discrepancy merits closer scrutiny.

Venning looked at 47 case reports of adverse drug reactions in four general medical journals and applied various criteria (based on site of reaction, time course, pharmacological plausibility, and effects of repeated administration) to assess the validity of the postulated link between drug and adverse event. He concluded that 28 of the 47 anecdotes were “convincing” and needed no further study.6 Meyboom and colleagues challenged the reliability of such an approach and pointed out that this method cannot reliably prove a causal link.11 Studies have shown that assessors of adverse events were often unable to reach complete agreement with each other when judging the strength of a causal link and determining the culprit drug.12 13 To avoid these pitfalls, we stipulated that the suspected reaction needed to have been evaluated by a more formal study.

Venning's initial evaluation left 19 reports of adverse reactions unconfirmed, and he proceeded to search the subsequent literature and reference sources (published papers, regulatory authority databases, and textbooks of adverse drug reactions) for additional information about any of these anecdotes. From this, he judged that seven of the 19 had subsequently been “satisfactorily verified” and were “generally accepted.”6 This method is of uncertain validity as Venning provided no details about whether his decisions were based on further case reports, expert opinion in textbooks, or formal evaluation of safety. In contrast, we defined “validation” studies explicitly.

We are also concerned about the haphazard manner in which adverse reaction reports are transmitted into product information, leaving clinicians and patients poorly informed by existing reference sources. We found that fewer than half of the anecdotal reports led to updates. This may have been because of the lack of data confirming the link between the drug and the adverse event. Manufacturers might justifiably argue that in the absence of a more definitive study, they are right not to include the adverse drug reaction in the datasheet. On the other hand, in some instances (such as acarbose) both the compendium and BNF entries were altered, despite the lack of evidence in subsequent studies.

The hit and miss nature of the problem is further illustrated by our finding that more than twice as many product listings were altered in the Medicines Compendium as in the BNF. The editorial content of the BNF is the responsibility of a joint formulary committee, whereas pharmaceutical companies work together with regulatory authorities to draw up product information for the compendium. How can prescribers and patients negotiate a path between benefit and harm when the updating of product information does not conform to any clear pattern of accumulation of evidence?

Limitations of our study

The anecdotal reports of adverse drug reactions that we analysed may not be a representative sample as we studied only one year and the journals were not randomly selected. Most of the case reports, however, came from journals with high impact factors, giving them a higher profile and thus the greatest chance of being followed up. In any case, suspected adverse reactions that have a major impact on decisions about treatment14 should be investigated, irrespective of the journal or year of publication.

It could also be argued that our follow-up period of five years was too short, especially compared with Venning's analysis, which encompassed 18 years. However, Venning included only 19 reports of adverse drug reactions in this long term search, and he had already classified 28 reactions without any further checking. Moreover, we consider that five years is sufficient time for further studies to be carried out and the results published, especially if the adverse reactions had been considered important enough to be reported in a high impact medical journal. Indeed, the single case report on vigabatrin that we identified stimulated 34 detailed studies, while the report on hepatitis induced by indinavir stimulated 15 published studies, all within five years.

What is already known on this topic

Anecdotal reports of suspected adverse drug reactions are common in the medical literature and are thought to have a valuable role in providing early warning alerts

Some evidence shows that these case reports have good predictive accuracy and that the suspicions are often confirmed to be valid on further evaluation

What this study adds

Anecdotal reports are of limited value as the suspected reactions are seldom subjected to confirmatory investigation

The warning signals from these case reports are not systematically incorporated into commonly used drug information sources

We recognise that relevant validation studies may have been carried out that we did not identify. Studies performed by pharmaceutical companies or regulatory authorities but not published form one category of possible omissions. Alternatively, if a validation study did not cite the original case report, we would not have found it through the citation index search. We took steps to address this by conducting a parallel Medline search, but we are aware that computerised searches for adverse effects do not pick up all relevant articles.15

Conclusions

Although published reports of suspected adverse drug reactions have their uses,3 they are of limited value because suspicions are seldom investigated further. Moreover, the alerts are not consistently incorporated into drug reference sources, and the nature of the information available to physicians and patients is therefore not readily interpretable.

It seems that Venning's call for a “systematic policy of investigating first alerts” has gone unheeded in the past 20 years. Who is responsible for verifying these reactions? Is it regulatory authorities, drug companies, or independent research teams? Stricker and Psaty argue that firm leadership from regulatory authorities is needed and point out that drug companies seldom have any economic incentives to investigate such problems.16 Regulatory pressure and our litigious society, however, may drive companies to devote resources to the formal investigation of suspected reactions. Stricker and Psaty also recommend that the current emphasis on spontaneous reporting be shifted towards hypothesis testing studies. Given the expense of such studies and the relative paucity of funding, we propose that regulatory authorities and pharmaceutical companies should jointly fund independent academic research into suspected adverse drug reactions. Awards of compensation claims against manufacturers who fail to investigate anecdotal reports of adverse reactions to their drugs might act as an additional incentive.

We also recommend the development of a consistent and transparent policy on how information from case reports might (or might not) be incorporated into published drug information, such as in Summaries of Product Characteristics, the BNF, and the Physicians' Desk Reference, and how information should be identified as being anecdotal or formally verified. In the meantime, readers of case reports will have to live with the challenging task of understanding the balance between known beneficial effects and unsubstantiated harms.14

Footnotes

  • References to the 63 included case reports (w1-w63) and nine validation studies (w64-w72) are on bmj.com

  • Contributors YKL developed the original idea and the protocol, abstracted and analysed data, wrote the manuscript, and is guarantor. DP and SD contributed to the development of the protocol, abstracted data, and prepared the manuscript. JKA developed the protocol and helped with the manuscript.

  • Funding SD was supported by a grant from the Sir Jules Thorne Trust.

  • Competing interests None declared.

  • Ethical approval Not required.

References
