Availability of large-scale evidence on specific harms from systematic reviews of randomized trials

https://doi.org/10.1016/j.amjmed.2004.04.026

Abstract

Purpose

To assess how frequently systematic reviews of randomized controlled trials convey large-scale evidence on specific, well-defined adverse events.

Methods

We searched the Cochrane Database of Systematic Reviews for reviews containing quantitative data on specific, well-defined harms for at least 4000 randomized subjects, the minimum sample required for adequate power to detect an adverse event due to an intervention in 1% of subjects. Main outcome measures included the number of reviews with eligible large-scale data on adverse events, the number of ineligible reviews, and the magnitude of recorded harms (absolute risk, relative risk) based on large-scale evidence.

Results

Of 1727 reviews, 138 included evidence on ≥4000 subjects. Only 25 (18%) had eligible data on adverse events, while 77 had no harms data, and 36 had data on harms that were nonspecific or pertained to <4000 subjects. Of 66 specific adverse events for which there were adequate data in the 25 eligible reviews, 25 showed statistically significant differences between comparison arms; most pertained to serious or severe adverse events and absolute risk differences <4%. In 29% (9/31) of a sample of large trials in reviews with poor reporting of harms, specific harms were presented adequately in the trial reports but were not included in the systematic reviews.

Conclusion

Systematic reviews can convey useful large-scale information on adverse events. Given the importance and the difficulty of studying harms, the reporting of adverse effects must be improved in both randomized trials and systematic reviews.

Section snippets

Methods

We selected systematic reviews of randomized controlled trials for which there were quantitative data on the occurrence of at least one very specific adverse event in each study and per study arm for ≥4000 subjects, and for which a formal meta-analysis of these data had been performed. The cutoff of 4000 subjects was decided a priori. With 4000 subjects assigned randomly and approximately equally to two intervention arms, there is about 80% power to detect an adverse event due to an intervention in 1% of subjects.
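The power reasoning behind the 4000-subject cutoff can be sketched as a standard two-proportion power calculation. The sketch below is an illustration under stated assumptions, not the paper's own computation: it posits a baseline adverse-event rate of 1% in the control arm and 2% in the intervention arm (a 1% absolute excess risk, one plausible reading of "an adverse event due to an intervention in 1% of subjects") and uses the usual normal approximation with a two-sided α of 0.05.

```python
# Minimal sketch: normal-approximation power of a two-sided
# two-proportion z-test, applied to 4000 subjects split equally.
from statistics import NormalDist

def two_proportion_power(p0: float, p1: float, n_per_arm: int,
                         alpha: float = 0.05) -> float:
    """Approximate power for detecting a difference between two event rates."""
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)                # critical value
    p_bar = (p0 + p1) / 2.0                                    # pooled rate (equal arms)
    se0 = (2.0 * p_bar * (1.0 - p_bar) / n_per_arm) ** 0.5     # SE under H0
    se1 = (p0 * (1.0 - p0) / n_per_arm
           + p1 * (1.0 - p1) / n_per_arm) ** 0.5               # SE under H1
    return NormalDist().cdf((abs(p1 - p0) - z * se0) / se1)

# 4000 subjects assigned approximately equally: 2000 per arm.
# The 1%-vs-2% scenario is an illustrative assumption; the snippet
# does not state the exact baseline rate used.
power = two_proportion_power(p0=0.01, p1=0.02, n_per_arm=2000)
print(f"power = {power:.2f}")   # roughly 0.74 under this approximation
```

Under this normal approximation the power comes out close to, though a little below, the 80% figure quoted in the text; exact (binomial) methods or a slightly different assumed baseline rate shift the result by a few percentage points.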

Results

Of 1754 systematic reviews, 27 had been withdrawn and 1589 did not have at least 4000 randomized subjects. There were 138 systematic reviews with a sample size of ≥4000 for at least one quantitative comparison (Table 1). Of those, only 25 (18%) provided eligible data on specific harms.

Among reviews with at least 4000 subjects but no eligible data on harms, most did not provide separate quantitative data on any adverse events (specific or not), but about a third provided some information on …

Discussion

Systematic reviews of randomized controlled trials rarely provide large-scale evidence on specific, well-defined adverse events associated with the tested interventions. Only 25 systematic reviews in the entire Cochrane Database of Systematic Reviews contained such evidence on at least one type of harm. More than three quarters of systematic reviews with at least 4000 subjects in randomized trials lacked such information. The Cochrane Library is known for the high quality of its reviews (5, 6) …

Acknowledgment

We are thankful to Professor Jan P. Vandenbroucke, Department of Clinical Epidemiology, University of Leiden Medical School, Leiden, The Netherlands, for discussing our study protocol and for critical reading and suggestions for the discussion in the manuscript.


Cited by (60)

  • Adverse Effects of Psychotropic Medications: A Call to Action

    2016, Psychiatric Clinics of North America
    Citation excerpt:

    Sometimes investigators or sponsors may change how adverse events are assessed or reported (either the definition of the adverse event or the method of assessment) from commonly used approaches. This may alter the conclusions; e.g., the conclusion using an idiosyncratic way of assessing or reporting adverse effects may be that one medication has a better adverse event profile than another [17]. There are 4 general approaches to identify adverse effects, but each of them has significant limitations.

  • Basic study design influences the results of orthodontic clinical investigations

    2015, Journal of Clinical Epidemiology
    Citation excerpt:

    Estimations of a treatment's adverse effects may be prone to different biases than its efficacy [32]. RCTs may not be large enough or may not have sufficiently long follow-up to identify some long-term harms [33,41–44]. Moreover, generalizability of the RCTs' results may be limited for various reasons [45]; for example, high-risk patients are often excluded from trials [32,33,46].

  • Side effects are incompletely reported among systematic reviews in gastroenterology

    2015, Journal of Clinical Epidemiology
    Citation excerpt:

    There is no standardized validated tool to assess harms reporting in systematic reviews. We pilot tested a number of previously published instruments [1–3,5,20,21] and chose and adapted an extension of the Consolidated Standards of Reporting Trials statement developed specifically for harms reporting [17] as this checklist captures many aspects (such as clearly defining adverse events, providing numerical data on incidence, and recording methods of data collection [20,22]) (Table 1). One researcher (S.E.M.) used a standardized spreadsheet to extract study characteristics as well as quantitative and qualitative parameters of harms reporting.
