Original Article
Specific instructions for estimating unclearly reported blinding status in randomized trials were reliable and valid

https://doi.org/10.1016/j.jclinepi.2011.04.015

Abstract

Objective

To test the reliability and validity of specific instructions to classify blinding, when unclearly reported in randomized trials, as “probably done” or “probably not done.”

Study Design and Setting

We assessed blinding of patients, health care providers, data collectors, outcome adjudicators, and data analysts in 233 randomized trials in duplicate and independently using detailed instructions. The response options were “definitely yes,” “probably yes,” “probably no,” and “definitely no.” We contacted authors for data verification (46% response). For each of the five questions, we assessed reliability by calculating the agreement between the two reviewers and validity by calculating the agreement between reviewers’ consensus and verified data.
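The reliability and validity measures described above — chance-corrected agreement between two reviewers (kappa) and raw agreement with the author-verified record — can be illustrated with a minimal sketch. The ratings below are hypothetical examples using the study's four response options, not the study's data:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    # Expected chance agreement from each rater's marginal category frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical duplicate ratings on the four response options (not real data).
rev1 = ["definitely yes", "probably yes", "probably no",
        "probably yes", "definitely no", "probably no"]
rev2 = ["definitely yes", "probably yes", "probably no",
        "probably no", "definitely no", "probably no"]

raw_agreement = sum(a == b for a, b in zip(rev1, rev2)) / len(rev1)  # 5/6
kappa = cohens_kappa(rev1, rev2)  # 20/26, about 0.77
```

Raw agreement ignores agreement expected by chance, which is why the study reports kappa for reviewer reliability but percentage agreement when comparing the consensus record against verified data.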

Results

The percentage with unclear blinding status varied between 48.5% (patients) and 84.1% (data analysts). Reliability was moderate for blinding of outcome adjudicators (κ = 0.52) and data analysts (κ = 0.42) and substantial for blinding of patients (κ = 0.71), providers (κ = 0.68), and data collectors (κ = 0.65). The raw agreement between the consensus record and the author-verified record varied from 84.1% (blinding of data analysts) to 100% (blinding of health care providers).

Conclusion

With the possible exception of blinding of data analysts, use of “probably yes” and “probably no” instead of “unclear” may enhance the assessment of blinding in trials.

Introduction

What is new?

Key findings
  1. Specific instructions to estimate unclearly reported blinding status were valid and reliable.
  2. Unclearly reported blinding status remains high in reports of randomized clinical trials.

Implications
  1. Specific instructions can help systematic reviewers estimate unclearly reported blinding status.

Knowing whether methodological safeguards were used in randomized clinical trials (RCTs) is important for interpreting the results. The assessment of methodological quality is, however, often hampered by omissions or lack of clarity in reporting. When reporting is suboptimal, clinicians, systematic reviewers, and guideline developers need to make judgments about whether investigators incorporated methodological safeguards against bias in their studies.

The Cochrane Collaboration [1] has attempted to distinguish the quality of reporting from the quality of the underlying research. The Cochrane risk of bias tool requires quoting what was reported to have happened in the study and then assigning a judgment about the risk of bias for each item. The blinding status in the risk of bias table is classified as “yes” (indicating low risk of bias), “no” (indicating high risk of bias), or “unclear” (indicating unclear or unknown risk of bias).

Although the Cochrane handbook suggests supplementing an ambiguous quote with either a “probably done” or a “probably not done” statement, it does not provide specific guidance on how to make this judgment. The objective of this study was to test the reliability and validity of specific instructions to classify blinding, when unclearly reported in RCTs, as “probably done” or “probably not done.” We conducted this investigation in the context of the LOST to follow-up Information in Trials (LOST-IT) study [2].

Section snippets

LOST-IT study

LOST-IT is a methodological systematic review exploring the potential impact of loss to follow-up on estimates of treatment effect [2]. Studies were eligible if they were RCTs published between 2005 and 2007 in one of the five top general medical journals and reported statistically significant effect estimates for patient-important primary outcomes expressed as binary data. The published protocol for LOST-IT provides detailed information about the methodology [2].

Review process

Reviewers

Results

We included 233 eligible randomized trials. We obtained author verification of our data abstraction for 107 reports (46%). Table 1 presents the following characteristics of the 233 reports included: publication year, journal, clinical area, type of outcome, type of intervention, and study size. We found no statistically significant differences between studies for which the data were and were not verified.

Table 2 presents the percentage of reports for which the blinding status was unclear.

Discussion

Our results demonstrate that structured inferences regarding blinding status in randomized trials, even when not explicitly stated by authors, correspond closely to what authors report when asked explicitly about blinding in their trials. The reliability of blinding assessment using specific and detailed instructions was moderate for data analysts and outcome adjudicators, and substantial for patients, data collectors, and health care providers (Table 3). The blinding assessment proved
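The category labels applied to the kappa values ("moderate", "substantial") correspond to the widely used Landis and Koch (1977) benchmarks. A short sketch mapping the abstract's reported kappas onto those benchmarks:

```python
def interpret_kappa(kappa):
    """Landis & Koch (1977) benchmarks for agreement beyond chance."""
    if kappa < 0:
        return "poor"
    for upper, label in [(0.20, "slight"), (0.40, "fair"),
                         (0.60, "moderate"), (0.80, "substantial")]:
        if kappa <= upper:
            return label
    return "almost perfect"

# Kappa values reported in the abstract, by blinded role.
reported = {"patients": 0.71, "providers": 0.68, "data collectors": 0.65,
            "outcome adjudicators": 0.52, "data analysts": 0.42}
labels = {role: interpret_kappa(k) for role, k in reported.items()}
```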

Acknowledgments

We thank Ann Grifasi, Deborah Maddock, Shelley Anderson, and Monica Owen for their administrative assistance. We thank Aravin Duraik for developing the study electronic forms. Pfizer, Inc. funded this study. The funder had no role in the study design, writing of the manuscript, or decision to submit this or future manuscripts for publication. Matthias Briel is supported by a scholarship for advanced researchers from the Swiss National Science Foundation (PASMA-112951/1) and the Roche Research

References (8)



Authors’ contributions: Study concept and design: E.A.A., X.S., M.B., J.J.Y., F.L., M.A., G.H.G. Data collection: E.A.A., X.S., J.W.B., B.C.J., M.B., S.M., J.J.Y., D.B., F.L., C.V., M.A., C.M.K. Data analysis: D.H.A., Q.Z., E.A.A., G.H.G. Manuscript drafting: E.A.A. Critical revision of the manuscript and final approval: E.A.A., X.S., J.W.B., B.C.J., M.B., S.M., J.J.Y., D.B., F.L., C.V., M.A., C.M.K., E.M., G.H.G.
