Introduction
What is new?
- To improve the interpretability of continuous data, we have previously described a method for pooling randomized trial data in minimally important difference (MID) units.
- Many instruments, however, do not have an established MID, and our method thus far omits these studies.
- Using the standard deviation ratio method described here, we have generated pooled estimates in MID units using all available data.
- The approach minimizes the likelihood of selection bias and provides reassurance that omission of trials without an established MID will not bias the result.
Individual randomized controlled trials (RCTs) often use different measurement instruments for the same construct, such as disease-specific health-related quality of life (HRQL) or depression. When pooling such trials in meta-analyses, authors typically report differences between intervention and control in standard deviation (SD) units, often referred to as the standardized mean difference (SMD). This approach has statistical limitations (the same underlying effect will appear larger or smaller depending on the heterogeneity of the study populations) and is not intuitive for decision makers.
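The contrast between the two metrics can be sketched as follows. This is an illustration only, with invented trial numbers; the SMD divides the mean difference by the pooled SD, whereas reporting in MID units divides it by the instrument's MID.

```python
# Illustrative contrast between SD units (SMD) and MID units.
# All trial numbers below are hypothetical, invented for illustration.

def pooled_sd(sd1, n1, sd2, n2):
    """Pooled standard deviation of two groups."""
    return (((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)) ** 0.5

# Hypothetical HRQL trial: intervention vs. control
mean_diff = 6.0          # mean difference in instrument points
sd_int, n_int = 14.0, 60  # intervention group SD and sample size
sd_ctl, n_ctl = 16.0, 62  # control group SD and sample size
mid = 5.0                 # hypothetical anchor-based MID for this instrument

smd = mean_diff / pooled_sd(sd_int, n_int, sd_ctl, n_ctl)  # effect in SD units
effect_in_mid = mean_diff / mid                            # effect in MID units

print(f"SMD: {smd:.2f} SD units")
print(f"Effect: {effect_in_mid:.2f} MID units")
```

Note that the SMD depends on the sample SDs: a more heterogeneous population inflates the pooled SD and shrinks the SMD for the same absolute effect, whereas the MID-unit estimate does not change.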
For instruments with an established minimally important difference (MID—the smallest difference patients experience as important), we have previously described the merits of reporting RCT results in relation to the MID both in individual studies [1] and in meta-analyses of RCTs using a single HRQL measure [2]. More recently, we have described an approach of reporting in MID units the pooled effects from meta-analysis of RCTs using more than one HRQL measure [3].
Reporting in MID units offers a potential solution to both the statistical and the interpretational problems of reporting effects in SD units, and guidance for interpreting MID units has been published previously [3]. The method, however, depends on a confident estimate of the MID. Anchor-based methods, which examine the relationship between scores on a target instrument and an independent measure of what constitutes a small but patient-important change, can provide the needed confidence; distribution-based methods, which rely only on statistical parameters associated with an instrument, cannot [4], [5], [6].
These limitations of distribution-based approaches pose challenges for estimating effects in MID units in meta-analyses that pool results from different instruments. Those conducting systematic reviews of primary trials using multiple outcome measures will often encounter instruments for which no anchor-based MID has been established.
One option in this situation is to pool only the instruments that have an established anchor-based MID. This, however, limits the power of the analysis and introduces bias if the trials that used instruments with established MIDs have underlying treatment effects that differ from those of trials using instruments for which only distribution-based MIDs are available. Addressing this problem of power and possible selection bias requires a method of including instruments without established anchor-based MIDs. One solution is to select a distribution-based approach that bears a sufficiently confident relation to the MID and to apply it in studies for which an anchor-based MID is unavailable. We explored the possibility of using such an approach.
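The general idea of including trials without an anchor-based MID can be sketched as below. This is not the paper's standard deviation ratio method, which is described later; it is a generic illustration in which a distribution-based stand-in (here, half the control-group SD, one common rule of thumb) is substituted wherever an anchor-based MID is missing. All numbers are invented.

```python
# Sketch: pooling trial effects in MID units when some instruments lack an
# anchor-based MID. Where no anchor-based MID exists, a distribution-based
# stand-in (half the control-group SD, a common rule of thumb) is substituted.
# This is a generic illustration, not the standard deviation ratio method.

trials = [
    # (mean difference, control-group SD, anchor-based MID or None) -- invented
    (6.0, 16.0, 5.0),
    (3.0, 10.0, None),   # instrument without an established anchor-based MID
    (8.0, 20.0, 7.0),
]

effects_in_mid_units = []
for mean_diff, sd_control, mid in trials:
    if mid is None:
        mid = 0.5 * sd_control  # distribution-based stand-in for the MID
    effects_in_mid_units.append(mean_diff / mid)

# Simple unweighted mean for illustration; a real meta-analysis would use
# inverse-variance weighting of the per-trial estimates.
pooled = sum(effects_in_mid_units) / len(effects_in_mid_units)
print(f"Pooled effect: {pooled:.2f} MID units")
```

Including all three trials, rather than only the two with anchor-based MIDs, preserves power and avoids the selection bias that would arise if the omitted trials had systematically different treatment effects.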