What is new?

Key finding
• Comparison of empirical data demonstrates that the recently described ratio of means (RoM) method for pooling continuous outcomes in meta-analysis produces similar treatment effects and no large differences in heterogeneity when compared with the traditionally used mean difference (MD) and standardized mean difference (SMD) methods.

What this adds to what is known?
• RoM does not suffer from some of the clinical limitations of the difference methods: unlike MD, it can handle outcomes expressed in different units, and unlike SMD, its interpretation does not require knowledge of the pooled standard deviation, a quantity generally unknown to clinicians.

What is the implication and what should change now?
• Just as ratio methods are commonly used in binary outcome meta-analysis, this study now provides a ratio method option for meta-analysis of continuous outcomes.
Meta-analysis is a method of statistically combining results of similar studies, often randomized controlled trials [1]. For meta-analysis of continuous outcomes, the most commonly used measure of treatment effect is the difference in means [2]. If the outcome of interest is measured in identical units across trials, then the effect measure of choice for each trial is the difference in means and the pooled effect measure is the weighted average of mean differences (MDs). If the outcome of interest is measured in different units, then each trial’s effect measure is the difference in mean values divided by the pooled standard deviation (SD) of the two groups and the pooled effect measure is the weighted average of standardized mean differences (SMDs). In contrast, for binary outcomes, both difference (risk difference) and ratio (odds ratio and risk ratio) methods are commonly used.
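The per-trial difference in means and its inverse-variance pooling described above can be sketched as follows. This is a minimal illustration with hypothetical trial summaries, not the analysis used in this study; a fixed-effect model is assumed for simplicity.

```python
import math

# Hypothetical per-trial summaries:
# (mean_exp, sd_exp, n_exp, mean_ctl, sd_ctl, n_ctl)
trials = [
    (12.0, 4.0, 50, 15.0, 4.5, 50),
    (10.5, 3.8, 80, 14.0, 4.2, 75),
]

def mean_difference(me, se_, ne, mc, sc, nc):
    """Per-trial mean difference and its standard error."""
    md = me - mc
    se_md = math.sqrt(se_**2 / ne + sc**2 / nc)
    return md, se_md

def pool_fixed(effects):
    """Fixed-effect inverse-variance pooling of (effect, SE) pairs:
    each trial is weighted by 1/SE^2."""
    weights = [1.0 / se**2 for _, se in effects]
    pooled = sum(w * e for (e, _), w in zip(effects, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

effects = [mean_difference(*t) for t in trials]
pooled_md, pooled_se = pool_fixed(effects)
```

The pooled MD lands between the individual trial MDs, closer to the more precise (more heavily weighted) trial.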
We recently proposed and used a new ratio method to meta-analyze continuous outcomes, in which we calculated a ratio of means (RoM) (defined as the mean value in the experimental group divided by the mean value in the control group) instead of a difference for each study [3], [4], [5], [6]. Others have used this method [7], [8], [9] and incorporated it in freely available meta-analysis software [10]. As an illustration, Table 1 shows pooled continuous data using MD, SMD, and RoM from two meta-analyses [4], [11]. The three methods give similar results. The point estimates are similar in direction (i.e., a positive MD or SMD corresponds to RoM greater than 1, whereas a negative MD or SMD corresponds to RoM less than 1) and yield similar treatment effect P-values. In addition, statistical heterogeneity, measured as I2, the percentage of total variation in results across studies due to heterogeneity rather than chance [12], [13], is also similar. Equations for calculating RoM and a worked example are provided in the Appendix.
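The RoM analysis is conventionally carried out on the log scale, with an approximate (delta-method) standard error for each trial's log ratio, inverse-variance pooling, and back-transformation by exponentiation. The sketch below follows that general approach with hypothetical trial summaries; the exact equations and a worked example are in the Appendix, and a fixed-effect model is assumed here for brevity.

```python
import math

def log_rom(me, se_, ne, mc, sc, nc):
    """Per-trial log ratio of means and its approximate delta-method SE.
    Group means must share the same sign (assumed positive here)."""
    lr = math.log(me / mc)
    se_lr = math.sqrt(se_**2 / (ne * me**2) + sc**2 / (nc * mc**2))
    return lr, se_lr

def pooled_rom(trials, z=1.96):
    """Inverse-variance pooling on the log scale, then back-transform
    the estimate and its 95% confidence limits."""
    effects = [log_rom(*t) for t in trials]
    weights = [1.0 / se**2 for _, se in effects]
    pooled_log = sum(w * e for (e, _), w in zip(effects, weights)) / sum(weights)
    half_width = z * math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - half_width),
            math.exp(pooled_log + half_width))

# Hypothetical summaries: (mean_exp, sd_exp, n_exp, mean_ctl, sd_ctl, n_ctl)
trials = [(12.0, 4.0, 50, 15.0, 4.5, 50),
          (10.5, 3.8, 80, 14.0, 4.2, 75)]
rom, ci_lo, ci_hi = pooled_rom(trials)
```

Because both hypothetical trials have lower means in the experimental group, the pooled RoM is below 1, mirroring the negative MD/SMD direction noted above.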
Advantages of RoM include the ability to pool studies with outcomes expressed in different units (vs. MD) and ease of clinical interpretation (vs. SMD) because it does not require knowledge of the pooled SD, a quantity generally unknown to clinicians. For example, the second meta-analysis in Table 1 describes the effect of acetaminophen on osteoarthritis pain [11]. MD cannot be calculated because the studies used different pain scores. Although SMD is easily calculated and shows that acetaminophen significantly decreases overall pain by 0.25 pooled SD units, it is difficult to communicate the importance of this effect to individual patients. In contrast, pooling data with RoM generates a more easily interpretable 15% decrease in overall pain. A disadvantage of RoM is that it requires the means of continuous variables in all trials included in a meta-analysis to have the same sign. Although essentially all biological continuous outcomes have positive values, meta-analyses may be used to pool changes in continuous outcomes over time or investigator-generated scales, which may be positive or negative.
We have previously demonstrated comparable statistical performance (bias, coverage, power, and heterogeneity) of RoM compared with SMD and MD using simulation methods [14]. However, in addition to statistical properties, the choice between a difference and ratio method for a specific situation should also be determined by the biological effect of the treatment as either additive or relative for different control group values. Whether absolute or relative changes are more preserved across studies can be determined through empirical comparisons. For binary outcomes, empirical comparisons between difference methods (risk difference) and ratio methods (risk ratio and odds ratio) using published meta-analyses have shown higher heterogeneity of risk difference [15], [16], suggesting that relative differences are more preserved than absolute differences as baseline risk varies. The objective of this study was to conduct a similar empirical comparison of treatment effects and heterogeneity of the ratio method, RoM, to the difference methods (MD and SMD) in a broad range of published meta-analyses of continuous outcomes.
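Heterogeneity comparisons of the kind described above rest on I2, which can be computed from per-study effects and standard errors on whichever scale is in use (difference, or log scale for RoM). A minimal sketch of the standard Cochran's Q and I2 formulas:

```python
def i_squared(effects):
    """Cochran's Q and I2 from (effect, SE) pairs.
    I2 = max(0, (Q - df) / Q) * 100, with df = number of studies - 1."""
    weights = [1.0 / se**2 for _, se in effects]
    pooled = sum(w * e for (e, _), w in zip(effects, weights)) / sum(weights)
    q = sum(w * (e - pooled)**2 for (e, _), w in zip(effects, weights))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Identical effects: no variation beyond chance, so I2 = 0.
_, i2_hom = i_squared([(1.0, 0.5), (1.0, 0.5)])

# Widely separated, precisely estimated effects: I2 approaches 100%.
_, i2_het = i_squared([(0.0, 0.1), (2.0, 0.1)])
```

The two toy inputs illustrate the extremes: homogeneous studies give I2 of 0%, while precisely estimated but conflicting studies drive I2 toward 100%.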