Estimating the mean effect size in meta-analysis: bias, precision, and mean squared error of different weighting methods

Behav Res Methods Instrum Comput. 2003 Nov;35(4):504-11. doi: 10.3758/bf03195529.

Abstract

Although use of the standardized mean difference in meta-analysis is appealing for several reasons, there are some drawbacks. In this article, we focus on the following problem: a precision-weighted mean of the observed effect sizes results in a biased estimate of the mean standardized mean difference. This bias arises because the weight given to an observed effect size depends on that observed effect size. To eliminate the bias, Hedges and Olkin (1985) proposed using the mean effect size estimate to calculate the weights. We propose a third alternative for calculating the weights: using empirical Bayes estimates of the effect sizes. These three approaches are compared in a simulation study, with the mean squared error (MSE) as the criterion for evaluating the resulting estimates of the mean effect size. For a meta-analytic dataset with a small number of studies, the MSE is usually smallest when the ordinary procedure is used, whereas for a moderate or large number of studies, the procedures yielding the best results are the empirical Bayes procedure and the procedure of Hedges and Olkin, respectively.

MeSH terms

  • Behavioral Research / methods*
  • Data Interpretation, Statistical*
  • Humans
  • Meta-Analysis as Topic*
  • Models, Statistical*