As a student of psychology, I was taught that meta-analysis was superior to all other forms of research. However, this view has been brought into question by a series of papers, such as: Hennekens, C.H., & DeMets, D. (2009). The need for large-scale randomized evidence without undue emphasis on small trials, meta-analyses, or subgroup analyses. Journal of the American Medical Association, 302(21), 2361-2362.

Epidemiologist Charles Hennekens and biostatistician David DeMets have pointed out that combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question: “Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding variables.”
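To make that point concrete, here is a toy simulation, not from the paper and using entirely invented numbers, of how pooling many small studies shrinks the role of chance while leaving a shared bias fully intact:

```python
# Toy simulation of the Hennekens and DeMets point: pooling many small
# studies reduces the standard error (the "role of chance") but cannot
# remove a bias common to all of them. All numbers are hypothetical.

import random
import statistics

random.seed(1)
true_effect, shared_bias = 0.0, 0.15  # no real effect, but every study is biased
small_n, n_studies = 30, 40

# Each small study estimates the effect with sampling noise plus the bias.
estimates = [true_effect + shared_bias + random.gauss(0, 1 / small_n ** 0.5)
             for _ in range(n_studies)]

pooled = statistics.mean(estimates)
se = statistics.stdev(estimates) / n_studies ** 0.5

print(f"Pooled estimate: {pooled:.3f} (SE {se:.3f})")
# The pooled SE is tiny, so the estimate looks precise and "significant",
# yet it is still wrong by roughly the size of the shared bias.
```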

Pace, V.L., & Brannick, M.T. (2010). How similar are personality scales of the “same” construct? A meta-analytic investigation. Personality and Individual Differences, 49, 669-676.

Abstract

An underlying assumption of meta-analysis is that effect sizes are based on commensurate measures. If measures across studies do not have the same empirical meaning, then our theoretical understanding of relations among variables will be clouded. Two indicators of scale commensurability were examined for personality measures: (1) correlations among different scales with similar labels (e.g., different measures of extraversion) and (2) score reliability for different scales with similar labels. First, meta-analyses of correlations between many commonly used scales were computed, both including and excluding scales classified as non-Five-Factor Model measures. Second, subgroup meta-analyses of reliability were examined, with specific personality scales as moderators. Results reveal that assumptions of commensurability among personality measures may not be entirely met. Whereas meta-analyzed reliability coefficients did not differ greatly, scales of the “same” construct were only moderately correlated in many cases. Some improvement to this meta-analytic correlation occurred when measures were limited to those based on the Five-Factor Model. Questions remain about the similarity of personality construct conceptualization and operationalization.
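For readers unfamiliar with the mechanics, the sketch below shows a bare-bones, sample-size-weighted meta-analytic correlation with a Hunter-Schmidt-style correction for scale unreliability. It is illustrative only; the study values are invented, not taken from Pace and Brannick:

```python
# Minimal sketch of a sample-size-weighted meta-analytic correlation
# with correction for attenuation. Hypothetical data throughout.

import math

# (n, r) pairs: invented correlations between two scales labeled as
# measuring the same construct, from different studies.
studies = [(120, 0.55), (340, 0.48), (85, 0.62), (210, 0.51)]

n_total = sum(n for n, _ in studies)
r_bar = sum(n * r for n, r in studies) / n_total  # weighted mean r

# With hypothetical reliabilities rxx and ryy for the two scales, the
# correlation corrected for attenuation is r / sqrt(rxx * ryy).
rxx, ryy = 0.85, 0.80
r_corrected = r_bar / math.sqrt(rxx * ryy)

print(f"Weighted mean r: {r_bar:.3f}")
print(f"Corrected for attenuation: {r_corrected:.3f}")
```

Even after correction, a moderate value here would echo the authors' finding: scales of the “same” construct are not interchangeable.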

Levine, T., Asada, K.J., & Carpenter, C. (2009). Sample sizes and effect sizes are negatively correlated in meta-analyses: Evidence and implications of a publication bias against non-significant findings. Communication Monographs, 76(3), 286-302.

Abstract

Meta-analysis involves cumulating effects across studies in order to quantitatively summarize existing literatures. A recent finding suggests that the effect sizes reported in meta-analyses may be negatively correlated with study sample sizes. This prediction was tested with a sample of 51 published meta-analyses summarizing the results of 3,602 individual studies. The correlation between effect size and sample size was negative in almost 80 percent of the meta-analyses examined, and the negative correlation was not limited to a particular type of research or substantive area. This result most likely stems from a bias against publishing findings that are not statistically significant. The primary implication is that meta-analyses may systematically overestimate population effect sizes. It is recommended that researchers routinely examine the n × r scatter plot and correlation, or some other indication of publication bias, and report this information in meta-analyses.
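The diagnostic the authors recommend is straightforward to run. Here is a minimal sketch, again with invented study values, that computes the correlation between sample size and effect size across a set of studies:

```python
# Minimal sketch of the n-by-r diagnostic Levine et al. recommend:
# correlate each study's sample size with its effect size. A clearly
# negative correlation is consistent with publication bias (small
# studies reach print only when their effects are large).
# The study values below are hypothetical.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical (sample size, effect size r) pairs from one literature.
sizes = [40, 65, 90, 150, 300, 500]
effects = [0.48, 0.41, 0.35, 0.27, 0.22, 0.18]

print(f"n-by-r correlation: {pearson(sizes, effects):.3f}")  # negative here
```

Plotting the same pairs as a scatter plot makes the pattern visible at a glance, which is why the authors suggest reporting it alongside the correlation.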

While not entirely refuting the value of meta-analysis, these papers once again call into question commonly held views within our discipline. Moreover, they demonstrate that sophisticated data-combining methodologies are no substitute for a quality large-scale study, and assuming otherwise may lead one to erroneous conclusions.