Conference Object

How to detect publication bias in psychological research? A comparative evaluation of six statistical methods

Author(s) / Creator(s)

Renkewitz, Frank
Keiner, Melanie

Abstract / Description

Publication biases and questionable research practices are assumed to be two of the main causes of the low replication rates observed in the social sciences. Both of these problems not only increase the proportion of false positives in the literature but can also lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect and correct such bias in meta-analytic results. We present an evaluation of the performance of six of these tools in detecting bias. To assess the Type I error rate and the statistical power of these tools, we simulated a large variety of literatures that differed with regard to underlying true effect size, heterogeneity, number of available primary studies, and variation of sample sizes in these primary studies. Furthermore, simulated primary studies were subjected to different degrees of publication bias. Our results show that the power of the detection methods follows a complex pattern. Across all simulated conditions, no method consistently outperformed all others. Hence, choosing an optimal method would require knowledge about parameters (e.g., true effect size, heterogeneity) that meta-analysts cannot have. Additionally, all methods performed poorly when true effect sizes were heterogeneous or primary studies had a small chance of being published irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used.
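
The abstract does not name the six methods or the exact data-generating model, so the following is only a minimal illustrative sketch of the kind of design it describes: primary studies are simulated as two-group standardized mean differences under heterogeneity, non-significant results are censored with some probability to induce publication bias, and a detection method is then applied. Egger's regression test serves here as a stand-in for one of the six methods; all parameter values, the simplified standard-error formula, and the function names are assumptions, not the authors' code.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_published_studies(k=30, delta=0.3, tau=0.2,
                               n_range=(20, 100), p_publish_ns=0.2):
    """Draw primary studies until k are 'published': significant
    results are always published, non-significant ones only with
    probability p_publish_ns (the censoring step)."""
    d, se = [], []
    while len(d) < k:
        n = rng.integers(*n_range)              # per-group sample size
        theta = rng.normal(delta, tau)          # heterogeneous true effect
        se_i = np.sqrt(2 / n)                   # simplified SE of an SMD
        d_i = rng.normal(theta, se_i)           # observed effect size
        p = 2 * stats.norm.sf(abs(d_i / se_i))  # two-sided p-value
        if p < .05 or rng.random() < p_publish_ns:
            d.append(d_i)
            se.append(se_i)
    return np.array(d), np.array(se)

def egger_test(d, se):
    """Egger's regression: regress standardized effects on precision;
    an intercept far from zero signals small-study asymmetry."""
    res = stats.linregress(1 / se, d / se)
    return res.intercept, res.intercept_stderr

d, se = simulate_published_studies()
b0, b0_se = egger_test(d, se)
print(f"Egger intercept = {b0:.2f} (SE {b0_se:.2f}), t = {b0 / b0_se:.2f}")

Repeating such a run over a grid of delta, tau, k, and p_publish_ns values, and counting how often the test rejects, is one way to estimate the Type I error rate (when no bias is induced) and the power (when it is) that the abstract refers to.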

Persistent Identifier

https://hdl.handle.net/20.500.12034/2032
https://doi.org/10.23668/psycharchives.2400

Date of first publication

2019-03-14

Is part of

Open Science 2019, Trier, Germany

Publisher

ZPID (Leibniz Institute for Psychology Information)

Citation

Renkewitz, F., & Keiner, M. (2019, March 14). How to detect publication bias in psychological research? A comparative evaluation of six statistical methods. ZPID (Leibniz Institute for Psychology Information). https://doi.org/10.23668/psycharchives.2400
  • PsychArchives acquisition timestamp
    2019-04-03T13:03:39Z
  • Made available on
    2019-04-03T13:03:39Z
  • Language of content
    eng
  • Dewey Decimal Classification number(s)
    150
  • DRO type
    conferenceObject
  • Visible tag(s)
    ZPID Conferences and Workshops