How to detect publication bias in psychological research? A comparative evaluation of six statistical methods
Author(s) / Creator(s)
Renkewitz, Frank
Keiner, Melanie
Abstract / Description
Publication biases and questionable research practices are assumed to be two of the main causes of the low replication rates observed in the social sciences. Both problems not only increase the proportion of false positives in the literature but can also lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect and correct such bias in meta-analytic results. We present an evaluation of the performance of six of these tools in detecting bias. To assess the Type I error rate and the statistical power of these tools, we simulated a large variety of literatures that differed with regard to underlying true effect size, heterogeneity, number of available primary studies, and variation of sample sizes in these primary studies. Furthermore, the simulated primary studies were subjected to different degrees of publication bias. Our results show that the power of the detection methods follows a complex pattern. Across all simulated conditions, no method consistently outperformed all others. Hence, choosing an optimal method would require knowledge about parameters (e.g., true effect size, heterogeneity) that meta-analysts cannot have. Additionally, all methods performed badly when true effect sizes were heterogeneous or primary studies had only a small chance of being published irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used.
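The simulation logic described in the abstract can be sketched in miniature. The talk's six specific detection methods, effect-size distributions, and selection parameters are not given here, so the sketch below uses Egger's regression intercept as one widely known detection method and assumes illustrative values for the per-group sample-size range, the significance-based selection rule, and the function and parameter names (`simulate_literature`, `p_publish_ns`):

```python
import math
import random

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def simulate_literature(true_d, k, p_publish_ns=1.0, rng=None):
    """Draw k *published* two-group studies.

    Each study observes Cohen's d with SE approximated by sqrt(2/n).
    Significant results (two-sided p < .05) are always published;
    non-significant ones only with probability `p_publish_ns`.
    The sample-size range and selection rule are illustrative assumptions.
    """
    rng = rng or random.Random()
    studies = []
    while len(studies) < k:
        n = rng.randint(20, 100)            # per-group n (assumed range)
        se = math.sqrt(2.0 / n)
        d = rng.gauss(true_d, se)
        p = 2.0 * (1.0 - phi(abs(d / se)))  # two-sided p from z = d/se
        if p < 0.05 or rng.random() < p_publish_ns:
            studies.append((d, se))
    return studies

def egger_intercept(studies):
    """Egger's regression test statistic (intercept only).

    Regress z_i = d_i/se_i on precision_i = 1/se_i; an intercept far
    from zero indicates funnel-plot asymmetry, one signature of bias.
    """
    xs = [1.0 / se for _, se in studies]
    ys = [d / se for d, se in studies]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx
```

With a positive `true_d` and `p_publish_ns = 0`, small studies enter the literature only when they overestimate the effect, which tends to push the intercept away from zero; with `p_publish_ns = 1` there is no selection and the intercept scatters around zero. A full evaluation of the kind reported in the talk would repeat such draws many times per condition and record each method's rejection rate.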
Persistent Identifier
Date of first publication
2019-03-14
Is part of
Open Science 2019, Trier, Germany
Publisher
ZPID (Leibniz Institute for Psychology Information)
Citation
Renkewitz, F., & Keiner, M. (2019, March 14). How to detect publication bias in psychological research? A comparative evaluation of six statistical methods. ZPID (Leibniz Institute for Psychology Information). https://doi.org/10.23668/psycharchives.2400
File
n_2_renkewitz_OpenScience2019_Trier-2.pdf (Adobe PDF, 524.62 KB; MD5: 2032009c803b07dc1494522647a7f4f6; Description: Conference Talk)
PsychArchives acquisition timestamp
2019-04-03T13:03:39Z
Made available on
2019-04-03T13:03:39Z
Date of first publication2019-03-14
-
Abstract / DescriptionPublication biases and questionable research practices are assumed to be two of the main causes of low replication rates observed in the social sciences. Both of these problems do not only increase the proportion of false positives in the literature but can also lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect and correct such bias in meta-analytic results. We present an evaluation of the performance of six of these tools in detecting bias. To assess the Type I error rate and the statistical power of these tools we simulated a large variety of literatures that differed with regard to underlying true effect size, heterogeneity, number of available primary studies and variation of sample sizes in these primary studies. Furthermore, simulated primary studies were subjected to different degrees of publication bias. Our results show that the power of the detection methods follows a complex pattern. Across all simulated conditions, no method consistently outperformed all others. Hence, choosing an optimal method would require knowledge about parameters (e.g., true effect size, heterogeneity) that meta-analysts cannot have. Additionally, all methods performed badly when true effect sizes were heterogeneous or primary studies had a small chance of being published irrespective of their results. This suggests, that in many actual meta-analyses in psychology bias will remain undiscovered no matter which detection method is used.en_US
-
CitationRenkewitz, F., & Keiner, M. (2019, March 14). How to detect publication bias in psychological research? A comparative evaluation of six statistical methods. ZPID (Leibniz Institute for Psychology Information). https://doi.org/10.23668/psycharchives.2400en
Persistent Identifier
https://hdl.handle.net/20.500.12034/2032
https://doi.org/10.23668/psycharchives.2400
Language of content
eng
Dewey Decimal Classification number(s)
150
DRO type
conferenceObject
Visible tag(s)
ZPID Conferences and Workshops