Please use this identifier to cite or link to this item: http://dx.doi.org/10.23668/psycharchives.2400
Full metadata record
DC Field | Value | Language
dc.rights.license | CC-BY-SA 4.0 | en_US
dc.contributor.author | Renkewitz, Frank | -
dc.contributor.author | Keiner, Melanie | -
dc.date.accessioned | 2019-04-03T13:03:39Z | -
dc.date.available | 2019-04-03T13:03:39Z | -
dc.date.issued | 2019-03-14 | -
dc.identifier.citation | Renkewitz, F., & Keiner, M. (2019, March 14). How to detect publication bias in psychological research? A comparative evaluation of six statistical methods. ZPID (Leibniz Institute for Psychology Information). https://doi.org/10.23668/psycharchives.2400 | en
dc.identifier.uri | https://hdl.handle.net/20.500.12034/2032 | -
dc.identifier.uri | http://dx.doi.org/10.23668/psycharchives.2400 | -
dc.description.abstract | Publication biases and questionable research practices are assumed to be two of the main causes of the low replication rates observed in the social sciences. Both of these problems not only increase the proportion of false positives in the literature but can also lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect and correct such bias in meta-analytic results. We present an evaluation of the performance of six of these tools in detecting bias. To assess the Type I error rate and the statistical power of these tools, we simulated a large variety of literatures that differed with regard to underlying true effect size, heterogeneity, number of available primary studies, and variation of sample sizes in these primary studies. Furthermore, simulated primary studies were subjected to different degrees of publication bias. Our results show that the power of the detection methods follows a complex pattern. Across all simulated conditions, no method consistently outperformed all others. Hence, choosing an optimal method would require knowledge about parameters (e.g., true effect size, heterogeneity) that meta-analysts cannot have. Additionally, all methods performed badly when true effect sizes were heterogeneous or primary studies had a small chance of being published irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used. | en_US
dc.language.iso | eng | en_US
dc.publisher | ZPID (Leibniz Institute for Psychology Information) | en_US
dc.relation.ispartof | Open Science 2019, Trier, Germany | en_US
dc.rights | openAccess | en_US
dc.rights.uri | https://creativecommons.org/licenses/by-sa/4.0/ | en_US
dc.subject.ddc | 150 | -
dc.title | How to detect publication bias in psychological research? A comparative evaluation of six statistical methods | en_US
dc.type | conferenceObject | en_US
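Note: The abstract above describes simulating biased literatures and applying bias-detection methods, but this record names neither the six methods nor the authors' simulation code. As a minimal illustrative sketch only, the Python snippet below simulates primary studies under publication bias and applies Egger's regression test, one standard funnel-plot asymmetry test (not necessarily among the six methods evaluated). All function names and parameter values (simulate_biased_literature, bias=0.9, etc.) are hypothetical assumptions, not the authors' materials.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_biased_literature(k=30, true_d=0.2, bias=0.9):
    """Draw k two-group studies (Cohen's d); a non-significant
    result is published only with probability 1 - bias.
    (Illustrative assumption, not the authors' design.)"""
    effects, ses = [], []
    while len(effects) < k:
        n = rng.integers(20, 200)                   # per-group sample size
        se = np.sqrt(2 / n + true_d**2 / (4 * n))   # approx. SE of Cohen's d
        d = rng.normal(true_d, se)                  # observed effect size
        if abs(d / se) > 1.96 or rng.random() > bias:
            effects.append(d)
            ses.append(se)
    return np.array(effects), np.array(ses)

def egger_test(effects, ses):
    """Egger's regression test: regress the standardized effect (d/SE)
    on precision (1/SE); an intercept far from zero signals
    funnel-plot asymmetry, a common symptom of publication bias."""
    res = stats.linregress(1 / ses, effects / ses)
    t = res.intercept / res.intercept_stderr
    p = 2 * stats.t.sf(abs(t), df=len(effects) - 2)
    return res.intercept, p

effects, ses = simulate_biased_literature()
intercept, p = egger_test(effects, ses)
print(f"Egger intercept = {intercept:.2f}, p = {p:.3f}")

Under these assumed settings, suppressing most non-significant studies typically yields a positive Egger intercept; lowering bias toward 0 should bring the test's rejection rate down toward its nominal Type I error rate, mirroring the power/Type-I-error logic described in the abstract.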
Appears in Collections: Conference Object

Files in This Item:
File | Description | Size | Format
n_2_renkewitz_OpenScience2019_Trier-2.pdf | Conference Talk | 512,33 kB | Adobe PDF


This item is licensed under a Creative Commons License (CC-BY-SA 4.0).