Improving the Science in Replication Sciences
Author(s) / Creator(s)
Steiner, Peter
Abstract / Description
Given the central role of replication in the accumulation of scientific knowledge, researchers have reevaluated the robustness of seemingly well-established findings across multiple disciplines. Results from these efforts have not been promising. The Open Science Collaboration (OSC) replicated 100 experimental and correlational studies published in high-impact psychology journals and found that only 36% of these efforts produced results with the same statistical significance pattern as the original study. In general, most researchers consider 36% a disappointingly low replication rate. But is it really reasonable to expect higher replication rates? And under which conditions can one actually expect a successful replication? In order to critically assess these questions, we present a novel framework for replication studies that outlines all the assumptions that are required to reproduce (within the limits of sampling error) the causal effect reported by an original study. In contrast to most other replication frameworks, our framework does not rely on procedural replication aspects (i.e., using the same methods, instruments, etc.) but on the assumptions that need to be met by both the original and replication study to successfully reproduce a causal effect. The long list of strong assumptions makes it very clear that researchers should not be too optimistic about successful replications of effects. In addition, we will show that our replication framework can also be used to design studies that go beyond the direct replication of an effect but instead aim at investigating effect heterogeneities, generalizing causal effects, or assessing the performance of non-experimental methods.
Keyword(s)
Replication; Reproducibility; Causal Inference
Date of first publication
2018
Is part of
Die Rolle der psychologischen Methodenlehre in der Open Science Debatte 2018, Berlin, Germany
Publisher
ZPID (Leibniz Institute for Psychology Information)
Citation
-
PresOpenScienceSteiner.pdf (Adobe PDF, 1.01 MB; MD5: 7e7cabdeb22bf69022eb12d5e977e87e); Description: Conference Talk
There are no other versions of this object.
PsychArchives acquisition timestamp
2018-06-28T13:55:09Z
Made available on
2018-06-28T13:55:09Z
Publication status
unknown
Review status
notReviewed
Persistent Identifier
https://hdl.handle.net/20.500.12034/665
https://doi.org/10.23668/psycharchives.867
Language of content
eng
Dewey Decimal Classification number(s)
150
DRO type
conferenceObject