Please use this identifier to cite or link to this item: http://dx.doi.org/10.23668/psycharchives.2391
Title: Screen recordings as a documentation tool
Authors: Heycke, Tobias
Spitzer, Lisa
Issue Date: 12-Mar-2019
Publisher: ZPID (Leibniz Institute for Psychology Information)
Abstract: In recent years, many scientific fields have rediscovered the need for replication of scientific studies. Famously, in a large replication project in psychology, only approximately 40% of the 100 selected studies were replicated by independent researchers (Open Science Collaboration, 2015). Additional (large-scale) replication attempts in psychology have confirmed the initial finding that many published results could not be replicated independently (e.g., Hagger et al., 2016; R. Klein et al., 2014). Generally, there are two main reasons why a published finding might not replicate: First, the initial finding was merely a coincidence and the reported effect did not describe a natural phenomenon, for instance a false positive statistical result (the likelihood of which is increased by p-hacking or other forms of data massaging; Simmons, Nelson, & Simonsohn, 2011). Second, the finding might reflect a genuine phenomenon, but the effect was not replicated because essential details of the original experimental procedure were altered (assuming the replication attempt had sufficient statistical power). When replicating a study as closely as possible, researchers “should strive to avoid making any kind of change or alteration” (Earp & Trafimow, 2015, p. 5). Such direct replications can be highly informative, especially when the original results cannot be reproduced in independent replication attempts: if a replication uses a different method and fails to find the original effect, the failure can easily be attributed to the method rather than to the effect itself (Doyen, Klein, Simons, & Cleeremans, 2014). Therefore, when a replication attempt does not succeed in finding the original pattern of results, researchers might speculate whether the difference could be due to subtle differences in the experimental procedure or to critical changes introduced by the replicators (Freese & Peterson, 2017).
Practically, in (psychological) science, post-hoc arguments can always be given for why a study did not replicate an original finding. One might therefore be tempted to dismiss any post-hoc explanation of a failed replication attempt. However, what if effects indeed depend on small changes to the experimental procedure that are, so far, not understood by the scientific community? It would be highly problematic if small changes led to different experimental outcomes while researchers were unaware of which details matter and which do not (see, for example, Alogna et al., 2014). If this is indeed the case, such details might not be reported in the written manuscript, and we can only speculate whether a non-replication depends on one of them. It is, however, simply not feasible to repeat an experiment with all combinations of potentially important methodological details. One recommendation that appears to address these problems is to provide the research materials to reviewers and to post them publicly after publication (Asendorpf et al., 2013; Lindsay, 2017). The Transparency and Openness Promotion (TOP) guidelines, for example, propose as the highest level of transparency of research materials that “Materials must be posted to a trusted repository, and reported analyses will be reproduced independently before publication” (Nosek et al., 2015, p. 1424). In theory, uploading all materials to a public repository would solve most of the problems discussed above. In our opinion, however, merely uploading experimental procedure scripts and materials raises a number of potential problems: First, one needs the software the script was written for in order to run it. Unfortunately, many software solutions currently in use are not freely available, so the scripts cannot be run by every independent researcher.
Second, even if one owns the software, or freeware was used, the software version might have changed, so the procedure might look different or the script might not run at all. Third, even if the software is up to date and accessible, the researcher might not be acquainted with it, and setting up the script may be time consuming even when detailed instructions are provided. Moreover, even when running a replication with the original experimental script and materials, it would be beneficial to know what the final procedure should look like. We therefore argue that there is still a need for better documentation of research methods. We propose that the experimental procedure be recorded by means of screen capture and that the video be made available to others (e.g., by uploading it to a public repository). This way, the procedure is easy to access for reviewers, peers interested in the procedure, and researchers interested in replicating the work. Importantly, screen recordings are not affected by software changes that produce a different look with the same experimental script. Additionally, researchers do not need to acquire and set up software in order to view a documentation of the procedure that is likely more detailed than the description in the (published) manuscript. Referees in the peer-review process in particular would benefit greatly from such documentation when informing themselves about the experimental procedure. We have therefore created a tutorial on the open-source screen recording software OBS (osf.io/3twe9). We would like to engage researchers in a discussion of possibilities for better documenting their experimental procedures and see this tutorial as a starting point.
URI: https://hdl.handle.net/20.500.12034/2023
http://dx.doi.org/10.23668/psycharchives.2391
Citation: Heycke, T., & Spitzer, L. (2019, March 12). Screen recordings as a documentation tool. ZPID (Leibniz Institute for Psychology Information). https://doi.org/10.23668/psycharchives.2391
Appears in Collections: Conference Object

Files in This Item:
File: d_4_slides_trier_heycke_spitzer.pdf
Description: Conference Talk
Size: 741,93 kB
Format: Adobe PDF


This item is licensed under a Creative Commons License.