Code

Addressing publication bias in meta-analysis: Empirical findings from community-augmented meta-analyses of infant language development

Code for: Addressing publication bias in meta-analysis: Empirical findings from community-augmented meta-analyses of infant language development

Author(s) / Creator(s)

Tsuji, Sho
Cristia, Alejandrina
Frank, Michael C.
Bergmann, Christina

Abstract / Description

Code for: Tsuji, S., Cristia, A., Frank, M. C., & Bergmann, C. (2020). Addressing publication bias in meta-analysis: Empirical findings from community-augmented meta-analyses of infant language development. Zeitschrift für Psychologie, 228(1), 50–61. https://doi.org/10.1027/2151-2604/a000393
Meta-analyses have long been an indispensable research synthesis tool for characterizing bodies of literature and advancing theories. However, they face the same challenges as the primary literature in the context of the replication crisis: a meta-analysis is only as good as the data it contains, and which data end up in the final sample can be influenced at various stages of the process. Early on, the selection of topic and search strategies might be biased by the meta-analyst's subjective decisions. Further, publication bias towards significant outcomes in primary studies might skew the search outcome, since grey, unpublished literature might not show up. Additional challenges might arise during data extraction from the articles in the final search sample, for example because some articles do not contain sufficient detail for computing effect sizes and correctly characterizing moderator variables, or because of specific decisions the meta-analyst makes when extracting data from multi-experiment papers.

Community-augmented meta-analyses (CAMAs; Tsuji, Bergmann, & Cristia, 2014) have received increasing interest as a tool for countering these problems. CAMAs are open-access, online meta-analyses. In the original proposal, they allow the research community to use and add data points, enabling the community to collectively shape the scope of a meta-analysis and encouraging the submission of unpublished or inaccessible data points. As such, CAMAs can counter biases introduced by data (in)availability and by the researcher. In addition, their dynamic nature keeps a meta-analysis current, whereas it would otherwise be crystallized at the time of publication and quickly become outdated. We have been implementing CAMAs over the past four years in MetaLab (metalab.stanford.edu), a database gathering meta-analyses in developmental psychology with a focus on infancy, where meta-analyses are updated through centralized, active curation.

Here we describe our successes and failures in gathering missing data, and quantify how the addition of these data points changes the outcomes of meta-analyses. First, we ask which strategies for countering publication bias are fruitful. To answer this question, we evaluate efforts to gather data that are not readily accessible through database searches, which applies both to unpublished literature and to data not reported in published articles. Based on this investigation, we conclude that classical tools such as database and citation searches can already contribute a substantial amount of grey literature. Furthermore, directly contacting authors is a fruitful way to gain access to missing information. We then address whether and how including or excluding grey literature in a selection of meta-analyses affects results, both in terms of indices of publication bias and in terms of the main meta-analytic outcomes. Here, we find no differences in funnel plot asymmetry, but (as could be expected) a decrease in meta-analytic effect sizes when grey literature is included. Based on these experiences, we close with lessons learned and recommendations that generalize to meta-analysts beyond the field of infant research, aimed at getting the most out of the CAMA framework and gathering a maximally unbiased dataset.
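
Since this record accompanies analysis code, a brief sketch may help make the abstract's two comparisons concrete: pooling effect sizes with and without grey literature, and testing for funnel plot asymmetry. The sketch below is not the authors' MetaLab code (which is available through the identifiers in this record); it is a generic Python illustration, assuming a DerSimonian-Laird random-effects model, an Egger-style regression test, and hypothetical toy data.

import numpy as np
from scipy import stats

def dersimonian_laird(effects, variances):
    # Pooled effect and standard error under a DerSimonian-Laird
    # random-effects model.
    w = 1.0 / variances                            # inverse-variance weights
    fixed = np.sum(w * effects) / np.sum(w)        # fixed-effect mean
    q = np.sum(w * (effects - fixed) ** 2)         # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_re = 1.0 / (variances + tau2)                # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    return pooled, np.sqrt(1.0 / np.sum(w_re))

def egger_intercept(effects, variances):
    # Egger-style regression test: regress standardized effects on
    # precision; an intercept far from zero suggests funnel plot
    # asymmetry. Requires scipy >= 1.6 for intercept_stderr.
    se = np.sqrt(variances)
    fit = stats.linregress(1.0 / se, effects / se)
    t = fit.intercept / fit.intercept_stderr
    p = 2 * stats.t.sf(abs(t), df=len(effects) - 2)
    return fit.intercept, p

# Hypothetical toy data (NOT from MetaLab): per-study effect sizes
# (e.g., Hedges' g), sampling variances, and a grey-literature flag.
g = np.array([0.45, 0.30, 0.55, 0.20, 0.10, 0.05, 0.38, 0.02])
v = np.array([0.04, 0.03, 0.06, 0.02, 0.05, 0.04, 0.03, 0.06])
grey = np.array([0, 0, 0, 0, 1, 1, 0, 1], dtype=bool)

for label, keep in [("published + grey", np.ones_like(grey)),
                    ("published only", ~grey)]:
    est, se = dersimonian_laird(g[keep], v[keep])
    b0, p = egger_intercept(g[keep], v[keep])
    print(f"{label}: pooled effect = {est:.3f} (SE = {se:.3f}); "
          f"Egger intercept = {b0:.2f}, p = {p:.2f}")

In such an analysis, the pattern reported in the abstract would correspond to a smaller pooled estimate when grey literature is included, with little change in the Egger intercept.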

Keyword(s)

developmental psychology; effect sizes; grey literature; meta-analysis

Persistent Identifier

https://hdl.handle.net/20.500.12034/2186
https://doi.org/10.23668/psycharchives.2562

Date of first publication

2019

Publisher

PsychArchives

Is referenced by

https://doi.org/10.1027/2151-2604/a000393

Citation

Tsuji, S., Cristia, A., Frank, M. C., & Bergmann, C. (2019). Addressing publication bias in meta-analysis: Empirical findings from community-augmented meta-analyses of infant language development. PsychArchives. https://doi.org/10.23668/psycharchives.2562
  • PsychArchives acquisition timestamp
    2019-08-26T08:17:06Z
  • Made available on
    2019-08-26T08:17:06Z
  • Language of content
    eng
  • Is related to
    https://osf.io/preprints/metaarxiv/q5axy/
  • Is related to
    https://doi.org/10.23668/psycharchives.2561
  • Is related to
    https://doi.org/10.23668/psycharchives.2470
  • Is related to
    https://doi.org/10.1027/2151-2604/a000393
  • Dewey Decimal Classification number(s)
    150
  • DRO type
    code