Conference Object

Multilevel meta-analysis of complex single-case designs: Raw data versus effect sizes

Author(s) / Creator(s)

Declercq, Lies
Jamshidi, Laleh
Van den Noortgate, Wim

Abstract / Description

Background: In a single-case experimental design (SCED), a dependent variable is manipulated and repeatedly measured within a single subject or unit to verify the effect of the manipulations (‘treatments’) on that variable (Onghena & Edgington, 2005). Reports on SCED studies typically include scatterplots of the time series for one or more observed cases, making the raw SCED data readily available for meta-analysis. Raw SCED data obtained from multiple cases in one or more SCED studies contain dependencies due to a nested hierarchical structure: measurements are nested within cases, which in turn are nested within studies. To account for this nesting, Van den Noortgate and Onghena (2003) proposed a hierarchical linear model with three levels to synthesize raw SCED data across cases. If the raw data are not available, Van den Noortgate and Onghena (2008) illustrate an alternative approach to statistically combine effect sizes from SCED studies: they propose an alternative standardized mean difference as an effect size to express the effect of the treatment for a particular case. These effect sizes are then combined in a three-level meta-analytic model.

Objectives, research questions and hypotheses: In a simulation study, we compare both multilevel approaches for synthesizing SCED data: the multilevel analysis of SCED raw data (RD approach) versus the multilevel analysis of SCED effect sizes (ES approach). For three models of increasing complexity, we simulate datasets and apply both approaches. More complex three-level models involve more regression coefficients and therefore more parameters to estimate. As such, the ES approach has an important potential benefit over the RD approach: the multilevel model estimated from the effect sizes is reduced, so there are fewer parameters to estimate. This might result in faster estimation procedures and better convergence rates compared to the RD approach. A drawback of the ES approach, however, is the loss of information incurred by reducing the rich raw data to effect sizes. It is not clear whether the reduction in data combined with the smaller model in the ES approach will result in better or worse performance compared to the RD approach. We therefore compare the performance of both approaches in this simulation study by assessing the quality of the estimates, the convergence rate and the efficiency of each.

Method: A basic single-case design involves two phases: a baseline phase and a treatment phase. The most basic multilevel model for this type of data models a constant baseline level and an effect of the treatment on that level. Both coefficients are assumed to vary randomly around an overall mean at three levels, due to 1) random sampling, 2) variation across participants and 3) variation across studies. As an alternative to applying such a three-level model to the raw data (RD approach), a three-level model can also be applied to SCED effect sizes (ES approach). To calculate such effect sizes, we follow the approach proposed by Van den Noortgate and Onghena (2008): we first obtain case-specific effect sizes, which are subsequently used in a three-level meta-analytic model to estimate the overall treatment effect. In this simulation study, we generate raw SCED data from three models: the simple intercept-only model described above (model 1), a linear time trend model with a slope in both phases (model 2), and a quadratic time trend model (model 3).
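To make the RD approach concrete, the sketch below fits the basic three-level model (model 1) in R with lme4. This is an editorial illustration, not the authors' simulation code: the data frame sced_data and its columns y (outcome), phase (0 = baseline, 1 = treatment), case and study are hypothetical.

```r
# Minimal sketch of the RD approach for model 1 (hypothetical names).
# sced_data is assumed to hold one row per measurement, with columns
# y (outcome), phase (0 = baseline, 1 = treatment), case and study.
library(lme4)

fit_rd <- lmer(
  y ~ phase +                   # overall baseline level and treatment effect
    (1 + phase | study) +       # both coefficients vary across studies
    (1 + phase | study:case),   # ... and across cases within studies
  data = sced_data
)
summary(fit_rd)
```

The two random-effects terms let the baseline level and the treatment effect vary across studies and across cases within studies, matching the three levels described above.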
For each model we simulate 1,000 datasets and apply the two approaches: we fit a three-level model directly to the raw data (RD approach), and we use the raw data to first calculate effect sizes and then fit a three-level model to those effect sizes (ES approach). Note that for models 2 and 3, the treatment has an effect not only on the intercept (the constant) but also on the linear coefficient (models 2 and 3) and on the quadratic coefficient (model 3). The ES approach therefore requires a multivariate three-level model to simultaneously model two or three effect sizes.

Results: In terms of convergence, the ES approach performs well for all three models, with convergence rates of 98% or higher. The RD approach performs slightly worse for model 2 but breaks down for model 3, where only about half of the simulations converge. Convergence is especially poor for datasets with larger sample sizes. In terms of absolute speed, the comparison between the approaches depends, of course, on the software and the system used. The simulation was implemented in R, with lme4 (Bates, Mächler, Bolker, & Walker, 2014) for the RD approach and metafor (Viechtbauer, 2010) for the ES approach. With identical settings for the optimizer and the maximum number of function evaluations in both approaches, the RD approach was faster for fitting complex models to small datasets. However, a single model fit almost always took less than a minute, so any difference between the approaches may be negligible in practice. In terms of quality of the estimates, the fixed-effect estimates were unbiased for both approaches and had identically small mean squared errors (MSEs). However, the ES approach resulted in confidence intervals that were consistently too narrow and Type I error rates that were consistently too high. For the variance components, the ES estimates were less biased than those from the RD approach.

Conclusions: Both approaches provide reliable point estimates regardless of the underlying model complexity. However, when using effect sizes in a three-level meta-analytic model, inference results might be unreliable. This is in line with previous research, and several adjustments and alternative testing procedures have been proposed and compared to accommodate this problem (Sánchez-Meca & Marín-Martínez, 2008). With more complex models, the raw data approach tends to throw convergence warnings and errors. Based on our findings, we can confirm that the effect size approach is a reasonable alternative when SCED raw data are not available. Caution is advised, however, when performing unadjusted Wald-type z- or t-tests on the overall effect sizes, because these tests lead to unreliable confidence intervals and p-values.
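The ES approach can be sketched in the same spirit: reduce each case to a standardized mean difference with a sampling variance, then combine those effect sizes in a three-level model with metafor. Again a hypothetical illustration; the exact effect size and sampling variance proposed by Van den Noortgate and Onghena (2008) differ in detail from the generic Cohen's-d-type quantities used here.

```r
# Minimal sketch of the ES approach for model 1 (hypothetical names).
# Step 1: reduce each case to a standardized mean difference and an
# approximate sampling variance; step 2: combine in a three-level model.
library(dplyr)
library(metafor)

es_data <- sced_data |>
  group_by(study, case) |>
  summarise(
    d  = (mean(y[phase == 1]) - mean(y[phase == 0])) / sd(y[phase == 0]),
    n0 = sum(phase == 0),
    n1 = sum(phase == 1),
    .groups = "drop"
  ) |>
  # textbook large-sample approximation of the SMD sampling variance
  mutate(v = 1 / n0 + 1 / n1 + d^2 / (2 * (n0 + n1)))

fit_es <- rma.mv(d, v, random = ~ 1 | study/case, data = es_data)
summary(fit_es)
```

For models 2 and 3, this single-outcome call would be replaced by a multivariate three-level model with two or three effect sizes per case, as noted above.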
References:

Bates, D., Mächler, M., Bolker, B., & Walker, S. (2014). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01

Onghena, P., & Edgington, E. S. (2005). Customization of pain treatments: Single-case design and analysis. The Clinical Journal of Pain, 21(1), 56–68. https://doi.org/10.1097/00002508-200501000-00007

Sánchez-Meca, J., & Marín-Martínez, F. (2008). Confidence intervals for the overall effect size in random-effects meta-analysis. Psychological Methods, 13(1), 31–48. https://doi.org/10.1037/1082-989X.13.1.31

Van den Noortgate, W., & Onghena, P. (2003). Combining single-case experimental data using hierarchical linear models. School Psychology Quarterly, 18(3), 325–346. https://doi.org/10.1521/scpq.18.3.325.22577

Van den Noortgate, W., & Onghena, P. (2008). A multilevel meta-analysis of single-subject experimental design studies. Evidence-Based Communication Assessment and Intervention, 2(3), 142–151. https://doi.org/10.1080/17489530802505362

Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1–48. https://doi.org/10.18637/jss.v036.i03

Persistent Identifier

https://hdl.handle.net/20.500.12034/2108
https://doi.org/10.23668/psycharchives.2482

Date of first publication

2019-05-31

Is part of

Research Synthesis 2019 incl. Pre-Conference Symposium Big Data in Psychology, Dubrovnik, Croatia

Publisher

ZPID (Leibniz Institute for Psychology Information)

Citation

Declercq, L., Jamshidi, L., & Van den Noortgate, W. (2019, May 31). Multilevel meta-analysis of complex single-case designs: Raw data versus effect sizes. ZPID (Leibniz Institute for Psychology Information). https://doi.org/10.23668/psycharchives.2482
  • PsychArchives acquisition timestamp
    2019-06-14T09:40:25Z
  • Made available on
    2019-06-14T09:40:25Z
  • Language of content
    eng
  • Dewey Decimal Classification number(s)
    150
  • DRO type
    conferenceObject
  • Visible tag(s)
    ZPID Conferences and Workshops