|dc.description.abstract||When Glass coined the term meta-analysis (MA) in 1976, he referred exclusively to the type of meta-analysis known today as aggregate person data (APD) meta-analysis. In recent years, another type has gained popularity: individual person data (IPD) meta-analysis (Riley et al., 2010; Burke et al., 2017). IPD meta-analysis pools the raw, participant-level data from multiple datasets, e.g., original data from different trials in medicine or from surveys in the social sciences. So far, IPD meta-analysis has mainly been applied in the medical sciences (Jeng et al., 1995; McCormack et al., 2004; Palmerini et al., 2015; Rogozinska et al., 2017) and in psychology (Cuijpers et al., 2014; Gu et al., 2015; Karyotaki et al., 2015). In these disciplines, most original studies focus on some sort of treatment or intervention effect and apply experimental research designs to draw causal conclusions. In contrast, many epidemiological, sociological, or economic studies are non-experimental, i.e., observational studies or studies based on survey data. When analyzing non-experimental data, researchers have to account for confounding bias and cannot rely on simple bivariate effect sizes. Instead, the focus shifts to more sophisticated methods, e.g., regression models, and the “effect sizes” of interest become the regression slopes of focal predictors on an outcome variable (Becker and Wu, 2007; Aloe and Thompson, 2013). Estimating IPD meta-analyses of regression coefficients with survey-based data, however, poses a challenge. In contrast to experimental data, survey-based data is subject to complex sampling, such as stratification of the population and cluster sampling. To account for complex or endogenous sampling schemes, survey-based data often comes with survey weights, ranging from design-based weights to nonresponse and post-stratification weights.
These weights can be used to obtain approximately unbiased population estimates. Survey-weighted regression sits between the two classical inferential frameworks, model-based (Fisher, 1922) and design-based (Neyman, 1934) inference. Until now, the literature on IPD meta-analysis with complex survey data has been sparse. So, even though IPD meta-analysis can be considered the “gold standard” in evidence-driven research, it remains unclear how to deal with non-experimental, survey-based data that is subject to complex sampling. We systematically explore when and how to use survey weighting in regression-based analyses in combination with different IPD meta-analytical approaches, building on the work of DuMouchel and Duncan (1983) and Solon et al. (2013) on survey-weighted regression analysis. Using Monte Carlo simulations, we show that, in the meta-analytical case, endogenous sampling and heterogeneous effects require survey weighting to obtain approximately unbiased estimates. Yet even though most researchers primarily aim for approximately unbiased estimates, we do not recommend using weights “just in case”: weights can increase the variance of meta-analytical estimates quite dramatically. We further address a set of methodological questions: Do survey-weighted one-stage and two-stage meta-analyses perform differently? How should weights be handled when surveys have different numbers of observations, and is it necessary to transform the weights? Is it possible to include random effects in a survey-weighted meta-analysis, especially if study heterogeneity must be assumed? A particularly challenging question is the inclusion of random effects in a one-stage meta-analysis. Our simulations show that two-stage IPD meta-analysis is biased when the variation in the weights is high, whereas one-stage IPD meta-analysis remains unbiased.
We show that researchers can improve the efficiency of their one-stage IPD analysis by transforming the weights with one of the transformations proposed by Korn and Graubard (1999). This scaling is beneficial in the case of surveys with different sample sizes. We also show that the inclusion of random effects in a one-stage meta-analysis is challenging but feasible; in most cases, a transformation of the weights is needed.
References:
Aloe, A. M. and Thompson, C. G. (2013). The Synthesis of Partial Effect Sizes. Journal of the Society for Social Work and Research, 4(4):390–405.
Becker, B. J. and Wu, M.-J. (2007). The Synthesis of Regression Slopes in Meta-Analysis. Statistical Science, 22(3):414–429.
Burke, D. L., Ensor, J., and Riley, R. D. (2017). Meta-analysis Using Individual Participant Data: One-stage and Two-stage Approaches, and Why They May Differ. Statistics in Medicine, 36(5):855–875.
Cuijpers, P., Weitz, E., Twisk, J., Kuehner, C., Cristea, I., David, D., DeRubeis, R. J., Dimidjian, S., Dunlop, B. W., Faramarzi, M., Hegerl, U., Jarrett, R. B., Kennedy, S. H., Kheirkhah, F., Mergl, R., Miranda, J., Mohr, D. C., Segal, Z. V., Siddique, J., Simons, A. D., Vittengl, J. R., and Hollon, S. D. (2014). Gender as Predictor and Moderator of Outcome in Cognitive Behaviour Therapy and Pharmacotherapy for Adult Depression: An "Individual Patient Data" Meta-analysis. Depression and Anxiety, 31(11):941–951.
DuMouchel, W. H. and Duncan, G. J. (1983). Using Sample Survey Weights in Multiple Regression Analyses of Stratified Samples. Journal of the American Statistical Association, 78(383):535–543.
Fisher, R. (1922). On the Mathematical Foundations of Theoretical Statistics. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 222(594-604):309–368.
Gu, J., Strauss, C., Bond, R., and Cavanagh, K. (2015). How do Mindfulness Based Cognitive Therapy and Mindfulness-based Stress Reduction Improve Mental Health and Wellbeing? A Systematic Review and Meta-analysis of Mediation Studies. Clinical Psychology Review, 37:1–12.
Jeng, G., Scott, J., and Burmeister, L. (1995). A Comparison of Meta-analytic Results Using Literature vs Individual Patient Data: Paternal Cell Immunization for Recurrent Miscarriage. JAMA, 274(10):830–836.
Karyotaki, E., Kleiboer, A., Smit, F., Turner, D. T., Pastor, A. M., Andersson, G., Berger, T., Botella, C., Breton, J. M., Carlbring, P., and et al. (2015). Predictors of Treatment Dropout in Self-guided Web-based Interventions for Depression: An "Individual Patient Data" Meta-analysis. Psychological Medicine, 45(13):2717–2726.
Korn, E. L. and Graubard, B. I. (1999). Analyses Using Multiple Surveys. In Korn, E. L. and Graubard, B. I., editors, Analysis of Health Surveys, chapter 8, pages 278–303. Wiley-Blackwell.
McCormack, K., Grant, A., and Scott, N. (2004). Value of Updating a Systematic Review in Surgery Using Individual Patient Data. BJS, 91(4):495–499.
Neyman, J. (1934). On the Two Different Aspects of the Representative Method: The Method of Stratified Sampling and the Method of Purposive Selection. Journal of the Royal Statistical Society, 97(4):558–625.
Palmerini, T., Sangiorgi, D., Valgimigli, M., Biondi-Zoccai, G., Feres, F., Abizaid, A., Costa, R. A., Hong, M.-K., Kim, B.-K., Jang, Y., Kim, H.-S., Park, K. W., Mariani, A., Riva, D. D., Genereux, P., Leon, M. B., Bhatt, D. L., Bendetto, U., Rapezzi, C., and Stone, G. W. (2015). Short- Versus Long-term Dual Antiplatelet Therapy After Drug-eluting Stent Implantation: An Individual Patient Data Pairwise and Network Meta-analysis. Journal of the American College of Cardiology, 65(11):1092–1102.
Riley, R. D., Lambert, P. C., and Abo-Zaid, G. (2010). Meta-analysis of Individual Participant Data: Rationale, Conduct, and Reporting. BMJ, 340:c221.
Rogozinska, E., Marlin, N., Thangaratinam, S., Khan, K. S., and Zamora, J. (2017). Meta-analysis Using Individual Participant Data from Randomised Trials: Opportunities and Limitations Created by Access to Raw Data. BMJ Evidence-Based Medicine, 22(5):157–162.
Solon, G., Haider, S. J., and Wooldridge, J. (2013). What Are We Weighting For? Working Paper 18859, National Bureau of Economic Research.||en_US|