Please use this identifier to cite or link to this item: http://dx.doi.org/10.23668/psycharchives.4813
Title: Transparency and Reporting Practices in Reliability Generalization Meta-analyses: Assessment with the REGEMA checklist
Authors: Sánchez-Meca, Julio
Marín-Martínez, Fulgencio
Núñez-Núñez, Rosa María
Rubio-Aparicio, María
López-López, José Antonio
Blázquez-Rincón, Desirée María
López-Ibáñez, Carmen
López-Nicolás, Rubén
López-Pina, José Antonio
López-García, Juan José
Issue Date: 21-May-2021
Publisher: ZPID (Leibniz Institute for Psychology)
Abstract: 1. Background
A reliability generalization (RG) meta-analysis is a special type of psychometric meta-analysis that aims to integrate the reliability coefficients obtained when a given test is applied in different primary studies, in order to examine how the reliability of the test's scores varies from one application to the next. Its purpose is to estimate the average reliability of a test's scores, to investigate whether reliability can be generalized across contexts, situations, and target populations, and, in case of heterogeneity, to identify study characteristics that might be statistically associated with the reliability coefficients (e.g., mean and SD of test scores, target population, test version, etc.). Current checklists to guide the reporting of meta-analyses are not adequate for RG meta-analyses:
- PRISMA checklist (Preferred Reporting Items for Systematic Reviews and Meta-Analyses; Moher et al., 2009)
- AMSTAR checklist for assessing the methodological quality of systematic reviews (Grimshaw, Wells, et al., 2007)
- MOOSE checklist for reporting meta-analyses of observational studies (Stroup, Berlin, Morton, et al., 2000)
- MARS guidelines for reporting meta-analyses (APA Publications and Communications Board Working Group on Journal Article Reporting Standards, 2008)
Given the absence in the literature of guidelines specifically devised to inform researchers and consumers of RG meta-analyses on how to conduct and report them, we developed the REGEMA checklist, a tool to guide researchers in reporting RG meta-analyses in accordance with the transparency and reproducibility principles of Open Science.
1.1 Objectives
The main objectives of this research were: (a) to examine the inter-coder reliability of the REGEMA checklist; (b) to investigate the capacity of the REGEMA checklist to assess the reporting quality of RG meta-analyses; and (c) to assess the degree of compliance with the REGEMA checklist by RG meta-analyses.
2.
Method
2.1 Selection criteria of the studies
To be included in this systematic review, the meta-analyses had to fulfil the following selection criteria: (a) to be an RG meta-analysis reporting reliability estimates of one or several tests assessing psychological constructs; (b) to have been carried out between 1998 and 2019; (c) to include all primary studies that applied the test(s) of interest (meta-analyses of only psychometric studies were excluded); (d) in case of focusing on more than one test, to report separate reliability estimates for each of them; and (e) to be written in English or Spanish.
2.2 Search for the studies
The following databases were consulted: PsycINFO, Web of Science, and Google Scholar, using the following keywords in the title: 'reliability generalization', 'meta-analysis AND reliability', 'meta-analysis AND internal consistency', 'meta-analysis AND test-retest', 'meta-analysis AND (interrater OR intrarater)', 'meta-analysis AND (alpha coefficient OR coefficient alpha)', and 'meta-analysis AND intraclass'. A total of 150 RG meta-analyses (19 unpublished and 131 published) were identified and included in this systematic review.
2.3 Data extraction
The REGEMA checklist was applied to each RG meta-analysis. Other characteristics recorded were the publication source, the year of the meta-analysis, and the language. Inter-coder reliability was assessed by having two coders independently extract the data from the RG meta-analyses and apply the REGEMA checklist. Inconsistencies were resolved by consensus.
2.4 Statistical analysis
To estimate the inter-coder reliability of the REGEMA checklist, Cohen's kappa coefficient and the inter-coder agreement percentage were calculated for each item and subitem of the checklist. To assess the degree of compliance with each item of the REGEMA checklist, the compliance percentage across all the RG meta-analyses analyzed was calculated.
3.
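As a rough illustration of the per-item statistics described in the statistical analysis above, the following Python sketch computes Cohen's kappa and the inter-coder agreement percentage for two coders' ratings of a single checklist item. The data and the binary reported/not-reported coding are hypothetical, not taken from the study:

```python
def agreement(r1, r2):
    """Percentage of units on which the two coders agree."""
    return 100.0 * sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n            # observed agreement
    categories = set(r1) | set(r2)
    pe = sum((r1.count(c) / n) * (r2.count(c) / n)          # chance agreement
             for c in categories)
    return 1.0 if pe == 1.0 else (po - pe) / (1.0 - pe)

# Hypothetical codings of one REGEMA item across 10 RG meta-analyses
# (1 = item reported, 0 = item not reported)
coder_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
coder_b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]

print(f"agreement = {agreement(coder_a, coder_b):.0f}%")    # 80%
print(f"kappa     = {cohens_kappa(coder_a, coder_b):.2f}")  # 0.52
```

Kappa discounts the agreement expected by chance, so it is lower than the raw agreement percentage whenever the coders' marginal distributions make chance agreement likely.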
Results
The REGEMA checklist exhibited a very satisfactory degree of inter-coder reliability. The degree of compliance with REGEMA by RG meta-analyses was deficient for many of its items (< 50% compliance), especially: (a) defining, in the background, the psychological construct assessed; (b) specifying the selection criteria of the studies; (c) reporting the reliability analysis of the data extraction from the studies; and (d) reporting how heterogeneity among reliability coefficients was assessed.
4. Conclusions and implications
The REGEMA checklist is easy to apply and makes it possible to assess the reporting quality of RG meta-analyses. The REGEMA checklist can be useful for: (a) researchers interested in conducting an RG meta-analysis; (b) potential readers of RG meta-analyses, as a tool for reading them critically; and (c) editors of scientific journals that publish RG meta-analyses, as a guide for the adequate reporting of this type of meta-analysis.
Funding
This research was funded by a grant from the Ministerio de Ciencia e Innovación of the Spanish Government and by FEDER funds (Project nº PID2019-104080GB-I00).
URI: https://hdl.handle.net/20.500.12034/4250
http://dx.doi.org/10.23668/psycharchives.4813
Citation: Sánchez-Meca, J., Marín-Martínez, F., Núñez-Núñez, R. M., Rubio-Aparicio, M., López-López, J. A., Blázquez-Rincón, D. M., López-Ibáñez, C., López-Nicolás, R., López-Pina, J. A., & López-García, J. J. (2021). Transparency and Reporting Practices in Reliability Generalization Meta-analyses: Assessment with the REGEMA checklist. ZPID (Leibniz Institute for Psychology). https://doi.org/10.23668/PSYCHARCHIVES.4813
Appears in Collections:Conference Object