Analytic reproducibility in articles receiving open data badges at the journal Psychological Science: An observational study

Publication: Journal contributions › Journal articles › Research › peer-reviewed

Standard

Analytic reproducibility in articles receiving open data badges at the journal Psychological Science: An observational study. / Hardwicke, Tom E.; Bohn, Manuel; MacDonald, Kyle et al.
In: Royal Society Open Science, Vol. 8, No. 1, 201494, 06.01.2021.


Harvard

Hardwicke, TE, Bohn, M, MacDonald, K, Hembacher, E, Nuijten, MB, Peloquin, BN, Demayo, BE, Long, B, Yoon, EJ & Frank, MC 2021, 'Analytic reproducibility in articles receiving open data badges at the journal Psychological Science: An observational study', Royal Society Open Science, vol. 8, no. 1, 201494. https://doi.org/10.1098/rsos.201494

APA

Hardwicke, T. E., Bohn, M., MacDonald, K., Hembacher, E., Nuijten, M. B., Peloquin, B. N., Demayo, B. E., Long, B., Yoon, E. J., & Frank, M. C. (2021). Analytic reproducibility in articles receiving open data badges at the journal Psychological Science: An observational study. Royal Society Open Science, 8(1), Article 201494. https://doi.org/10.1098/rsos.201494

Vancouver

Hardwicke TE, Bohn M, MacDonald K, Hembacher E, Nuijten MB, Peloquin BN et al. Analytic reproducibility in articles receiving open data badges at the journal Psychological Science: An observational study. Royal Society Open Science. 2021 Jan 6;8(1):201494. doi: 10.1098/rsos.201494

Bibtex

@article{ecf1b3117b4e4967be7c1f7af45f4327,
title = "Analytic reproducibility in articles receiving open data badges at the journal Psychological Science: An observational study",
abstract = "For any scientific report, repeating the original analyses upon the original data should yield the original outcomes. We evaluated analytic reproducibility in 25 Psychological Science articles awarded open data badges between 2014 and 2015. Initially, 16 (64%, 95% confidence interval [43,81]) articles contained at least one 'major numerical discrepancy' (>10% difference) prompting us to request input from original authors. Ultimately, target values were reproducible without author involvement for 9 (36% [20,59]) articles; reproducible with author involvement for 6 (24% [8,47]) articles; not fully reproducible with no substantive author response for 3 (12% [0,35]) articles; and not fully reproducible despite author involvement for 7 (28% [12,51]) articles. Overall, 37 major numerical discrepancies remained out of 789 checked values (5% [3,6]), but original conclusions did not appear affected. Non-reproducibility was primarily caused by unclear reporting of analytic procedures. These results highlight that open data alone is not sufficient to ensure analytic reproducibility.",
keywords = "journal policy, meta-research, open badges, open data, open science, reproducibility, Psychology",
author = "Hardwicke, {Tom E.} and Manuel Bohn and Kyle MacDonald and Emily Hembacher and Nuijten, {Mich{\`e}le B.} and Peloquin, {Benjamin N.} and Demayo, {Benjamin E.} and Bria Long and Yoon, {Erica J.} and Frank, {Michael C.}",
note = "Funding Information: Data accessibility. All data, materials and analysis scripts related to this study are publicly available on the Open Science Framework (https://osf.io/n3dej/). To facilitate reproducibility, this manuscript was written by interleaving regular prose and analysis code using knitr [28] and papaja [29], and is available in a Code Ocean container (https://doi.org/ 10.24433/CO.1796004.v3) which re-creates the software environment in which the original analyses were performed. Authors{\textquoteright} contributions. T.E.H. and M.C.F. designed the study. T.E.H., M.B., K.M., E.H., M.B.N., B.N.P., B.E.d.M., B.L., E.J.Y. and M.C.F. conducted the reproducibility checks. T.E.H. performed the data analysis. T.E.H. and M.C.F. wrote the manuscript. M.B. and M.B.N. provided feedback on the manuscript. All authors gave final approval for publication. Competing interests. We declare we have no competing interests. Funding. T.E.H.{\textquoteright}s contribution was enabled by a general support grant awarded to the Meta-Research Innovation Center at Stanford (METRICS) from the Laura and John Arnold Foundation and a grant from the Einstein Foundation and Stiftung Charit{\'e} awarded to the Meta-Research Innovation Center Berlin (METRIC-B). Acknowledgements. We are grateful to the authors of the original articles for their assistance with the reproducibility checks. We thank students from Stanford{\textquoteright}s Psych 251 class, who contributed to the initial reproducibility checks. Publisher Copyright: {\textcopyright} 2021 The Authors.",
year = "2021",
month = jan,
day = "6",
doi = "10.1098/rsos.201494",
language = "English",
volume = "8",
journal = "Royal Society Open Science",
issn = "2054-5703",
publisher = "The Royal Society",
number = "1",

}

RIS

TY - JOUR

T1 - Analytic reproducibility in articles receiving open data badges at the journal Psychological Science

T2 - An observational study

AU - Hardwicke, Tom E.

AU - Bohn, Manuel

AU - MacDonald, Kyle

AU - Hembacher, Emily

AU - Nuijten, Michèle B.

AU - Peloquin, Benjamin N.

AU - Demayo, Benjamin E.

AU - Long, Bria

AU - Yoon, Erica J.

AU - Frank, Michael C.

N1 - Funding Information: Data accessibility. All data, materials and analysis scripts related to this study are publicly available on the Open Science Framework (https://osf.io/n3dej/). To facilitate reproducibility, this manuscript was written by interleaving regular prose and analysis code using knitr [28] and papaja [29], and is available in a Code Ocean container (https://doi.org/10.24433/CO.1796004.v3) which re-creates the software environment in which the original analyses were performed. Authors’ contributions. T.E.H. and M.C.F. designed the study. T.E.H., M.B., K.M., E.H., M.B.N., B.N.P., B.E.d.M., B.L., E.J.Y. and M.C.F. conducted the reproducibility checks. T.E.H. performed the data analysis. T.E.H. and M.C.F. wrote the manuscript. M.B. and M.B.N. provided feedback on the manuscript. All authors gave final approval for publication. Competing interests. We declare we have no competing interests. Funding. T.E.H.’s contribution was enabled by a general support grant awarded to the Meta-Research Innovation Center at Stanford (METRICS) from the Laura and John Arnold Foundation and a grant from the Einstein Foundation and Stiftung Charité awarded to the Meta-Research Innovation Center Berlin (METRIC-B). Acknowledgements. We are grateful to the authors of the original articles for their assistance with the reproducibility checks. We thank students from Stanford’s Psych 251 class, who contributed to the initial reproducibility checks. Publisher Copyright: © 2021 The Authors.

PY - 2021/1/6

Y1 - 2021/1/6

N2 - For any scientific report, repeating the original analyses upon the original data should yield the original outcomes. We evaluated analytic reproducibility in 25 Psychological Science articles awarded open data badges between 2014 and 2015. Initially, 16 (64%, 95% confidence interval [43,81]) articles contained at least one 'major numerical discrepancy' (>10% difference) prompting us to request input from original authors. Ultimately, target values were reproducible without author involvement for 9 (36% [20,59]) articles; reproducible with author involvement for 6 (24% [8,47]) articles; not fully reproducible with no substantive author response for 3 (12% [0,35]) articles; and not fully reproducible despite author involvement for 7 (28% [12,51]) articles. Overall, 37 major numerical discrepancies remained out of 789 checked values (5% [3,6]), but original conclusions did not appear affected. Non-reproducibility was primarily caused by unclear reporting of analytic procedures. These results highlight that open data alone is not sufficient to ensure analytic reproducibility.

AB - For any scientific report, repeating the original analyses upon the original data should yield the original outcomes. We evaluated analytic reproducibility in 25 Psychological Science articles awarded open data badges between 2014 and 2015. Initially, 16 (64%, 95% confidence interval [43,81]) articles contained at least one 'major numerical discrepancy' (>10% difference) prompting us to request input from original authors. Ultimately, target values were reproducible without author involvement for 9 (36% [20,59]) articles; reproducible with author involvement for 6 (24% [8,47]) articles; not fully reproducible with no substantive author response for 3 (12% [0,35]) articles; and not fully reproducible despite author involvement for 7 (28% [12,51]) articles. Overall, 37 major numerical discrepancies remained out of 789 checked values (5% [3,6]), but original conclusions did not appear affected. Non-reproducibility was primarily caused by unclear reporting of analytic procedures. These results highlight that open data alone is not sufficient to ensure analytic reproducibility.

KW - journal policy

KW - meta-research

KW - open badges

KW - open data

KW - open science

KW - reproducibility

KW - Psychology

UR - http://www.scopus.com/inward/record.url?scp=85100950783&partnerID=8YFLogxK

U2 - 10.1098/rsos.201494

DO - 10.1098/rsos.201494

M3 - Journal articles

C2 - 33614084

AN - SCOPUS:85100950783

VL - 8

JO - Royal Society Open Science

JF - Royal Society Open Science

SN - 2054-5703

IS - 1

M1 - 201494

ER -

Documents

DOI