Supplementary material from "Analytic reproducibility in articles receiving open data badges at Psychological Science: an observational study"

by Tom E Hardwicke, Manuel Bohn, Kyle MacDonald, Emily Hembacher, Michèle B. Nuijten, Benjamin N. Peloquin, Benjamin E. DeMayo, Bria Long, Erica Yoon, Michael C. Frank

Published on figshare by The Royal Society.

2020  

Abstract

For any scientific report, repeating the original analyses upon the original data should yield the original outcomes. We evaluated analytic reproducibility in 25 Psychological Science articles awarded open data badges between 2014 and 2015. Initially, 16 (64%, 95% confidence interval [43,81]) articles contained at least one 'major numerical discrepancy' (>10% difference) prompting us to request input from original authors. Ultimately, target values were reproducible without author involvement for 9 (36% [20,59]) articles; reproducible with author involvement for 6 (24% [8,47]) articles; not fully reproducible with no substantive author response for 3 (12% [0,35]) articles; and not fully reproducible despite author involvement for 7 (28% [12,51]) articles. Overall, 37 major numerical discrepancies remained out of 789 checked values (5% [3,6]), but original conclusions did not appear affected. Non-reproducibility was primarily caused by unclear reporting of analytic procedures. These results highlight that open data alone is not sufficient to ensure analytic reproducibility.
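The interval method behind the bracketed ranges is not stated on this page. Purely as an illustration, the following Python sketch recomputes the quoted proportions with a Clopper-Pearson ("exact") binomial interval via scipy; this is an assumed method, so its bounds may differ slightly from the published ones.

from scipy.stats import binomtest

# Counts reported in the abstract: (numerator, denominator).
outcomes = {
    "initial major discrepancy": (16, 25),
    "reproducible without authors": (9, 25),
    "reproducible with authors": (6, 25),
    "not reproducible, no author response": (3, 25),
    "not reproducible despite authors": (7, 25),
    "remaining major discrepancies": (37, 789),
}

for label, (k, n) in outcomes.items():
    # Clopper-Pearson ("exact") 95% CI -- an assumption here, not
    # necessarily the interval method used in the original article.
    ci = binomtest(k, n).proportion_ci(confidence_level=0.95, method="exact")
    print(f"{label}: {k}/{n} = {k/n:.0%}, 95% CI [{ci.low:.0%}, {ci.high:.0%}]")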

Archived Content

There are no accessible files associated with this release. You can check other releases of this work for an accessible version.

Not Preserved

Type: stub
Stage: published
Date: 2020-12-23
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 627a1a2a-ae0b-4776-8835-2ed454f8bee2
API URL: JSON