
Dr. Domenico Giusti
Paläoanthropologie, Senckenberg Centre for Human Evolution and Palaeoenvironment
This image was created by Scriberia for The Turing Way community and is used under a CC-BY licence
A problem with any one of these three types of reproducibility [...] can be enough to derail the process of establishing scientific facts. Each type calls for different remedies, from improving existing communication standards and reporting (empirical reproducibility) to making computational environments available for replication purposes (computational reproducibility) to the statistical assessment of repeated results for validation purposes (statistical reproducibility).
Stodden 2014 [Online; accessed 20 May 2021]
An article (about a computational result) is advertising, not scholarship. The actual scholarship is the full software environment, code and data, that produced the result.
Buckheit & Donoho 1995
-> F. Markowetz - 5 selfish reasons to work reproducibly [Online; accessed 20 May 2021]
How computers broke science – and what we can do to fix it [Online; accessed 20 May 2021]
"Overall the report introduces the concept of reproducibility as a continuum of practices".
"It is posited that the reproducibility of results has value both as a mechanism to ensure good science based on truthful claims, and as a driver of further discovery and innovation." Reproducibility of scientific results in the EU
A likely culprit for this disconnect [embracing, but not practicing open science principles] is an academic reward system that does not sufficiently incentivize open practices.
Nosek et al. 2015
"Journal policies can be evaluated based on the degree to which they comply with the TOP Guidelines. This TOP Factor is a metric that reports the steps that a journal is taking to implement open science practices, practices that are based on the core principles of the scientific community. It is an alternative way to assess journal qualities, and is an improvement over traditional metrics that measure mean citation rates. The TOP Factor is transparent [...] and will be responsive to community feedback." TOP Guidelines
Read more about TOP Factor here.
1. Plan for reproducibility before you start
2. Keep track of things
3. Share and license your research
4. Report your research transparently
Everything is in the paper; anyone can reproduce this from there!
This is one of the most common misconceptions. Even an extremely detailed description of the methods and workflows employed to reach the final result will, in most cases, not be sufficient to reproduce it. Several factors can be responsible, including differences in computational environments, differences in software versions, and implicit assumptions or biases that were not clearly stated.
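As a minimal sketch of what "computational environment" means in practice, the snippet below (Python, chosen purely for illustration; nothing in the text prescribes a language or tool) records the interpreter, operating system, and package versions that an analysis depends on, so they can be reported alongside the paper rather than left implicit:

```python
import sys
import platform
from importlib import metadata

def environment_report(packages):
    """Collect interpreter, OS, and package versions into a dict.

    `packages` is a list of distribution names (hypothetical here)
    whose installed versions should be recorded.
    """
    report = {
        "python": sys.version.split()[0],
        "os": platform.platform(),
    }
    for name in packages:
        try:
            report[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            # Recording absence is itself useful reproducibility information
            report[name] = "not installed"
    return report

# Example: record the versions a hypothetical analysis relies on
for key, value in environment_report(["numpy", "pandas"]).items():
    print(f"{key}: {value}")
```

Dedicated tools (e.g. conda environment files or container images) capture the environment more completely; this sketch only illustrates the kind of information that a prose methods section typically omits.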
I don’t have the time to learn and establish a reproducible workflow.
A significant number of freely available online services can be combined to set up an entire workflow. Moreover, the time and effort spent putting one together both increases the scientific validity of the final results and minimizes the time needed to re-run or extend the analysis in further studies.
The basic building blocks of archaeological knowledge are non-replicable observations, but this does not mean we are immune to the reproducibility crisis.
But archaeological research is not just a list of sites and artefacts. In order to extract understanding from our irreproducible corpus of material, we subject it to an extraordinary range of analytical methods, most of which are replicable. This is where archaeologists have been most active in promoting reproducibility, as part of a larger trend towards open science.
We can’t rerun the history that produced that material, or even the process through which we obtained it. So how can we obtain reproducible results from non-replicable observations?
J. Roe. 2016. Does archaeology have a reproducibility crisis? [Online; accessed 20 May 2021]
-> Consider how the concepts of empirical and computational reproducibility apply to Archaeology.