When Research Is A Black Box, No One Wins
Reproducible research (RR) provides a road map through a study. In other words, conducting RR means engaging in a transparent analytic process. This prospect can be both exciting and terrifying.
Many of us harbor a burning fear that we will be "found out" as someone trying to "fake it until we make it." No matter how many goals we reach or degrees we earn, a hint of doubt remains. This doubt can keep us humble, vigilant, and hungry for professional growth. For some, a potent mix of doubt and fear is fuel for ambition. Unfortunately, this same combination can also make RR a scary concept.
The RR movement champions accuracy and clarity over "progress" and polish. Unfortunately, for too long much of the research world prioritized the latter over the former. According to a 2013 Economist article, replication studies show that as few as 11 to 25% of selected published biomedical findings held up when re-tested. Similar doubts were cast on psychological priming research. There may be many explanations for these findings, but when science is a black box, the only certain products are uncertainty and doubt.
We must learn from these revelations. Though intimidating, RR is a critical step toward restoring confidence in the research community. As colleagues, we must reward transparency with constructive feedback and support. As researchers, we must develop thick skin when feedback is critical. For RR to gain momentum, we must find a delicate balance between constructive feedback and mutual respect for the scientific process. That process is rooted in replication and falsification. Researchers must be free to champion transparency without fear of ridicule or embarrassment. After all, every scientific discovery stands on the shoulders of many disproven theories.
Strategies & Practices
There are many ways to promote RR, but the most important step is to educate yourself: make it a priority to learn what RR is, how it works, and how you can apply it to your own work. There are many excellent sources of information available, and I've posted some useful resources below. In practice, promoting RR can be as simple as making your analyses' syntax (i.e., code) publicly available (e.g., in a linked footnote). When possible, making your data available along with your code lets others verify both your process and your results. In addition to those resources, I've attempted to explain a few key terms below (some used in this blog and some not). In the open-source spirit, please share any other RR terms you'd like discussed. As my work is iterative and undoubtedly flawed, please also let me know if I've gone astray with my explanations!
Key RR Terms:
Reproducible Research - research that is conducted with systematic procedures and documentation, such that it can be reproduced either by the original researcher or by an independent party, for the purposes of verifying the study's results and/or conclusions. According to CRAN, the goal of reproducible research "is to tie specific instructions to data analysis and experimental data so that scholarship can be recreated, better understood and verified."
Minimum-Threshold Journals - journals that seek to publish as much quality science as possible, regardless of "statistical significance." These journals' inclusion criteria focus on a study's methodological rigor rather than on whether its results are statistically significant. Despite their inclusive intentions, up to 50% of submitted articles are rejected because of poor methodological rigor, so perhaps their threshold is more accurately characterized as "re-calibrated" rather than "minimum." PLoS ONE is a prominent example of a minimum-threshold journal.
Literate Programming - a method of programming (often statistical analysis) that combines typical programming code, graphics, and natural language into a document that is more easily understood by individuals other than the programmer. The product of literate programming is often an HTML page that presents an analytic process through a combination of natural-language narrative, programming code snippets, and tabular or graphical output of analysis results. Well-produced literate programming allows readers to completely reproduce an analysis as they read along, using embedded links to download the data and code snippets that produce the results presented in the document.
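To make this concrete, here is a minimal sketch of what a literate-programming source file might look like, using the R Markdown format that knitr processes. Everything here (the title, the file name measurements.csv, and the column name value) is hypothetical and for illustration only; the point is that narrative, code, and output live in one source file.

````markdown
---
title: "A Minimal Literate Analysis"
output: html_document
---

We summarize a (hypothetical) set of measurements. Every number and
figure in the rendered report is produced by the code shown here.

```{r load-data}
# 'measurements.csv' is a placeholder file name for illustration
dat <- read.csv("measurements.csv")
summary(dat$value)
```

```{r histogram, fig.cap="Distribution of measured values"}
hist(dat$value, main = "", xlab = "Value")
```
````

When this file is rendered (e.g., via knitr), the prose, the code chunks, and their computed output are woven together into a single HTML report, so a reader can see exactly how each result was produced.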
References:

CRAN Task View: Reproducible Research. CRAN R-Project, 2014. Web. 20 Jan. 2014. http://cran.r-project.org/web/views/ReproducibleResearch.html
Knuth, Donald E. "Literate programming." The Computer Journal, 27.2 (1984): 97-111.
Trouble at the lab. The Economist, 2013. Web. 19 Oct. 2013. http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble
Xie, Yihui. knitr: A general-purpose package for dynamic report generation in R. R package version 1.5, 2013. http://cran.r-project.org/web/packages/knitr/index.html
Reproducible Research Resources & Links
ReproducibleResearch.net - a repository of RR resources and links. See more at: http://reproducibleresearch.net/index.php/Main_Page
The Comprehensive R Archive Network (CRAN) RR Page - resources for RR with R, which is an open-source statistical programming software platform. See more at: http://cran.r-project.org/web/views/ReproducibleResearch.html
Johns Hopkins Free Online RR Course (with video) - Learn the concepts and tools behind reporting modern data analyses in a reproducible manner. This is the fifth course in the Johns Hopkins Data Science Specialization. See more at: https://www.coursera.org/course/repdata
Johns Hopkins Free Online Courses (with video) on Data Science - A set of 9 free online courses in data science. Topics include: 1) The Data Scientist’s Toolbox - Get yourself set up, 2) R programming - Learn to code, 3) Getting and Cleaning Data - You need data. Get some. 4) Exploratory Data Analysis - What’s that in my data? 5) Reproducible Research - Did you do what you think you did? 6) Statistical Inference - You don’t have infinite money. Try sampling. 7) Regression Models - The duct tape of data science. 8) Practical Machine Learning - Predict the future with data. Easy. 9) Developing Data Products - There better be an app for that data.
See more at: http://jhudatascience.org
knitr: Elegant, flexible and fast dynamic report generation with R - The knitr package was designed to be a transparent engine for dynamic report generation with R, solve some long-standing problems in Sweave, and combine features in other add-on packages into one package (knitr ≈ Sweave + cacheSweave + pgfSweave + weaver + animation::saveLatex + R2HTML::RweaveHTML + highlight::HighlightWeaveLatex + 0.2 * brew + 0.1 * SweaveListingUtils + more). See more at: http://yihui.name/knitr/
The Sweave Homepage - Sweave is a tool that allows you to embed the R code for complete data analyses in LaTeX documents. The purpose is to create dynamic reports, which can be updated automatically if the data or analysis change. Instead of inserting a prefabricated graph or table into the report, the master document contains the R code necessary to obtain it. When run through R, all data analysis output (tables, graphs, etc.) is created on the fly and inserted into a final LaTeX document. The report can be automatically updated if the data or analysis change, which allows for truly reproducible research. See more at: http://www.stat.uni-muenchen.de/~leisch/Sweave/
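For readers who have not seen a Sweave source file, here is a minimal sketch of one. The file name results.csv and the column name value are hypothetical placeholders; what matters is the <<...>>= ... @ chunk syntax, which embeds R code directly in the LaTeX source so the reported number is recomputed from the data every time the document is built.

```latex
\documentclass{article}
\begin{document}

\section*{A Minimal Sweave Report}

% 'results.csv' is a placeholder data file for illustration.
% The mean below is computed from the data at build time,
% not typed in by hand.
The mean of the measured values is:

<<summary, echo=TRUE>>=
dat <- read.csv("results.csv")
mean(dat$value)
@

\end{document}
```

Running this file through Sweave in R produces a plain .tex file with the code and its output inserted, which is then compiled with LaTeX as usual.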
Science Exchange - a marketplace for scientific collaboration, where researchers can order experiments from the world's best labs and/or have their own research replicated. See more at: https://www.scienceexchange.com