Data repositories are expanding their role to ensure quality of reproducible research. http://t.co/A8T7a6p1eU — LSEImpactBlog (@LSEImpactBlog) August 12, 2013
A well-curated data repository is more than a place to put data. Drawing on the experience of the ISPS Data Archive, Limor Peer shows that the research community has much to gain from a repository that reviews data to ensure their quality and reproducibility. Stewardship of data in this context may be more labour-intensive, but it ultimately delivers better quality, better science, and better service.
Who is responsible for the quality of data deposited in repositories? And what is quality data, anyway? These questions were on my mind as I prepared to present a poster at the Open Repositories 2013 conference in Charlottetown, PEI earlier this month. The annual conference brings the digital repositories community together with stakeholders such as researchers, librarians, and publishers to address issues pertaining to “the entire lifecycle of information.” This year’s conference theme, “Use, Reuse, Reproduce,” could not have been more relevant to the ISPS Data Archive. Two plenary sessions bookended the conference, both addressing the credibility crisis in science. In the opening session, Victoria Stodden set the stage with her talk on the central role of algorithms and code in the reproducibility and credibility of science. In the closing session, Jean-Claude Guédon made a compelling case that open repositories are vital to restoring quality in science.