Repeatability in Practice (2018 version)

Note: These are the notes for a talk I gave in the Crutchfield Lab at UC Davis, by kind invitation from Ryan G. James. You can view the original hackmd in view mode or slide mode if you are interested!

The basic idea

Goals of repeatability

In order of importance,

- Make it as easy as possible for us to rerun everything from any point in the workflow.
- Make it as easy as possible to try new things.
- Make it easy to add / adjust analyses in response to reviewers.
- Make it 100% repeatable.

Some diagrams

Tools:

[diagram: the tools]

Fitting the tools together:

[diagram: fitting the tools together]
What many of us have converged on in the DIB Lab

git/GitHub + snakemake + (Jupyter Notebook or R) + conda

With this collection of tools, we can generally achieve full repeatability.
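
To make that concrete, here is a minimal conda environment file for this kind of stack. This is an illustrative sketch, not one of our actual files, and the pinned versions are assumptions:

```
# environment.yml -- hypothetical example; versions are illustrative
name: paper-pipeline
channels:
  - conda-forge
  - bioconda
  - defaults
dependencies:
  - python=3.6
  - snakemake
  - jupyter
  - r-base
```

Given a file like this, `conda env create -f environment.yml` rebuilds the same software environment on a Mac laptop, an HPC node, or a cloud instance.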

Some points:

- We use a diverse array of computing environments: Mac vs Linux, HPC vs cloud (AWS/GCP/Jetstream) vs docker vs binder (dynamically configured docker containers). This approach works across all of them more or less seamlessly.
- Software installation was THE last remaining major pain point for us until bioconda came along. This solves almost all of the problems.
- And, before that, my workflow tool was Makefiles, which no one but me liked - until snakemake, which we've more or less converged on.
- This stack of software is something that (with minor modifications and sometimes major extensions) most data scientists I talk to have converged on.
- The Carpentries teaches most of this (not snakemake or conda, but conda is an easy extension).
- We ran a whole ~100 person workshop using (bio)conda with only a few failures!

The Journey

Overall strategy

For each paper we write, we try to provide a repeatable pipeline on GitHub. Then, for publication, we link the GitHub repo to Zenodo and cut a new version, which gives us a DOI to cite for exactly the version of the code used.

This is the strategy we used for the MMETSP paper, "Re-assembly, quality evaluation, and annotation of 678 microbial eukaryotic reference transcriptomes", which is now accepted at GigaScience (yay!!).

Large raw data files are hosted on specific archival services (e.g. the Sequence Read Archive). Zenodo or figshare or the Open Science Framework serve as nice places to dump intermediate files.

For figure generation, we tend to use Jupyter Notebooks and/or R scripts that are in the GitHub repo. Where possible we put the summary files used to build the figures in the GitHub repo; if they're bigger than 100 MB, we put them on OSF; if they're bigger than 5 GB, we start looking at Zenodo.

Note: to the best of our knowledge, no one has ever bothered to reproduce our papers from scratch. But we feel good about it anyway :).

The Bad Old Days

For Brown and Callan, 2004, I used a pile of scripts - shell scripts to execute Python scripts that did the heavy lifting. It worked well for the time, but was virtually impossible for anyone else to reproduce unless they were able to compile a specific Python package with C extensions (which at the time was extra hard).

Cool beans

Of note, this approach let me determine that a bug in the code didn't significantly affect the published results (I found the bug much later, when I was reusing the pipeline elsewhere).

The Days of Make

For many years, I switched to using make. You can see an old-style repo here, 2016-metagenome-assembly-eval (see preprint). The repo contains the LaTeX paper source, some notebooks for graphing, and a pipeline directory with a big-arse Makefile in it.

I'm actually updating this repo to resubmit in response to reviewers, and we're now using conda to manage software installs.

The brief middle: Docker! Docker! Docker!

When docker came out, I tried it out for a bunch of blog posts - see week of khmer. This ended up being used in a preprint, Crossing the streams - see the Docker instructions here.

I wasn't happy with this because it turns out that it's virtually impossible to get docker installed on clusters. But it sure did make it easy to run the software across a variety of platforms.

The last year or so

Now we've converged on an approach that uses conda and snakemake; see the 2018 spacegraphcats paper.
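
To give a flavor of what this looks like, here is a minimal Snakefile sketch (file names and tool commands are made up; this is not the actual spacegraphcats workflow). Each rule can declare its own conda environment, which snakemake builds on demand when run with `--use-conda`:

```
# Snakefile -- hypothetical sketch of a conda + snakemake pipeline

rule all:
    input: "results/summary.txt"

rule summarize:
    input: "data/raw.txt"
    output: "results/summary.txt"
    conda: "envs/summarize.yml"   # per-rule conda environment file
    shell: "summarize-data {input} > {output}"
```

Running `snakemake --use-conda` then creates the per-rule environments and executes only the rules needed to produce results/summary.txt.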

Other things to mention

The importance of libraries

We tend to encapsulate our reusable functionality in Python libraries that are well tested, versioned, etc. While this has some flaws, it means that the bits of our code that get reused in multiple (often shared) projects in the lab get progressively better tested. It also makes them available to others. The skillset required is pretty advanced, though. (Examples: khmer, screed, sourmash.)
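
As an illustration of what "well tested" means in practice, here is a made-up example (not actual khmer/screed/sourmash code) of a library function shipping alongside a pytest-style test:

```python
# kmers.py -- a hypothetical library function with its test
# (run the test with: pytest kmers.py)

def count_kmers(seq, k):
    """Return a dict mapping each k-mer in seq to its count."""
    counts = {}
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        counts[kmer] = counts.get(kmer, 0) + 1
    return counts


def test_count_kmers():
    # pytest collects test_* functions automatically
    assert count_kmers("ATAT", 2) == {"AT": 2, "TA": 1}
```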

On the flip side, we do not generally apply any kind of formal testing to our per-project scripts. At best, we try to identify small data sets that we can use to run the whole paper-specific pipeline from start to finish.

Binder

With relatively little effort, you can make your figure-generating Jupyter Notebooks / RMarkdown executable by anyone with a single click through mybinder.org - see my example RStudio repo, and try clicking "launch binder".
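
The setup mostly amounts to keeping your dependency files in the repo, which mybinder.org's build system reads. A hypothetical minimal layout (the repo name and the USER/REPO placeholders in the badge URL are yours to fill in):

```
my-paper-repo/
├── environment.yml   <- conda dependencies (as in the example above)
└── figures.ipynb     <- the figure-generating notebook

Badge for the README (replace USER/REPO with your own):
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/USER/REPO/master)
```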

Lessons from Oslo, 2016

See this repo for some hands-on lessons in Jupyter Notebook, make, and git.

Workflows-as-applications

In this blog post, I talk about how we're using pydoit and snakemake to write command line applications. This is a pretty nifty way to tie a pile of code together into something that looks like a pipeline but can take advantage of things like dependency graphs, cluster execution, and Kubernetes; there's a minimal sketch of the idea after the links below.

Here are some links:

- dammit, a transcriptome annotator based around pydoit.
- dahak metagenomics pipeline, based around snakemake.
- spacegraphcats, a fairly simple command line wrapper around snakemake.
- eel pond, an emerging SIMPLE approach to dynamically building up Snakemake dependencies in response to user configuration.
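
Here's a minimal sketch of the wrapping idea, using snakemake's Python API (the script and file names are hypothetical, and this is not the actual code of any of the tools above):

```python
#! /usr/bin/env python
"""run-pipeline: a hypothetical CLI wrapper around a snakemake workflow."""
import argparse
import sys

import snakemake


def main():
    p = argparse.ArgumentParser(description="run the paper pipeline")
    p.add_argument("target", help="workflow target to build")
    args = p.parse_args()

    # Hand off to snakemake; it resolves the dependency graph and runs
    # only the rules needed to build the requested target.
    ok = snakemake.snakemake("Snakefile", targets=[args.target])
    return 0 if ok else 1


if __name__ == "__main__":
    sys.exit(main())
```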

The downside to this approach is that we are now writing papers that have workflows in them that call out to workflows that use wrapped workflows... workflows all the way down!!!

Why snakemake?

Pros:

- Python-based, so we don't have to use some hacky weird configuration language; everything can be munged in Python. (This is also a downside - things can become arbitrarily complicated.)
- surprisingly simple and clean
- surprisingly powerful
- can grow the files incrementally as your needs grow
- ties seamlessly into workflow engines (clusters, kubernetes), conda, docker & singularity. Like, it actually works.

Cons:

- a confusing mixture of generic rules and specific rules. For example, this is a specific rule for a particular output (configured by the user for each workflow and then parameterized from that configuration), while this is a general rule that snakemake can parameterize for you. (There's a sketch of the distinction below.)
- like anything else, it can become a gigantic mess. We haven't yet worked out ways of keeping it clean.
- fast-evolving ecosystem.
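
To illustrate the specific-vs-generic distinction, here is a made-up Snakefile fragment (hypothetical file names; not the rules linked above):

```
# A specific rule: hard-coded inputs and output, written for one paper.
rule make_figure_1:
    input: "assembly/sample1.fa", "assembly/sample2.fa"
    output: "figures/figure1.pdf"
    shell: "python plot-assemblies.py {input} -o {output}"

# A generic rule: snakemake fills in the {sample} wildcard for
# whatever matching targets are requested (e.g. assembly/sample3.fa).
rule assemble:
    input: "trimmed/{sample}.fq"
    output: "assembly/{sample}.fa"
    shell: "assemble {input} > {output}"
```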

Using argparse in Python

It's a small thing, but I strongly recommend using proper argument parsing - a main() function, with argparse. See the top of this script for an example.

It makes setting default parameters and adding arguments really easy, and gives good error messages too.
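
A minimal sketch of that pattern (a hypothetical script, not the one linked above):

```python
#! /usr/bin/env python
"""summarize.py -- hypothetical example of the main() + argparse pattern."""
import argparse
import sys


def main():
    parser = argparse.ArgumentParser(description="summarize a data file")
    parser.add_argument("input_file", help="file to summarize")
    parser.add_argument("-k", "--ksize", type=int, default=31,
                        help="k-mer size (default: 31)")
    args = parser.parse_args()

    # ...the actual work would go here...
    print("processing", args.input_file, "with k =", args.ksize)
    return 0


if __name__ == "__main__":
    sys.exit(main())
```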
