
Using Docker for Dev Environment Setup Automation

[Category: Databases | Published: 2017]

In this article, I show how Docker can help you set up a runtime environment, provision a database with a predefined data set, tie it all together, and run it in isolation on your machine.

Let’s start with the goals:

I want an isolated Java SDK, Scala SDK, and SBT (build tool). I want to still be able to edit and rebuild my project code easily from my IDE. I also need a MongoDB instance running locally. Last but not least, I want a minimal data set in my MongoDB out of the box. Ah, right: all of the above must be downloaded and configured in a single command.

Sounds impressive, doesn’t it? Let’s dive in.

Here is a high-level overview of the containers we need to run to achieve all of these goals:


[Figure: high-level overview of the containers]

The first two goals are covered by the Dev Container, so let’s start with that one.

Our minimal project structure should look like this:

```
/myfancyproject
    /project
    /module1
    /module2
    .dockerignore
    build.sbt
    Dockerfile
```

The structure will become more sophisticated as the article progresses, but for now it’s sufficient. The Dockerfile is the place where the Dev Container’s image is described (if you aren’t yet familiar with images and containers, you should read this first). Let’s look inside:

```dockerfile
FROM java:openjdk-8u72-jdk

# install SBT
RUN apt-get update \
 && apt-get install -y apt-transport-https \
 && echo "deb https://dl.bintray.com/sbt/debian /" | tee -a /etc/apt/sources.list.d/sbt.list \
 && apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 642AC823 \
 && apt-get update \
 && apt-get -y install sbt

ENV SBT_OPTS="-Xmx1200M -Xss512K -XX:MaxMetaspaceSize=512M -XX:MetaspaceSize=300M"

# make SBT dependencies an image layer: no need to update them on every container rebuild
COPY project/build.properties /setup/project/build.properties
RUN cd /setup \
 && sbt test \
 && cd / \
 && rm -r /setup

EXPOSE 8080
VOLUME /app
WORKDIR /app
CMD ["sbt"]
```

The official OpenJDK 1.8 image is specified as the base image in the first line of the Dockerfile. Note: official repositories are available for many other popular platforms. They are easily recognizable by their naming convention: the name never contains slashes, i.e., it is always just a repository name without an author name. More about official repos here.

Install SBT and set the SBT_OPTS environment variable.

An SBT-specific trick (skip it safely if you don’t use SBT) to speed up container start times. As you may know, nothing is eternal in a container: it can be (and normally is) destroyed every now and then. By making some required dependencies part of the image, we download them just once and in this way significantly speed up container build times.
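The same layer-caching idea applies to other build tools. For Maven, a sketch of the equivalent Dockerfile fragment might look like this (assuming a base image with Maven installed; the paths are illustrative, not from the original article):

```dockerfile
# cache Maven dependencies as an image layer:
# copy only the build descriptor, then resolve dependencies once
COPY pom.xml /setup/pom.xml
RUN cd /setup \
 && mvn -B dependency:go-offline \
 && cd / \
 && rm -r /setup
```

Because only `pom.xml` is copied before dependency resolution, source-code changes don’t invalidate the cached dependency layer.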

Declare port 8080 as the one the application inside the container listens on. We’ll refer to it later to access the application.

Declare a new volume under the /app folder and run subsequent commands from there. We will use it in a moment to make all project files accessible from both worlds: the host and the container.

The default command runs SBT’s interactive mode on container startup. For build tools without an interactive mode (like Maven), this can be CMD ["/bin/bash"].

Now, we can already test the image:

```shell
docker build -t myfancyimage .
docker run --rm -it -v $(pwd):/app -p 127.0.0.1:9090:8080 myfancyimage
```

The first command builds an image from our Dockerfile and names it myfancyimage. The second command creates and starts a container from the image. It binds the current folder to the container’s volume ( $(pwd):/app ) and maps host port 9090 to the container’s exposed port 8080.

Ok, now it’s time to bring in some data. We start by adding the Mongo Engine container, and later supply it with a sample data snapshot. As we’re about to run multiple containers linked together, it’s convenient to describe how to run them in a docker-compose configuration file. Let’s add docker-compose.yml to the project’s root with the following content:

```yaml
version: '2'
services:
  dev-container:
    build:
      context: .
      dockerfile: Dockerfile
    image: myfancyimage
    ports:
      - "127.0.0.1:9090:8080"
    volumes:
      - .:/app
    links:
      - mongo-engine
  mongo-engine:
    image: mongo:3.2
    command: --directoryperdb
```

The commands for building and running myfancyimage are translated into the dev-container service definition.

The mongo-engine container runs MongoDB 3.2 from the official Docker Hub repository.

Linking mongo-engine to dev-container means mongo-engine starts prior to dev-container and the two share a network. MongoDB is available to dev-container at the URL mongodb://mongo-engine/ .
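If the application reads its MongoDB address from configuration rather than hard-coding it, the compose file can pass it explicitly. A sketch (the MONGO_URI variable name is an assumption, not from the article):

```yaml
  dev-container:
    environment:
      - MONGO_URI=mongodb://mongo-engine/
```

This keeps the container hostname out of the application code, so the same build runs unchanged outside Docker with a different URI.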

Let’s try it:

```shell
docker-compose run --service-ports dev-container
```

It’s important to add the --service-ports flag to enable the configured port mappings.

All right, here comes the hardest part: sample data distribution. Unfortunately, there’s no suitable mechanism for distributing Docker data volumes; although a few Docker volume managers exist (e.g., Flocker, the Azure Volume driver), these tools serve other goals.

Note: an alternative solution would be to restore data from a DB dump programmatically, or even generate it randomly. But this approach is not generic (it involves specific tools and scripts for each DB) and is generally more complicated.

The data distribution mechanism we’re seeking must support two operations:

1. Replicate a fairly small data set from a remote shared repository to the local environment.
2. Publish a new or modified data set to the remote repository.

One obvious approach is to distribute data via Docker images. In this case, the remote repository is the same place where we store our Docker images; it can be either Docker Hub or a private Docker Registry instance. The solution described below works with both.

Meeting the first requirement is easy: we run a container from a data image, mark the data folder as a volume, and attach that volume (via the --volumes-from argument) to the Mongo Engine container.
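The data image itself can be tiny. A sketch of what it might look like (the dump path and the minimal base image are assumptions, not from the article):

```dockerfile
# hypothetical data image: bake the sample MongoDB files into a layer
FROM busybox
COPY ./dump /data/db
VOLUME /data/db
# the container only needs to exist so its volume can be shared
CMD ["true"]
```

A container started from this image does nothing by itself; it just exposes /data/db so that mongo-engine can mount it with --volumes-from.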

The second requirement is more complicated. After making changes inside the volume, we cannot simply commit them back to the Docker image: a volume is technically not part of the modifiable top layer of a container. In simpler words, the Docker daemon just doesn’t see any changes to commit.

Here’s the trick: If we can read changed data but cannot commit it from the volume then we need to copy it first elsewhere ou
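One way this copy step might be realized (a sketch under assumptions, not the author’s exact commands; the container and image names are illustrative):

```shell
# copy data out of the volume into a container's top layer,
# where `docker commit` can see it

# 1. run a throwaway container that mounts the data volume and copies
#    its contents to a regular (non-volume) folder inside the container
docker run --name data-export --volumes-from mongo-data \
    mongo:3.2 cp -r /data/db /snapshot

# 2. commit that container: /snapshot lives in the top layer,
#    so the copied data is captured in the new image
docker commit data-export myrepo/mongo-sample-data

# 3. publish the refreshed data set to the shared registry
docker push myrepo/mongo-sample-data
```

Consumers then satisfy the first requirement by pulling this image and attaching its data to Mongo Engine via --volumes-from.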
