Using Docker Linux containers (containers, images, machine, compose, swarm, weave)

Docker/docker@wiki is a platform for developers and sysadmins to develop, ship, and run applications.

It uses resource isolation features of the Linux kernel such as cgroups and kernel namespaces to allow independent “containers” to run within a single Linux instance, avoiding the overhead of starting virtual machines. It includes the libcontainer library as its own way to directly use virtualization facilities provided by the Linux kernel, in addition to using abstracted virtualization interfaces via libvirt, LXC (Linux Containers) and systemd-nspawn.
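This isolation is visible from the host; a quick sketch (the container name ns-demo is arbitrary):

$ docker run -d --name ns-demo ubuntu sleep 1000
$ docker exec ns-demo ps aux                                         # only the container's own processes
$ sudo ls -l /proc/$(docker inspect -f '{{.State.Pid}}' ns-demo)/ns  # its namespaces, seen from the host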

## install (http://docs.docker.com/installation/) either from repos
$ sudo yum install docker        # el7
$ sudo yum install docker-io     # el6, via EPEL
$ sudo apt-get install docker.io # debian/ubuntu
# or binary
$ wget https://get.docker.com/builds/Linux/x86_64/docker-latest -O docker
$ chmod +x docker ; sudo ./docker -d &

$ docker version
Client version: 1.5.0
...
  1. Dockerizing
  2. Containers
  3. Images
  4. Linking/Networking
  5. Volumes
  6. Machine
  7. Compose
  8. Swarm

Dockerizing

Docker allows you to run applications inside containers. Running an application inside a container takes a single command: docker run.

$ sudo docker run ubuntu /bin/echo 'Hello world'
Hello world

# interactive container
$ sudo docker run -t -i ubuntu /bin/bash
root@af8bae53bdd3:/#

from Dockerizing Application

Containers

# list containers
'-a,--all=false' show all containers, including stopped
$ docker ps -a

# run/create container from image
'-i,--interactive=false' keep STDIN open even if not attached
'-d,--detach=false' detached mode, run in the background
'-t,--tty=false' allocate a pseudo-TTY
$ docker run -i -t IMAGE CMD

# attach to running container
$ docker ... -d
$ docker attach CONTAINER

# shows container stdout
$ docker logs -f CONTAINER

# stop/start/kill container
$ docker stop/start/kill CONTAINER

# remove containers
$ docker rm CONTAINER
# kill and delete all containers
$ docker kill $(docker ps -a -q) ; docker rm $(docker ps -a -q)

# run command in running container
$ docker exec -d CONTAINER CMD

from Working with Containers

  • Restart/Autostart
# restart
--restart="no|on-failure|always" restart policy to apply when a container exits
$ sudo docker run --restart=always redis

# autostart with systemd
$ cat {/etc,/run,/usr/lib}/systemd/system/redis.service
[Unit]
Description=Redis container
Author=Me
Requires=docker.service
After=docker.service
[Service]
Restart=always
ExecStart=/usr/bin/docker start -a redis_server
ExecStop=/usr/bin/docker stop -t 2 redis_server
[Install]
WantedBy=multi-user.target
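Then reload systemd and enable the unit (this assumes a container named redis_server was already created once, e.g. with 'docker run -d --name redis_server redis'):

$ sudo systemctl daemon-reload
$ sudo systemctl enable redis.service
$ sudo systemctl start redis.service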
  • Resource constraints
# when using systemd, every process will be placed in a cgroups tree, see http://www.freedesktop.org/software/systemd/man/systemd-cgls.html
$ systemd-cgls

# top/meminfo aren't cgroups-aware; use systemd-cgtop instead, see http://www.freedesktop.org/software/systemd/man/systemd-cgtop.html
$ systemd-cgtop
Path Tasks %CPU Memory Input/s Output/s
...

'-c,--cpu-shares=0' CPU shares (relative weight)
'--cpuset=' CPUs in which to allow execution (0-3, 0,1)
$ docker run -it --rm -c 512 stress --cpu 4
$ docker run -it --rm --cpuset=0,1 stress --cpu 2
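The systemd commands below refer to $FULL_CONTAINER_ID; one way to capture it (CONTAINER is a name or short id):

$ FULL_CONTAINER_ID=$(docker inspect -f '{{.Id}}' CONTAINER)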

# change share in running container, see man systemd.resource-control
$ systemctl show docker-$FULL_CONTAINER_ID.scope
$ sudo systemctl set-property docker-$FULL_CONTAINER_ID.scope CPUShares=512
$ ls /sys/fs/cgroup/cpu/system.slice/docker-$FULL_CONTAINER_ID.scope/

'-m,--memory=' memory limit (memory+swap)
$ docker run -it --rm -m 128m fedora bash
$ ls /sys/fs/cgroup/memory/system.slice/docker-$FULL_CONTAINER_ID.scope/

# limiting write speed: 1) get container filesystem id using mount or nsenter, 2) change BlockIOWriteBandwidth
$ mount
CONTAINER_FS_ID=/dev/mapper/docker-253:0-3408580-d2115072c442b0453b3df3b16e8366ac9fd3defd4cecd182317a6f195dab3b88
$ sudo systemctl set-property --runtime docker-$FULL_CONTAINER_ID.scope "BlockIOWriteBandwidth=$CONTAINER_FS_ID 10M"
# use BlockIOReadBandwidth to limit read speed
# use '--storage-opt dm.basesize' to limit the base filesystem size of containers (defaults to 10G)
$ docker -d --storage-opt dm.basesize=5G
$ cat /sys/fs/cgroup/blkio/system.slice/docker-$FULL_CONTAINER_ID.scope/
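To verify the write cap from inside the container, a rough sketch using direct I/O (file path and size are arbitrary):

$ docker exec CONTAINER dd if=/dev/zero of=/tmp/ddtest bs=1M count=100 oflag=direct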

from runtime constraints on cpu and memory and systemd resources

Images

A read-only template from which Docker containers are instantiated. Images can be defined with a Dockerfile, by committing a container, or by downloading from a Docker Hub registry.

# list images
$ docker images
REPOSITORY   TAG     IMAGE ID       CREATED       VIRTUAL SIZE
ubuntu       13.10   5e019ab7bf6d   4 weeks ago   180 MB
...

# search for images
$ docker search centos

# download new image
$ docker pull centos

# remove images
$ docker rmi IMAGE
# delete all ‘untagged/dangling’ images
$ docker rmi $(docker images -q -f dangling=true)

# create new image from a container's changes
$ docker commit CONTAINER NEW-IMAGE-NAME

# tag an image
$ docker tag IMAGE USR/IMAGE:TAG

# push image to hub (needs write permissions)
$ docker push USER/IMAGE:TAG
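Pushing assumes you are logged in to the Hub first:

$ docker login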

from Working with Docker Images

# building an image from a Dockerfile
$ cat Dockerfile
FROM ubuntu:14.04
MAINTAINER Kate Smith <ksmith@example.com>
RUN apt-get update && apt-get install -y ruby ruby-dev
RUN gem install sinatra
$ docker build -t USER/IMAGE:TAG .
# create a full image using tar
$ sudo debootstrap raring raring > /dev/null
$ sudo tar -C raring -c . | sudo docker import - raring
$ sudo docker run raring cat /etc/lsb-release

# creating a simple base image using scratch
$ tar cv --files-from /dev/null | docker import - scratch
or
$ cat Dockerfile
FROM scratch
...
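A minimal sketch of such a Dockerfile, assuming 'hello' is a statically linked binary in the build context:

$ cat Dockerfile
FROM scratch
COPY hello /
CMD ["/hello"]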

Linking/Networking

'-p,--publish=[]' publish a container's port to the host

$ sudo docker run -d -p 5000:5000 IMAGE COMMAND
# bind to ip address
... -p 127.0.0.1:5000:5000
# dynamic port
... -p 127.0.0.1::5000
# also udp
... -p 127.0.0.1:5000:5000/udp
'--link=[]' add link to another container in the form of 'name:alias'

$ sudo docker run -d --name db training/postgres
$ sudo docker run --rm --name web --link db:db training/webapp env
DB_NAME=/web/db
DB_PORT=tcp://172.17.0.5:5432
DB_PORT_5432_TCP=tcp://172.17.0.5:5432
DB_PORT_5432_TCP_PROTO=tcp
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
# use env vars to configure app to connect to db container, connection will be secure and private; only the linked web container will be able to talk to the db container
# also adds a host entry for the source container to the /etc/hosts file. Here's an entry for the web container
$ sudo docker run -t -i --rm --link db:db training/webapp cat /etc/hosts
172.17.0.7 aed84ee21bde
. . .
172.17.0.5 db
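The injected variables are usable straight from a shell too; a tiny sketch (assumes the image ships bash):

$ sudo docker run --rm --link db:db training/webapp bash -c 'echo "db is at $DB_PORT_5432_TCP_ADDR:$DB_PORT_5432_TCP_PORT"'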
  • Networking
    When docker starts it creates a bridge docker0 network virtual interface with a random private address, and automatically forwards packets between any other network attached to it. This lets containers communicate with each other and with the host. Every time Docker creates a container, it creates a pair of “peer” interfaces that are like opposite ends of a pipe — a packet sent on one will be received on the other.
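You can see the bridge and the veth peer interfaces from the host (brctl comes from the bridge-utils package):

$ ip addr show docker0
$ brctl show docker0
$ ip link | grep veth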
## communicate between container and world:
#1 enable IP forward: start docker server with '--ip-forward=true' (this adds 'echo 1 > /proc/sys/net/ipv4/ip_forward')
#2 iptables must allow: start docker server with '--iptables=true' (default) to append forwarding rules to DOCKER filter chain

## communication between containers
#1 by default docker will attach all containers to 'docker0' bridge interface
#2 iptables must allow: start docker server with '--iptables=true --icc=true' (default) to add rule to FORWARD chain with blanket ACCEPT policy

# when '--icc=false' is set, docker adds a DROP rule, and you must use '--link=CONTAINER_NAME_or_ID:ALIAS' to communicate (you will then see ACCEPT rules overriding the DROP policy)
$ sudo iptables -L -n
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
DROP       all  --  0.0.0.0/0            0.0.0.0/0
Chain DOCKER (1 references)
target     prot opt source               destination
ACCEPT     tcp  --  172.17.0.2           172.17.0.3           tcp spt:80
ACCEPT     tcp  --  172.17.0.3           172.17.0.2           tcp dpt:80

## binding container ports to the host
# by default the outside world cannot connect to containers; either use
'-P,--publish-all=false' publish all EXPOSEd ports to random host ports, or
'-p,--publish=SPEC' specify explicitly which ports to publish, e.g. '-p 80:80'
$ iptables -t nat -L -n
Chain DOCKER (2 references)
target     prot opt source               destination
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:172.17.0.2:80
  • Weave
    Weave creates a virtual network that connects Docker containers deployed across multiple hosts.
## install
$ curl -L https://github.com/zettio/weave/releases/download/latest_release/weave > /usr/local/bin/weave
$ chmod a+x /usr/local/bin/weave

## launch weave router on each host
# on the first host
$(hostA) sudo weave launch
# on other hosts
$(hostB) sudo weave launch <first-host-IP-address>
# check status
$ sudo weave status

## interconnect docker containers across multiple hosts
# e.g.: create ubuntu container and attach to 10.0.0.0/24 with 10.0.0.1
$(hostA) sudo weave run 10.0.0.1/24 -t -i ubuntu
# create another container on same network with 10.0.0.2
$(hostB) sudo weave run 10.0.0.2/24 -t -i ubuntu
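To verify, attach to the container on hostA (weave run prints the container id) and ping its peer:

$(hostA) sudo docker attach <container-id>
root@container:/# ping -c 2 10.0.0.2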

## create multiple virtual networks
# run a container on one network, then attach it to a second one
$ sudo weave run 10.0.0.2/24 -t -i ubuntu
$ sudo weave attach 10.10.0.2/24 <container-id>

## integrate weave networks with host network
# assign 10.0.0.100 to hostA so it can connect to the 10.0.0.0/24 network
$(hostA) sudo weave expose 10.0.0.100/24

from weave@xmodulo

Volumes

  • Using data volume
A specially-designated directory within one or more containers that bypasses the Union File System. Volumes can be shared and reused between containers; changes to them are not included when you update an image, and they persist until no containers use them.
'-v,--volume=[]' bind mount a volume from the host '-v /host:/container', from Docker '-v /container'

$ sudo docker run -d -P --name web -v /webapp IMAGE COMMAND
# mount a directory from your host into a container
... -v /src/webapp:/opt/webapp
# mount as read-only
... -v /src/webapp:/opt/webapp:ro
# mount a host file as a data volume
... -v $HOME/.bash_history:/.bash_history
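To find where a volume lives on the host, inspect the container (the .Volumes field is the Docker 1.5-era layout):

$ docker inspect -f '{{ .Volumes }}' web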
  • Using named data volume container
    If you want to share between containers, or want to use from non-persistent containers, it’s best to create a named Data Volume Container, and then to mount the data from it.
'--volumes-from=[]' Mount volumes from the specified container(s)

# create named container with volume to share
$ sudo docker run -d -v /dbdata --name dbdata training/postgres

# use '--volumes-from' to mount the volume in another container
$ sudo docker run -d --volumes-from dbdata --name db1 training/postgres
# chain by mounting volumes from another container
$ sudo docker run -d --name db3 --volumes-from db1 training/postgres
$ sudo docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata

# create new container and restore/untar
$ sudo docker run -v /dbdata --name dbdata2 ubuntu /bin/bash
$ sudo docker run --volumes-from dbdata2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar
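To check the restore, list the volume from any container that mounts it:

$ sudo docker run --rm --volumes-from dbdata2 busybox ls /dbdata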

Machine

Machine makes it really easy to create Docker hosts on your computer, on cloud providers and inside your own data center. It creates servers, installs Docker on them, then configures the Docker client to talk to them.

  • With a local VM (virtualbox)
## install
$ curl -L https://github.com/docker/machine/releases/download/v0.1.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine
$ chmod +x /usr/local/bin/docker-machine

## creates a VM in virtualbox running boot2docker with docker daemon
$ docker-machine create --driver virtualbox dev
$ docker-machine env dev
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/home/ehazlett/.docker/machines/.client
export DOCKER_HOST=tcp://192.168.99.100:2376
$ docker-machine ls
NAME   ACTIVE   DRIVER       STATE     URL
dev    *        virtualbox   Running   tcp://192.168.99.100:2376

## run docker commands as usual
$ $(docker-machine env dev)
$ docker run busybox echo hello world

## stop/start machine
$ docker-machine stop|start dev
  • With a cloud provider (digitalocean, aws, gce, azure)
## go to digitalocean's admin, generate a new write token and grab hex string
$ docker-machine create --driver digitalocean --digitalocean-access-token 0ab77166d407f479c6701652cee3a46830fef88b8199722b87821621736ab2d4 staging
$ docker-machine active dev
$ docker-machine ls
NAME      ACTIVE   DRIVER         STATE     URL
dev                virtualbox     Running   tcp://192.168.99.103:2376
staging   *        digitalocean   Running   tcp://104.236.50.118:2376
  • Add host without driver. This adds an alias for an existing host, so you don't have to type out the URL every time you run a Docker command
$ docker-machine create --url=tcp://50.134.234.20:2376 custombox
$ docker-machine ls
NAME        ACTIVE   DRIVER   STATE     URL
custombox   *        none     Running   tcp://50.134.234.20:2376
  • With Docker Swarm. Used to provision Swarm clusters
## generate swarm token
$ docker-machine create -d virtualbox local
$ $(docker-machine env local)
$ docker run swarm create
1257e0f0bbb499b5cd04b4c9bdb2dab3

# create swarm master
$ docker-machine create -d virtualbox --swarm --swarm-master \
    --swarm-discovery token://<TOKEN> swarm-master

# create more swarm nodes
$ docker-machine create -d virtualbox --swarm \
    --swarm-discovery token://<TOKEN> swarm-node-00

# to connect to the swarm master
$ $(docker-machine env --swarm swarm-master)

# to query
$ docker info
Containers: 1
Nodes: 1
swarm-master: 192.168.99.100:2376
  Containers: 2

Compose

Compose is a tool for defining and running complex applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up with a single command that does everything needed to get it running. It is the replacement for Fig.

## install
$ curl -L https://github.com/docker/compose/releases/download/1.1.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose

## create application
$ cat app.py
from flask import Flask
from redis import Redis
import os

app = Flask(__name__)
redis = Redis(host='redis', port=6379)

@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello World! I have been seen %s times.' % redis.get('hits')

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
$ cat requirements.txt
flask
redis

## create image with Dockerfile
FROM python:2.7
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt

## define services in docker-compose.yml
web:
  build: .
  command: python app.py
  ports:    # forward the exposed port on the container to a port on the host
    - "5000:5000"
  volumes:  # mount current directory inside the container to avoid rebuilding the image
    - .:/code
  links:    # connect up the Redis service
    - redis
redis:
  image: redis

## build and run (in daemon background)
$ docker-compose up -d
Starting composetest_redis_1...
Starting composetest_web_1...
$ docker-compose ps
...

$ docker-compose stop
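A few other day-to-day commands from the same CLI:

$ docker-compose logs          # aggregate output of the services
$ docker-compose run web env   # run a one-off command against a service
$ docker-compose rm            # remove stopped service containers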

Swarm

Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual host. It ships with a simple scheduling backend out of the box, and as initial development settles, an API will develop to enable pluggable backends.

Each swarm node runs a swarm node agent, which registers the referenced Docker daemon, then monitors it, updating the discovery backend with its status.

# create a cluster
$ docker run --rm swarm create
6856663cdefdec325839a4b7e1de38e8 # <- this is your unique <cluster_id>

# on each of your nodes, start the swarm agent
# <node_ip> doesn't have to be public (eg. 192.168.0.X),
# as long as the swarm manager can access it.
$ docker run -d swarm join --addr=<node_ip:2375> token://<cluster_id>

# start the manager on any machine or your laptop
$ docker run -d -p <swarm_port>:2375 swarm manage token://<cluster_id>

# use the regular docker cli
$ docker -H tcp://<swarm_ip:swarm_port> info
$ docker -H tcp://<swarm_ip:swarm_port> run ...
$ docker -H tcp://<swarm_ip:swarm_port> ps
$ docker -H tcp://<swarm_ip:swarm_port> logs ...
...

# list nodes in your cluster
$ docker run --rm swarm list token://<cluster_id>

