Using Docker Linux containers (containers, images, machine, compose, swarm, weave)

Docker (docker@wiki) is a platform for developers and sysadmins to develop, ship, and run applications.

It uses resource isolation features of the Linux kernel such as cgroups and kernel namespaces to allow independent “containers” to run within a single Linux instance, avoiding the overhead of starting virtual machines. It includes the libcontainer library as its own way to directly use virtualization facilities provided by the Linux kernel, in addition to using abstracted virtualization interfaces via libvirt, LXC (Linux Containers) and systemd-nspawn.
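A quick way to see these kernel facilities in action: a container is just a process tree on the host, placed into its own namespaces and cgroups. A minimal sketch (assumes a running container named 'web'; 'docker inspect' and /proc are standard):

# find the container's init process as seen from the host
$ PID=$(docker inspect --format '{{.State.Pid}}' web)
# its kernel namespaces (net, mnt, pid, ipc, uts)
$ sudo ls -l /proc/$PID/ns
# the cgroups it was placed in
$ cat /proc/$PID/cgroup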

## install, either from repos
$ sudo yum install docker             # el7
$ sudo yum install docker-io          # el6, using EPEL
$ sudo apt-get install      # debian/ubuntu
# or binary
$ wget -O docker
$ chmod +x docker ; sudo ./docker -d &

$ docker version
Client version: 1.5.0
  1. Dockerizing
  2. Containers
  3. Images
  4. Linking/Networking
  5. Volumes
  6. Machine
  7. Compose
  8. Swarm


Docker allows you to run applications inside containers. Running an application inside a container takes a single command: docker run.

$ sudo docker run ubuntu /bin/echo 'Hello world'
Hello world

# interactive container
$ sudo docker run -t -i ubuntu /bin/bash

from Dockerizing Applications


# list containers
'-a,--all=false' show all containers, including stopped
$ docker ps -a

# run/create container from image
'-i,--interactive=false' keep STDIN open even if not attached
'-d,--detach=false' detached mode, run in the background
'-t,--tty=false' allocate a pseudo-TTY
$ docker run -i -t IMAGE CMD

# attach to a container running in the background
$ docker run -d IMAGE CMD
$ docker attach CONTAINER

# shows container stdout
$ docker logs -f CONTAINER

# stop/start/kill container
$ docker stop/start/kill CONTAINER

# remove containers
$ docker rm CONTAINER
# stop and delete all containers
$ docker kill $(docker ps -a -q) ; docker rm $(docker ps -a -q)

# run command in running container
$ docker exec -d CONTAINER CMD
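'docker inspect' is also worth knowing here: it dumps a container's full configuration as JSON, and '--format' extracts a single field with a Go template (the field shown is just one example):

# show low-level details as JSON
$ docker inspect CONTAINER
# extract one field, e.g. the container's IP address
$ docker inspect --format '{{.NetworkSettings.IPAddress}}' CONTAINER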

from Working with Containers

  • Restart/Autostart
# restart
--restart="no|on-failure|always" restart policy to apply when a container exits
$ sudo docker run --restart=always redis

# autostart with systemd
$ cat {/etc,/run,/usr/lib}/systemd/system/redis.service
[Unit]
Description=Redis container
Requires=docker.service
After=docker.service

[Service]
ExecStart=/usr/bin/docker start -a redis_server
ExecStop=/usr/bin/docker stop -t 2 redis_server
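With that unit file in place the container follows the normal systemd lifecycle (a sketch; unit name as in the example above):

$ sudo systemctl daemon-reload
$ sudo systemctl enable redis.service   # start on boot
$ sudo systemctl start redis.service
$ systemctl status redis.service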
  • Resource constraints
# when using systemd, every process will be placed in a cgroups tree, see
$ systemd-cgls

# top/meminfo aren't cgroups-aware; use systemd-cgtop instead, see
$ systemd-cgtop
Path Tasks %CPU Memory Input/s Output/s

'-c,--cpu-shares=0' CPU shares (relative weight)
'--cpuset=' CPUs in which to allow execution (0-3, 0,1)
$ docker run -it --rm -c 512 stress --cpu 4
$ docker run -it --rm --cpuset=0,1 stress --cpu 2

# change share in running container, see man systemd.resource-control
$ systemctl show docker-$FULL_CONTAINER_ID.scope
$ sudo systemctl set-property docker-$FULL_CONTAINER_ID.scope CPUShares=512
$ ls /sys/fs/cgroup/cpu/system.slice/docker-$FULL_CONTAINER_ID.scope/

'-m,--memory=' memory limit (memory+swap)
$ docker run -it --rm -m 128m fedora bash
$ ls /sys/fs/cgroup/memory/system.slice/docker-$FULL_CONTAINER_ID.scope/

# limiting write speed: 1) get container filesystem id using mount or nsenter, 2) change BlockIOWriteBandwidth
$ mount
$ sudo systemctl set-property --runtime docker-$FULL_CONTAINER_ID.scope "BlockIOWriteBandwidth=$CONTAINER_FS_ID 10M"
# use BlockIOReadBandwidth to limit read speed
# use '--storage-opt' to limit the container base device size (devicemapper); defaults to 10G
$ docker -d --storage-opt dm.basesize=5G
$ ls /sys/fs/cgroup/blkio/system.slice/docker-$FULL_CONTAINER_ID.scope/
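A hedged way to verify the write cap from inside the container: time a direct write with dd (oflag=direct bypasses the page cache, so the blkio limit applies; the container name and file path are just examples):

$ docker exec CONTAINER dd if=/dev/zero of=/tmp/testfile bs=1M count=100 oflag=direct
# reported throughput should level off near the configured 10M/s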

from Runtime constraints on CPU and memory and systemd resource control


A read-only template out of which Docker containers are instantiated. Images can be defined with a Dockerfile, created by committing a container's changes, or downloaded from a Docker Hub registry.

# list images
$ docker images
REPOSITORY   TAG     IMAGE ID       CREATED       VIRTUAL SIZE
ubuntu       13.10   5e019ab7bf6d   4 weeks ago   180 MB

# search for images
$ docker search centos

# download new image
$ docker pull centos

# remove images
$ docker rmi IMAGE
# delete all ‘untagged/dangling’ images
$ docker rmi $(docker images -q -f dangling=true)

# create new image from a container's changes
$ docker commit -m "message" -a "author" CONTAINER USER/IMAGE:TAG
# tag an image
$ docker tag IMAGE USER/IMAGE:TAG

# push image to hub (needs write permissions)
$ docker push USER/IMAGE:TAG

from Working with Docker Images

# building an image from a Dockerfile
$ cat Dockerfile
FROM ubuntu:14.04
MAINTAINER Kate Smith <>
RUN apt-get update && apt-get install -y ruby ruby-dev
RUN gem install sinatra
$ docker build -t USER/IMAGE:TAG .
# create a full image using tar
$ sudo debootstrap raring raring > /dev/null
$ sudo tar -C raring -c . | sudo docker import - raring
$ sudo docker run raring cat /etc/lsb-release

# creating a simple base image using scratch
$ tar cv --files-from /dev/null | docker import - scratch
$ cat Dockerfile
FROM scratch
# add a statically linked binary and run it by default
ADD hello /
CMD ["/hello"]
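Building and running it then works like any other image (a sketch, assuming 'hello' is a statically linked binary present in the build context):

$ docker build -t USER/minimal .
$ docker run --rm USER/minimal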


'-p,--publish=[]' publish a container's port to the host

$ sudo docker run -d -p 5000:5000 IMAGE COMMAND
# bind to a specific host interface address
... -p
# dynamic host port (docker picks a free one)
... -p 5000
# also udp
... -p 5000:5000/udp
'--link=[]' add link to another container in the form of 'name:alias'

$ sudo docker run -d --name db training/postgres
$ sudo docker run --rm --name web --link db:db training/webapp env
# use env vars to configure app to connect to db container, connection will be secure and private; only the linked web container will be able to talk to the db container
# also adds a host entry for the source container to the /etc/hosts file. Here's an entry for the web container
$ sudo docker run -t -i --rm --link db:db training/webapp cat /etc/hosts
<web-ip>      aed84ee21bde
. . .
<db-ip>       db
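The injected environment variables follow a fixed naming scheme derived from the alias and the exposed ports; the env run above prints entries along these lines (addresses elided):

DB_NAME=/web/db
DB_PORT=tcp://<db-ip>:5432
DB_PORT_5432_TCP=tcp://<db-ip>:5432
DB_PORT_5432_TCP_ADDR=<db-ip>
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_PROTO=tcp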
  • Networking
    When docker starts it creates a bridge docker0 network virtual interface with a random private address, and automatically forwards packets between any other network attached to it. This lets containers communicate with each other and with the host. Every time Docker creates a container, it creates a pair of “peer” interfaces that are like opposite ends of a pipe — a packet sent on one will be received on the other.
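You can watch this from the host (brctl comes from the bridge-utils package; each veth* entry is the host end of a container's peer interface):

$ ip addr show docker0
$ brctl show docker0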
## communicate between container and world:
#1 enable IP forward: start docker server with '--ip-forward=true' (equivalent to 'echo 1 > /proc/sys/net/ipv4/ip_forward')
#2 iptables must allow: start docker server with '--iptables=true' (default) to append forwarding rules to DOCKER filter chain

## communication between containers
#1 by default docker will attach all containers to 'docker0' bridge interface
#2 iptables must allow: start docker server with '--iptables=true --icc=true' (default) to add rule to FORWARD chain with blanket ACCEPT policy

# when '--icc=false' is set docker adds a DROP rule and you must use '--link=CONTAINER_NAME_or_ID:ALIAS' to communicate (you will then see an ACCEPT rule overriding the DROP policy)
$ sudo iptables -L -n
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --
DROP       all  --

Chain DOCKER (1 references)
target     prot opt source               destination
ACCEPT     tcp  --  <server-ip>          <client-ip>          tcp spt:80
ACCEPT     tcp  --  <client-ip>          <server-ip>          tcp dpt:80

## binding container ports to the host
# by default the outside world cannot connect to containers; use either
'-P,--publish-all=true' to publish all EXPOSEd ports, or
'-p,--publish=SPEC' to be explicit about which ports to publish, e.g. '-p 80:80'
$ iptables -t nat -L -n
Chain DOCKER (2 references)
target     prot opt source               destination
DNAT       tcp  --              tcp dpt:80 to:<container-ip>:80
  • Weave
    Weave creates a virtual network that connects Docker containers deployed across multiple hosts.
## install
$ curl -L > /usr/local/bin/weave
$ chmod a+x /usr/local/bin/weave

## launch weave router on each host
# on the first host
$(hostA) sudo weave launch
# on other hosts
$(hostB) sudo weave launch <first-host-IP-address>
# check status
$ sudo weave status

## interconnect docker containers across multiple hosts
# e.g.: create an ubuntu container on hostA and attach it to the network with
$(hostA) sudo weave run -t -i ubuntu
# create another container on the same network on hostB with
$(hostB) sudo weave run -t -i ubuntu

## create multiple virtual networks
# start a container on one network, then attach it to another
$ sudo weave run -t -i ubuntu
$ sudo weave attach <container-id>

## integrate weave networks with host network
# give hostA an address on the weave network so it can reach the containers
$(hostA) sudo weave expose

from weave@xmodulo


  • Using data volume
    A specially designated directory within one or more containers that bypasses the Union File System. Volumes can be shared and reused between containers, their changes are not included when you update an image, and they persist until no containers use them.
'-v,--volume=[]' bind mount a volume from the host '-v /host:/container', from Docker '-v /container'

$ sudo docker run -d -P --name web -v /webapp IMAGE COMMAND
# mount a directory from your host into a container
... -v /src/webapp:/opt/webapp
# mount as read-only
... -v /src/webapp:/opt/webapp:ro
# mount a host file as a data volume
... -v $HOME/.bash_history:/.bash_history
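To locate a volume on the host, inspect the container; in this Docker version the host paths appear under 'Volumes' (container name 'web' as above, output is indicative):

$ docker inspect --format '{{.Volumes}}' web
map[/webapp:/var/lib/docker/vfs/dir/<volume-id>]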
  • Using named data volume container
    If you want to share between containers, or want to use from non-persistent containers, it’s best to create a named Data Volume Container, and then to mount the data from it.
'--volumes-from=[]' Mount volumes from the specified container(s)

# create named container with volume to share
$ sudo docker run -d -v /dbdata --name dbdata training/postgres

# use '--volumes-from' to mount the volume in another container
$ sudo docker run -d --volumes-from dbdata --name db1 training/postgres
# chain by mounting volumes from another container
$ sudo docker run -d --name db3 --volumes-from db1 training/postgres
# backup: tar a volume to the current host directory
$ sudo docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata

# create new container and restore/untar
$ sudo docker run -v /dbdata --name dbdata2 ubuntu /bin/bash
$ sudo docker run --volumes-from dbdata2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar
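A quick sanity check that the restore worked, using a throwaway busybox container to list the volume:

$ sudo docker run --rm --volumes-from dbdata2 busybox ls /dbdata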


Machine makes it really easy to create Docker hosts on your computer, on cloud providers and inside your own data center. It creates servers, installs Docker on them, then configures the Docker client to talk to them.

  • With a local VM (virtualbox)
## install
$ curl -L<VERSION>/docker-machine-$(uname -s)-$(uname -m) > /usr/local/bin/docker-machine
$ chmod +x /usr/local/bin/docker-machine

## creates a VM in virtualbox running boot2docker with docker daemon
$ docker-machine create --driver virtualbox dev
$ docker-machine env dev
export DOCKER_CERT_PATH=/home/ehazlett/.docker/machines/.client
export DOCKER_HOST=tcp://<machine-ip>:2376
$ docker-machine ls
NAME   ACTIVE   DRIVER       STATE     URL
dev    *        virtualbox   Running   tcp://<machine-ip>:2376

## run docker commands as usual
$ docker $(docker-machine config dev) run busybox echo hello world

## stop/start machine
$ docker-machine stop/start dev
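A few other handy subcommands (machine name 'dev' as above):

$ docker-machine ip dev    # print the machine's IP address
$ docker-machine ssh dev   # log into the machine over SSH
$ docker-machine rm dev    # destroy the machine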
  • With a cloud provider (digitalocean, aws, gce, azure)
## go to digitalocean's admin, generate a new write token and grab hex string
$ docker-machine create --driver digitalocean --digitalocean-access-token 0ab77166d407f479c6701652cee3a46830fef88b8199722b87821621736ab2d4 staging
$ docker-machine active dev
$ docker-machine ls
NAME      ACTIVE   DRIVER         STATE     URL
dev                virtualbox     Running   tcp://<machine-ip>:2376
staging   *        digitalocean   Running   tcp://<droplet-ip>:2376
  • Add host without driver. Uses an alias for an existing host so you don't have to type out the URL every time you run a Docker command
$ docker-machine create --url=tcp://<ip>:<port> custombox
$ docker-machine ls
NAME        ACTIVE   DRIVER   STATE     URL
custombox   *        none     Running   tcp://<ip>:<port>
  • With Docker Swarm. Used to provision Swarm clusters
## generate swarm token
$ docker-machine create -d virtualbox local
$ eval "$(docker-machine env local)"
$ docker run swarm create

# create swarm master
$ docker-machine create -d virtualbox --swarm --swarm-master \
    --swarm-discovery token://<TOKEN> swarm-master

# create more swarm nodes
$ docker-machine create -d virtualbox --swarm \
    --swarm-discovery token://<TOKEN> swarm-node-00

# to connect to the swarm master
$ eval "$(docker-machine env --swarm swarm-master)"

# to query
$ docker info
Containers: 1
Nodes: 1
  Containers: 2


Compose is a tool for defining and running complex applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up with a single command that does everything needed to get it running. It's the replacement for Fig.

## install
$ curl -L$(uname -s)-$(uname -m) > /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose

## create application
$ cat
from flask import Flask
from redis import Redis
import os
app = Flask(__name__)
redis = Redis(host='redis', port=6379)

def hello():
    redis.incr('hits')
    return 'Hello World! I have been seen %s times.' % redis.get('hits')

if __name__ == "__main__":"", debug=True)

$ cat requirements.txt
## create image with Dockerfile
FROM python:2.7
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt

## define services in docker-compose.yml
  build: .
  command: python
  ports: # forward the exposed container port to the host
    - "5000:5000"
  volumes: # mount current directory inside the container to avoid rebuilding the image
    - .:/code
  links: # connect up the Redis service
    - redis
  image: redis

## build and run (in daemon background)
$ docker-compose up -d
Starting composetest_redis_1...
Starting composetest_web_1...
$ docker-compose ps

$ docker-compose stop
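Compose has a few more day-to-day commands worth knowing (service name 'web' as defined above):

# run a one-off command against a service
$ docker-compose run web env
# tail the services' logs
$ docker-compose logs
# scale a service to multiple containers
$ docker-compose scale web=2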


Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual host. It ships with a simple scheduling backend out of the box, and as initial development settles, an API will develop to enable pluggable backends.

Each swarm node runs a swarm agent which registers the referenced Docker daemon, then monitors it, updating the discovery backend with its status.

# create a cluster
$ docker run --rm swarm create
6856663cdefdec325839a4b7e1de38e8 # <- this is your unique <cluster_id>

# on each of your nodes, start the swarm agent
# <node_ip> doesn't have to be public (eg. 192.168.0.X),
# as long as the swarm manager can access it.
$ docker run -d swarm join --addr=<node_ip:2375> token://<cluster_id>

# start the manager on any machine or your laptop
$ docker run -d -p <swarm_port>:2375 swarm manage token://<cluster_id>

# use the regular docker cli
$ docker -H tcp://<swarm_ip>:<swarm_port> info
$ docker -H tcp://<swarm_ip>:<swarm_port> run ...
$ docker -H tcp://<swarm_ip>:<swarm_port> ps
$ docker -H tcp://<swarm_ip>:<swarm_port> logs ...

# list nodes in your cluster
$ docker run --rm swarm list token://<cluster_id>
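The scheduler can be steered with constraint expressions passed as environment variables; a sketch (the node name matches the machine example above, the image is arbitrary):

$ docker -H tcp://<swarm_ip>:<swarm_port> run -d -e constraint:node==swarm-node-00 nginx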


