Month: March 2015

How to do network bridging in Linux (using initscripts/ifcfg/ifupdown, brctl/bridge-utils, nmcli/networkmanager, netctl/arch and systemd-networkd)

A network bridge is a Link Layer device which forwards traffic between networks based on MAC addresses and is therefore also referred to as a Layer 2 device.

It makes forwarding decisions based on tables of MAC addresses which it builds by learning what hosts are connected to each network. A software bridge can be used within a Linux host in order to emulate a hardware bridge, for example in virtualization applications for sharing a NIC with one or more virtual NICs.
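
Once a bridge such as the br0 used throughout the examples below is up, its ports and its learned MAC table can be inspected; a quick sketch using bridge-utils and iproute2:
# list bridges and their attached ports
$ brctl show
# show the MAC address table learned by 'br0'
$ brctl showmacs br0
# same information with iproute2
$ bridge link
$ bridge fdb show br br0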

  • ifup/ifdown@man network interface configuration files used by ifup/ifdown, called by initscripts (used prior to systemd). Works on all distros (but configuration file location and syntax change).
$ apt-get|yum install bridge-utils

# disable network manager
$ sudo service NetworkManager stop ; sudo chkconfig NetworkManager off

## fedora/rhel
$ vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BRIDGE=br0
$ vi /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
NM_CONTROLLED=no
ONBOOT=yes
TYPE=Bridge
# either static
BOOTPROTO=none
IPADDR=10.10.1.105
NETMASK=255.255.255.0
GATEWAY=10.10.1.1
DNS1=8.8.8.8
DNS2=8.8.4.4
# or dhcp
#BOOTPROTO=dhcp
$ service network restart

## bridging in debian/ubuntu
$ vi /etc/network/interfaces
#auto eth0
#iface eth0 inet dhcp
# either static
iface br0 inet static
  bridge_ports eth0 eth1
  address 10.10.1.105
  broadcast 10.10.1.255
  netmask 255.255.255.0
  gateway 10.10.1.1
# or dhcp
auto br0
iface br0 inet dhcp
  bridge_ports eth0
  bridge_stp off
  bridge_fd 0
  bridge_maxwait 0
$ service networking restart

## manual, non-persistent using brctl
$ brctl addbr br0
$ brctl addif br0 eth0
# assign ip to bridge
$ ip link set dev br0 up
$ ip addr add dev br0 10.10.1.105/24
# delete
$ ip link set dev br0 down
$ brctl delif br0 eth0
$ brctl delbr br0

## manual, non-persistent using iproute2/net-tools
$ ip link add name br0 type bridge
$ ip link set dev eth0 promisc on
$ ip link set dev eth0 master br0
# assign ip to bridge
$ ip link set dev br0 up
$ ip addr add dev br0 10.10.1.105/24
# delete
$ ip link set eth0 promisc off
$ ip link set dev eth0 nomaster
$ ip link delete br0 type bridge

from bridge@rhel and bridge@debian

  • nmcli@man command-line tool for controlling NetworkManager. Works on most modern distros.
# install
$ sudo yum install NetworkManager | sudo apt-get install network-manager | sudo pacman -Sy networkmanager

# create bridge
$ nmcli con add type bridge autoconnect yes con-name br0 ifname br0

# assign ip either static
$ nmcli con mod br0 ipv4.addresses "10.10.1.105/24 10.10.1.1" ipv4.method manual 
$ nmcli con mod br0 ipv4.dns "8.8.8.8 8.8.4.4"
# or dhcp
$ nmcli con mod br0 ipv4.method auto

# remove current setting and add interface to bridge
$ nmcli c delete eth0
$ nmcli c add type bridge-slave autoconnect yes con-name eth0 ifname eth0 master br0

$ systemctl restart NetworkManager | service network-manager restart
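
To activate and verify the new connections (a small sketch, using the connection names created above):
$ nmcli con up br0
$ nmcli con show
$ brctl show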

from nmcli@rhel7

  • netctl@arch is a CLI-based tool used to configure and manage network connections via profiles. Arch only.
## install
$ sudo pacman -Sy netctl

## create 'br0' with real ethernet adapter 'eth0' and 'tap0' tap device
$ cp /etc/netctl/examples/bridge /etc/netctl/bridge
$ vi /etc/netctl/bridge
Description="Example Bridge connection"
Interface=br0
Connection=bridge
BindsToInterfaces=(eth0 tap0)
# either static
IP=static
Address='192.168.10.20/24'
Gateway='192.168.10.200'
SkipForwardingDelay=yes # ignore (R)STP and immediately activate the bridge
# or dynamic
#IP=dhcp

$ netctl enable bridge ; netctl start bridge

from Bridge with netctl@arch

  • systemd.network@man: as of version 210, systemd supports basic network configuration through udev and networkd.
# disable network manager
$ systemctl disable NetworkManager
# enable daemons
$ systemctl enable systemd-networkd
$ systemctl restart systemd-networkd
$ systemctl enable systemd-resolved
$ ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf

# create bridge
$ vi /etc/systemd/network/br0.netdev 
[NetDev]
Name=br0
Kind=bridge
$ vi /etc/systemd/network/br0.network
[Match]
Name=br0
[Network]
# static
DNS=192.168.250.1
Address=192.168.250.33/24
Gateway=192.168.250.1
# or dynamic
#DHCP=v4

# assign network adaptor
$ vi /etc/systemd/network/uplink.network
[Match]
Name=en*
[Network]
Bridge=br0

# using in container
$ systemd-nspawn --network-bridge=br0 -bD /path_to/my_container
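
To check that networkd actually created and configured the bridge (a sketch):
$ systemctl status systemd-networkd
$ ip -d link show br0
$ ip addr show br0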

from Network bridge@arch

How to generate SSL CSR (certificate signing request) and self-signed certificates for Apache/Nginx (using OpenSSL)

  • OpenSSL is an open-source implementation of the SSL and TLS protocols.
'req' PKCS#10 certificate request and certificate generating utility.
'-x509' outputs a self signed certificate instead of a certificate request
'-newkey alg:file' creates a new certificate request and a new private key
'-keyout filename' filename to write the newly created private key to
'-out filename' filename to write to
'-days n' number of days to certify the certificate for, defaults to 30 for x509

# create private key 'key.pem' and generate a certificate signing request 'req.pem'
$ openssl req -newkey rsa:1024 -keyout key.pem -out req.pem
or
$ openssl genrsa -out key.pem 1024 ; openssl req -new -key key.pem -out req.pem

# generate a self signed root certificate 'cert.pem' and private key 'key.pem'
$ openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365

from openssl-req@man
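
The generated request and certificate can be inspected in human-readable form (a sketch using the standard -text output):
# show the CSR contents
$ openssl req -in req.pem -noout -text
# show the certificate contents
$ openssl x509 -in cert.pem -noout -text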

'-nodes' if a private key is created it will not be encrypted

# generate a private key '$CERT.key', a certificate signing request '$CERT.csr' and a self-signed certificate '$CERT.crt' for apache
$ export CERT=/etc/httpd/ssl/server
$ openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out $CERT.key
$ chmod 600 $CERT.key
$ openssl req -new -key $CERT.key -out $CERT.csr
$ openssl x509 -req -in $CERT.csr -signkey $CERT.key -out $CERT.crt -days 365
# edit SSLCertificateFile $CERT.crt and SSLCertificateKeyFile $CERT.key

# same
$ export CERT=/etc/httpd/ssl/server
$ openssl req -x509 -nodes -newkey rsa:2048 -keyout $CERT.key -out $CERT.crt -days 365

# same but using 'make testcert'
$ cd /usr/share/ssl/certs ; make testcert

# same but using 'crypto-utils'
$ sudo yum install crypto-utils | sudo apt-get install crypto-utils
$ genkey your_FQDN
# edit SSLCertificateFile and SSLCertificateKeyFile

from How to Create Self-Signed SSL Certificates and Keys for Apache

$ nginx -V
TLS SNI support enabled
$ mkdir -p /etc/nginx/ssl/ ; cd $_

# create private key; asks for passphrase
$ openssl genrsa -des3 -out self-ssl.key 2048
# create a certificate signing request - CSR
$ openssl req -new -key self-ssl.key -out self-ssl.csr
# optional remove passphrase
$ cp -v self-ssl.{key,original} ; openssl rsa -in self-ssl.original -out self-ssl.key ; rm -v self-ssl.original
# create certificate
$ openssl x509 -req -days 365 -in self-ssl.csr -signkey self-ssl.key -out self-ssl.crt
# configure nginx
$ cat /etc/nginx/virtual/.conf
server {
  listen 443;
  ssl on;
  ssl_certificate /path/to/self-ssl.crt;
  ssl_certificate_key /path/to/self-ssl.key;
  server_name theos.in;
}

# verify certificates
$ openssl verify pem-file
$ openssl verify self-ssl.crt

from HowTo: Create a Self-Signed SSL Certificate on Nginx For CentOS / RHEL

Using bhyve (BSD hypervisor)

BHyVe is a type-2 hypervisor developed on FreeBSD. It's similar to KVM, and takes a different approach than jail/LXC containers.

Requires Intel VT-x and Extended Page Tables (EPT). BIOS/UEFI support is in progress. Minimal device emulation support (virtio-blk, virtio-net).

Supported guest OSes are FreeBSD/amd64 (using bhyveload@man) and OpenBSD/Linux (using grub2-bhyve).

It consists of bhyve@man (the userland part of the hypervisor, which emulates devices), bhyvectl (a management tool), libvmmapi (a userland API wrapping /dev/vmm operations) and vmm.ko (the kernel part of the hypervisor).

Preparing host

# detect vmx support (the message below is what an unsupported processor reports)
$ dmesg | grep vmx
vmx_init: processor does not support VMX operation
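
To check for EPT support specifically, look at the CPU feature lines printed at boot (a sketch; on capable processors EPT appears in the "VT-x:" feature list, e.g. "VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID"):
$ dmesg | grep VT-x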

# load kernel module
$ kldload vmm
# create tap device and bridge network interface
$ ifconfig tap0 create
$ sysctl net.link.tap.up_on_open=1
$ ifconfig bridge0 create
# add lan 'em0' and tap interfaces to the bridge
$ ifconfig bridge0 addm em0 addm tap0
$ ifconfig bridge0 up

## persistent configuration: start bhyve guests at boot time
$ vi /etc/sysctl.conf
net.link.tap.up_on_open=1
$ vi /boot/loader.conf
vmm_load="YES"
nmdm_load="YES"
if_bridge_load="YES"
if_tap_load="YES"
$ vi /etc/rc.conf
cloned_interfaces="bridge0 tap0"
ifconfig_bridge0="addm igb0 addm tap0"

## delete/destroy network interface
$ ifconfig bridge0 down
$ ifconfig bridge0 deletem em0 deletem tap0
$ ifconfig bridge0 destroy

Creating guests

# load a guest
$ bhyveload -m ${mem} -d ${disk} ${name}
# run it
$ bhyve -c ${cpus} -m ${mem} -s 0,hostbridge -s 2,virtio-blk,${disk} \
  -s 3,virtio-net,${tap} -s 31,lpc -l com1,stdio vm0
  • FreeBSD
$ fetch ftp://ftp.freebsd.org/pub/FreeBSD/ISO-IMAGES-amd64/10.1/FreeBSD-10.1-RELEASE-amd64-bootonly.iso

# create virtual disk
$ truncate -s 16G guest.img
# 'vmrun.sh' shell wrapper that loads VM and starts it in a loop
$ /usr/share/examples/bhyve/vmrun.sh -c 4 -m 1024M -t tap0 \
  -d guest.img -i -I FreeBSD-10.1-RELEASE-amd64-bootonly.iso guestname
# install guest and in the end enter shell
# (prior to 10.0 only) edit '/etc/ttys' and 
  ttyu0 "/usr/libexec/getty 3wire" xterm on secure
# reboot

# to start guest from virtual disk do
$ /usr/share/examples/bhyve/vmrun.sh -c 4 -m 1024M -t tap0 \
  -d guest.img guestname
  • OpenBSD
$ fetch http://people.freebsd.org/~grehan/flashimg.amd64-20131014.bz2
$ bunzip2 flashimg.amd64-20131014.bz2

# install port grub2-bhyve
$ cd /usr/ports/sysutils/grub2-bhyve
$ make install clean
# create tap device and add it to the bridge
$ ifconfig tap1 create
$ ifconfig bridge0 addm tap1

# create a grub device map
$ vi obsd.map
(hd0) ./obsd.img
# boot/load image
$ grub-bhyve -m obsd.map -r hd0 -M 512 obsd
# on boot prompt type
kopenbsd -h com0 (hd0,openbsd1)/bsd
boot
# start guest vm, use root:test123
$ bhyve -c 2 -m 512M -A -H -P -s 0:0,amd_hostbridge -s \
  1:0,lpc -s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,obsd.img \
  -l com1,stdio -W obsd
# cleanup
$ bhyvectl --destroy --vm=obsd

see create USB flash installer for OpenBSD

  • Linux
$ fetch http://releases.ubuntu.com/14.10/ubuntu-14.10-server-amd64.iso \
  -o somelinux.iso

# create virtual disk
$ truncate -s 16G linux.img
# create grub device map
$ vi device.map
(hd0) ./linux.img
(cd0) ./somelinux.iso
# boot/load image
$ grub-bhyve -m device.map -r cd0 -M 1024M linuxguest
# install guest and reboot
$ bhyve -AI -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap1 \
  -s 3:0,virtio-blk,./linux.img -s 4:0,ahci-cd,./somelinux.iso \
  -l com1,stdio -c 4 -m 1024M linuxguest
# after install stop guest
$ bhyvectl --destroy --vm=linuxguest

# now can start from virtual disk, and boot
$ grub-bhyve -m device.map -r hd0,msdos1 -M 1024M linuxguest
$ bhyve -AI -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap1 \
  -s 3:0,virtio-blk,./linux.img -l com1,stdio -c 4 -m 1024M linuxguest

see FreeBSD as a Host with bhyve and Virtualization with bhyve @ BSDNow.

Virtual Machine Consoles

# use the null modem device 'nmdm' kernel module to wrap the bhyve console
$ kldload nmdm
$ bhyve -AI -H -P -s 0:0,hostbridge -s 1:0,lpc \
  -s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,./linux.img \
  -l com1,/dev/nmdm0A -c 4 -m 1024M linuxguest

# use 'cu' to attach/reattach to the console
$ cu -l /dev/nmdm0B -s 9600
Connected
...

Managing Virtual Machines

# device node is created in /dev/vmm for each virtual machine
$ ls /dev/vmm
guestname linuxguest

# destroy a VM
$ bhyvectl --destroy --vm=guestname

How to load balance an HTTP server (using HAProxy or Pound)

HAProxy

HAProxy/haproxy@man is a TCP/HTTP reverse proxy which is particularly suited for high availability environments.

## install
$ sudo apt-get install haproxy | sudo yum install haproxy

## configure
$ vi /etc/haproxy/haproxy.cfg
global
  daemon                             # fork into background
  maxconn 256                        # max per-process concurrent connections
defaults    
  mode http                          # 'tcp' for layer4 ssl,ssh,smtp; 'http' for layer7
  timeout connect 5s
  timeout client 50s
  timeout server 50s
frontend http-in
   bind *:80                         # frontend bindto 'ip:port' listener
   reqadd X-Forwarded-Proto:\ http   # add http header to request
   default_backend servers           # backend used when no "use_backend" has been matched
backend servers
   stats enable
   stats hide-version
   stats uri /stats
   stats realm Haproxy\ Statistics
   stats auth haproxy:redhat         # credentials for HAProxy Statistic report page
   balance roundrobin                # roundrobin according to server weights
   cookie LB insert
   server web1-srv 192.168.0.121:80 cookie web1-srv check
   server web2-srv 192.168.0.122:80 cookie web2-srv check
   server web3-srv 192.168.0.123:80 cookie web3-srv check
   server web4-srv 192.168.0.124:80 check backup
   server server1 127.0.0.1:8000 maxconn 32 
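
Before (re)starting, the configuration can be syntax-checked (a sketch; -c runs haproxy in check mode):
$ haproxy -c -f /etc/haproxy/haproxy.cfg
$ sudo service haproxy restart | sudo systemctl restart haproxy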

from haproxy@tecmint, haproxy@xmodulo, haproxy@digitalocean and haproxy@doc

  • Load balancing algorithms used to select a backend server: roundrobin selects servers in turn according to their weights, static-rr same as roundrobin but changing a server's weight on the fly has no effect, leastconn the server with the lowest number of connections is selected, first the first server with available connection slots is selected, source hashes the source ip ensuring the same client ip reaches the same server, uri hashes part or all of the uri to select a server, url_param hashes a url GET or POST parameter value to select a server, hdr() hashes an http header value.
balance roundrobin
balance url_param userid
balance url_param session_id check_post 64
balance hdr(User-Agent)
balance hdr(host)
balance hdr(Host) use_domain_only
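
The remaining algorithms listed above are selected the same way, e.g. (a sketch):
balance leastconn
balance source
balance uri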
  • Using ACLs to select backends: default_backend specify the backend to use when no use_backend rule has been matched, use_backend switch to a specific backend if/unless an ACL-based condition is matched
# use_backend <backend> [{if | unless} <condition>]

# by url
acl url_blog path_beg /blog
use_backend blog-backend if url_blog
default_backend web-backend
  • Server options/params: backup only use this server if all other non-backups are unavailable, cookie cookie value assigned to server used for persistency/affinity, check server availability by making periodic tcp connections, inter interval in ms between checks, maxconn if number of incoming concurrent requests goes higher than this value they will be queued, maxqueue maximal number of connections which will wait in the server queue, ssl enables SSL ciphering on outgoing connections, weight server's weight relative to other servers
# server <name> <address>[:[port]] [param*]
server first  10.1.1.1:1080 cookie first  check inter 1000
server second 10.1.1.2:1080 cookie second check inter 1000
server transp ipv4@
server backup ${SRV_BACKUP}:1080 backup
server www1_dc1 ${LAN_DC1}.101:80
server www1_dc2 ${LAN_DC2}.101:80
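
Several of the parameters above can be combined on one server line; a hypothetical example:
server web5 10.1.1.5:80 check inter 2000 weight 30 maxconn 100 maxqueue 50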
  • Cookie-based backend persistence/affinity: rewrite cookie is provided by the server and haproxy should modify it, insert cookie is provided/inserted by haproxy, prefix use an existing cookie, indirect no cookie will be emitted to a client which already has a valid one for the server which has processed the request, preserve the cookie is emitted by the server and left untouched by haproxy
#cookie <name> [options*]
cookie JSESSIONID prefix
cookie SRV insert indirect nocache
cookie SRV insert postonly indirect
cookie SRV insert indirect nocache maxidle 30m maxlife 8h
  • Stats: stats admin enables the statistics admin level if/unless a condition is matched, stats auth enables statistics with default settings, and restricts access to declared users only
## stats admin { if | unless } <cond>
# enable stats only for localhost
backend stats_localhost
  stats enable
  stats admin if LOCALHOST
# statistics admin level always enabled because of the authentication
backend stats_auth
  stats enable
  stats auth  admin:AdMiN123
  stats admin if TRUE

## stats auth <user>:<passwd>
# public access (limited to this backend only)
backend public_www
  server srv1 192.168.0.1:80
  stats enable
  stats hide-version
  stats scope   .
  stats uri     /admin?stats
  stats realm   Haproxy\ Statistics
  stats auth    admin1:AdMiN123
  stats auth    admin2:AdMiN321
# internal monitoring access (unlimited)
backend private_monitoring
  stats enable
  stats uri     /admin?stats
  stats refresh 5s
  • Logs: log adds a global syslog server, facility syslog facilities
global
  # log <address> [len <length>] <facility> [max level [min level]]
  log 127.0.0.1 local2

# enable udp syslog receiver and facility
$ cat /etc/rsyslog.d/haproxy.conf
local2.*    /var/log/haproxy.log
$ vi /etc/rsyslog.conf
$ModLoad imudp
$UDPServerRun 514
$ service rsyslog restart 
  • SSL: termination means decrypting the SSL connection at the load balancer and sending unencrypted connections to the backend servers, pass-through sends SSL connections directly to the proxied servers, which is more secure but loses the ability to add X-Forwarded-* headers
$ sudo yum install openssl | sudo apt-get install openssl
# generate self-signed certificate for ssl termination only
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/server.key -out /etc/ssl/server.crt
$ cat /etc/ssl/server.crt /etc/ssl/server.key > /etc/ssl/server.pem

## ssl termination
defaults
  mode http
frontend localhost
  bind *:80
  bind *:443 ssl crt /etc/ssl/server.pem
  redirect scheme https if !{ ssl_fc }   # ssl-only: redirect from http to https
  default_backend nodes

## ssl pass-through
defaults
  option tcplog
  mode tcp          # need to be tcp since haproxy treats connections just as a stream
frontend localhost
  bind *:80
  bind *:443
  default_backend nodes

from haproxy with ssl

Pound

pound/pound@man is a lightweight open source reverse proxy, load balancer and SSL wrapper used as a web server load balancing solution:

  • Listeners: how (ip+port) to receive requests from the clients, HTTP and/or HTTPS
  • Services: matching requests (by URL pattern, HTTP header and/or session) with list of back ends. Session can be matched by ip address, basic authentication, url parameter, cookie value or http header
  • Backend: list of (ip+port) web servers, optionally with priority
## install
$ sudo yum install pound (EPEL) | sudo apt-get install pound | cd /usr/ports/www/pound/ && make install clean

## configure
$ vi /etc/pound/pound.cfg | vi /etc/pound.cfg
# global options
User        "www-data"
Group       "www-data"
# logging (goes to syslog by default)
LogLevel    1
# check backend every X secs:
Alive       30

# main listening ports
ListenHTTP
  Address 202.54.1.5
  Port    80
End
ListenHTTPS
  Address 202.54.1.5
  Port    443
  Cert    "/etc/ssl/local.server.pem"
End

# image/static server
Service
  URL ".*.(jpg|gif)"
  BackEnd
    Address 192.168.1.10
    Port    80
  End
End

# virtual host www.mydomain.com (url-based session)
Service
  URL         ".*sessid=.*"
  HeadRequire "Host:.*www.mydomain.com.*"
  BackEnd
    Address 192.168.1.11
    Port    80
  End
  Session
    Type    PARM
    ID      "sessid"
    TTL     120
  End
End

# everything else (cookie-based session)
Service
  BackEnd
    Address 192.168.1.5
    Port    80
    Priority 5
  End
  BackEnd
    Address 192.168.1.6
    Port    80
    Priority 4
  End
  Session
    Type    COOKIE
    ID      "userid"
    TTL     180
  End
End

# restart
$ /etc/init.d/pound restart
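
The configuration can also be parse-checked before restarting (a sketch; -c makes pound exit after checking the config file):
$ pound -c -f /etc/pound/pound.cfg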

from pound@cyberciti

How to cache HDDs with SSDs in Linux (using bcache)

Bcache (or block level cache) is a Linux kernel block layer cache. It allows one or more fast disk drives such as flash-based solid state drives (SSDs) to act as a cache for one or more slower hard disk drives. It is designed around the performance characteristics of SSDs.

## install
$ sudo yum install bcache-tools | sudo apt-get install bcache-tools

## usage, assuming the HDD is /dev/sda and the SSD is /dev/sdb
# wipe (from util-linux) devices
$ wipefs -a /dev/sda1 ; wipefs -a /dev/sdb1

# format backing/hdd and cache/ssd devices
$ make-bcache -B /dev/sda1 ; make-bcache -C /dev/sdb1

# attach the cache device to our bcache device 'bcache0'
$ echo C_Set_UUID_VALUE > /sys/block/bcache0/bcache/attach
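
The cache set UUID needed above can be read from the cache device's superblock (a sketch using bcache-tools):
# print the cset.uuid of the cache device
$ bcache-super-show /dev/sdb1 | grep cset.uuid
# check the bcache device state
$ cat /sys/block/bcache0/bcache/state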

# create and mount fs
$ mkfs.ext4 /dev/bcache0
$ mount /dev/bcache0 /mnt

# optionally use faster writeback (instead of default writethrough)
$ echo writeback > /sys/block/bcache0/bcache/cache_mode
# (re)register the backing device manually, e.g. after a reboot
$ echo /dev/sda1 > /sys/fs/bcache/register

# monitor 
$ bcache-status -s

from bcache and/vs. LVM cache

Using Docker Linux containers (containers, images, machine, compose, swarm, weave)

Docker/docker@wiki is a platform for developers and sysadmins to develop, ship, and run applications.

It uses resource isolation features of the Linux kernel such as cgroups and kernel namespaces to allow independent “containers” to run within a single Linux instance, avoiding the overhead of starting virtual machines. It includes the libcontainer library as its own way to directly use virtualization facilities provided by the Linux kernel, in addition to using abstracted virtualization interfaces via libvirt, LXC (Linux Containers) and systemd-nspawn.

## install (http://docs.docker.com/installation/) either from repos
$ sudo yum install docker (el7) | docker-io (el6 using EPEL) | sudo apt-get install docker.io (deb/ubuntu)
# or binary
$ wget https://get.docker.com/builds/Linux/x86_64/docker-latest -O docker
$ chmod +x docker ; sudo ./docker -d &

$ docker version
Client version: 1.5.0
...
  1. Dockerizing
  2. Containers
  3. Images
  4. Linking/Networking
  5. Volumes
  6. Machine
  7. Compose
  8. Swarm

Dockerizing

Docker allows you to run applications inside containers. Running an application inside a container takes a single command: docker run.

$ sudo docker run ubuntu /bin/echo 'Hello world'
Hello world

# interactive container
$ sudo docker run -t -i ubuntu /bin/bash
root@af8bae53bdd3:/#

from Dockerizing Application

Containers

# list containers
'-a,--all=false' show all containers, including stopped
$ docker ps -a

# run/create container from image
'-i,--interactive=false' keep STDIN open even if not attached
'-d,--detach=false' detached mode, run in the background
'-t,--tty=false' allocate a pseudo-TTY
$ docker run -i -t IMAGE CMD

# attach to running container
$ docker ... -d
$ docker attach CONTAINER

# shows container stdout
$ docker logs -f CONTAINER

# stop/start/kill container
$ docker stop/start/kill CONTAINER

# remove containers
$ docker rm CONTAINER
# stop and delete all containers
$ docker kill $(docker ps -a -q) ; docker rm $(docker ps -a -q)

# run command in running container
$ docker exec -d CONTAINER CMD

from Working with Containers

  • Restart/Autostart
# restart
--restart="no|on-failure|always" restart policy to apply when a container exits
$ sudo docker run --restart=always redis

# autostart with systemd
$ cat {/etc,/run,/usr/lib}/systemd/system/redis.service
[Unit]
Description=Redis container
Author=Me
After=docker.service
[Service]
Restart=always
ExecStart=/usr/bin/docker start -a redis_server
ExecStop=/usr/bin/docker stop -t 2 redis_server
[Install]
WantedBy=multi-user.target
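
Assuming the unit file above, enable it so the container comes up at boot (a sketch):
$ sudo systemctl enable redis.service
$ sudo systemctl start redis.service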
  • Resource constraints
# when using systemd, every process will be placed in a cgroups tree, see http://www.freedesktop.org/software/systemd/man/systemd-cgls.html
$ systemd-cgls

# top/meminfo aren't cgroups-aware, use systemd-cgtop instead, see http://www.freedesktop.org/software/systemd/man/systemd-cgtop.html
$ systemd-cgtop
Path Tasks %CPU Memory Input/s Output/s
...

'-c,--cpu-shares=0' CPU shares (relative weight)
'--cpuset=' CPUs in which to allow execution (0-3, 0,1)
$ docker run -it --rm -c 512 stress --cpu 4
$ docker run -it --rm --cpuset=0,1 stress --cpu 2

# change share in running container, see man systemd.resource-control
$ systemctl show docker-$FULL_CONTAINER_ID.scope
$ sudo systemctl set-property docker-$FULL_CONTAINER_ID.scope CPUShares=512
$ ls /sys/fs/cgroup/cpu/system.slice/docker-$FULL_CONTAINER_ID.scope/

'-m,--memory=' memory limit (memory+swap)
$ docker run -it --rm -m 128m fedora bash
$ ls /sys/fs/cgroup/memory/system.slice/docker-$FULL_CONTAINER_ID.scope/

# limiting write speed: 1) get container filesystem id using mount or nsenter, 2) change BlockIOWriteBandwidth
$ mount
CONTAINER_FS_ID=/dev/mapper/docker-253:0-3408580-d2115072c442b0453b3df3b16e8366ac9fd3defd4cecd182317a6f195dab3b88
$ sudo systemctl set-property --runtime docker-$FULL_CONTAINER_ID.scope "BlockIOWriteBandwidth=$CONTAINER_FS_ID 10M"
# use BlockIOReadBandwidth to limit read speed
# use '--storage-opt' to limit, defaults to 10G
$ docker -d --storage-opt dm.basesize=5G
$ ls /sys/fs/cgroup/blkio/system.slice/docker-$FULL_CONTAINER_ID.scope/

from runtime constraints on cpu and memory and systemd resources

Images

A read-only template out of which docker containers are instantiated; it can be defined by a Dockerfile, by committing a container, or by downloading from a Docker Hub Registry.

# list images
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
ubuntu 13.10 5e019ab7bf6d 4 weeks ago 180 MB
...

# search for images
$ docker search centos

# download new image
$ docker pull centos

# remove images
$ docker rmi IMAGE
# delete all ‘untagged/dangling’ images
$ docker rmi $(docker images -q -f dangling=true)

# create new image from a container's changes
$ docker commit CONTAINER NEW-IMAGE-NAME

# tag an image
$ docker tag IMAGE USR/IMAGE:TAG

# push image to hub (needs write permissions)
$ docker push USER/IMAGE:TAG

from Working with Docker Images

# building an image from a Dockerfile
$ cat Dockerfile
FROM ubuntu:14.04
MAINTAINER Kate Smith <ksmith@example.com>
RUN apt-get update && apt-get install -y ruby ruby-dev
RUN gem install sinatra
$ docker build -t USER/IMAGE:TAG .
# create a full image using tar
$ sudo debootstrap raring raring > /dev/null
$ sudo tar -C raring -c . | sudo docker import - raring
$ sudo docker run raring cat /etc/lsb-release

# creating a simple base image using scratch
$ tar cv --files-from /dev/null | docker import - scratch
or
$ cat Dockerfile
FROM scratch
...

Linking/Networking

'-p,--publish=[]' publish a container's port to the host

$ sudo docker run -d -p 5000:5000 IMAGE COMMAND
# bind to ip address
... -p 127.0.0.1:5000:5000
# dynamic port
... -p 127.0.0.1::5000
# also udp
... -p 127.0.0.1:5000:5000/udp
'--link=[]' add link to another container in the form of 'name:alias'

$ sudo docker run -d --name db training/postgres
$ sudo docker run --rm --name web --link db:db training/webapp env
DB_NAME=/web/db
DB_PORT=tcp://172.17.0.5:5432
DB_PORT_5432_TCP=tcp://172.17.0.5:5432
DB_PORT_5432_TCP_PROTO=tcp
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
# use env vars to configure app to connect to db container, connection will be secure and private; only the linked web container will be able to talk to the db container
# also adds a host entry for the source container to the /etc/hosts file. Here's an entry for the web container
$ sudo docker run -t -i --rm --link db:db training/webapp cat /etc/hosts
172.17.0.7 aed84ee21bde
. . .
172.17.0.5 db
  • Networking
    When docker starts it creates a bridge docker0 network virtual interface with a random private address, and automatically forwards packets between any other network attached to it. This lets containers communicate with each other and with the host. Every time Docker creates a container, it creates a pair of “peer” interfaces that are like opposite ends of a pipe — a packet sent on one will be received on the other.
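
The bridge and the per-container peer interfaces can be seen from the host (a sketch; the veth* names are generated by Docker):
# bridge address and attached interfaces
$ ip addr show docker0
$ brctl show docker0
# the host side of each container's 'peer' pair shows up as a veth* interface
$ ip link | grep veth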
## communicate between container and world:
#1 enable IP forward: start docker server with '--ip-forward=true' (this adds 'echo 1 > /proc/sys/net/ipv4/ip_forward')
#2 iptables must allow: start docker server with '--iptables=true' (default) to append forwarding rules to DOCKER filter chain

## communication between containers
#1 by default docker will attach all containers to 'docker0' bridge interface
#2 iptables must allow: start docker server with '--iptables=true --icc=true' (default) to add rule to FORWARD chain with blanket ACCEPT policy

# when set '--icc=false' docker adds a DROP rule and we should use '--link=CONTAINER_NAME_or_ID:ALIAS' to communicate (and you will see an ACCEPT rule overriding the DROP policy)
$ sudo iptables -L -n
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
DROP       all  --  0.0.0.0/0            0.0.0.0/0
Chain DOCKER (1 references)
target     prot opt source               destination
ACCEPT     tcp  --  172.17.0.2           172.17.0.3           tcp spt:80
ACCEPT     tcp  --  172.17.0.3           172.17.0.2           tcp dpt:80

## binding container ports to the host
# by default outside world cannot connect to containers, either
'-P,--publish-all=true' to EXPOSE all ports, or
'-p,--publish=SPEC' to explicit which ports to expose, e.g.: using '-p 80:80'
$ iptables -t nat -L -n
Chain DOCKER (2 references)
target     prot opt source               destination
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:172.17.0.2:80
  • Weave
    Weave creates a virtual network that connects Docker containers deployed across multiple hosts.
## install
$ curl -L https://github.com/zettio/weave/releases/download/latest_release/weave > /usr/local/bin/weave
$ chmod a+x /usr/local/bin/weave

## launch weave router on each host
# on the first host
$(hostA) sudo weave launch
# on other hosts
$(hostB) sudo weave launch <first-host-IP-address>
# check status
$ sudo weave status

## interconnect docker containers across multiple hosts
# e.g.: create ubuntu container and attach to 10.0.0.0/24 with 10.0.0.1
$(hostA) sudo weave run 10.0.0.1/24 -t -i ubuntu
# create another container on same network with 10.0.0.2
$(hostB) sudo weave run 10.0.0.2/24 -t -i ubuntu

## create multiple virtual networks
# detach from a network and attach to another
$ sudo weave run 10.0.0.2/24 -t -i ubuntu
$ sudo weave attach 10.10.0.2/24 <container-id>

## integrate weave networks with host network
# assign 10.0.0.100 to hostA so it can connect to the 10.0.0.0/24 network
$(hostA) sudo weave expose 10.0.0.100/24

from weave@xmodulo

Volumes

  • Using data volume
    Specially-designated directory within one or more containers that bypasses the Union File System. Can be shared and reused between containers, changes will not be included when you update an image, persist until no containers use them
'-v,--volume=[]' bind mount a volume from the host '-v /host:/container', from Docker '-v /container'

$ sudo docker run -d -P --name web -v /webapp IMAGE COMMAND
# mount a directory from your host into a container
... -v /src/webapp:/opt/webapp
# mount as read-only
... -v /src/webapp:/opt/webapp:ro
# mount a host file as a data volume
... -v $HOME/.bash_history:/.bash_history
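
To find where a data volume lives on the host, inspect the container (a sketch; in this Docker 1.x era the Volumes field maps container paths to host paths):
$ docker inspect -f '{{ .Volumes }}' web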
  • Using named data volume container
    If you want to share between containers, or want to use from non-persistent containers, it’s best to create a named Data Volume Container, and then to mount the data from it.
'--volumes-from=[]' Mount volumes from the specified container(s)

# create named container with volume to share
$ sudo docker run -d -v /dbdata --name dbdata training/postgres

# use '--volumes-from' to mount the volume in another container
$ sudo docker run -d --volumes-from dbdata --name db1 training/postgres
# chain by mounting volumes from another container
$ sudo docker run -d --name db3 --volumes-from db1 training/postgres
$ sudo docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata

# create new container and restore/untar
$ sudo docker run -v /dbdata --name dbdata2 ubuntu /bin/bash
$ sudo docker run --volumes-from dbdata2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar

Machine

Machine makes it really easy to create Docker hosts on your computer, on cloud providers and inside your own data center. It creates servers, installs Docker on them, then configures the Docker client to talk to them.

  • With a local VM (virtualbox)
## install
$ curl -L https://github.com/docker/machine/releases/download/v0.1.0/docker-machine_linux-amd64 > /usr/local/bin/docker-machine   # pick the binary matching your OS/arch
$ chmod +x /usr/local/bin/docker-machine

## creates a VM in virtualbox running boot2docker with docker daemon
$ docker-machine create --driver virtualbox dev
$ docker-machine env dev
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/home/ehazlett/.docker/machines/.client
export DOCKER_HOST=tcp://192.168.99.100:2376
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL
dev * virtualbox Running tcp://192.168.99.100:2376

## run docker commands as usual
$ $(docker-machine env dev)
$ docker run busybox echo hello world

## stop/start machine
$ docker-machine stop|start
  • With a cloud provider (digitalocean, aws, gce, azure)
## go to digitalocean's admin, generate a new write token and grab hex string
$ docker-machine create --driver digitalocean --digitalocean-access-token 0ab77166d407f479c6701652cee3a46830fef88b8199722b87821621736ab2d4 staging
$ docker-machine active dev
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL
dev virtualbox Running tcp://192.168.99.103:2376
staging * digitalocean Running tcp://104.236.50.118:2376
  • Add host without driver. Used an alias for an existing host so you don’t have to type out the URL every time you run a Docker command
$ docker-machine create --url=tcp://50.134.234.20:2376 custombox
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL
custombox * none Running tcp://50.134.234.20:2376
  • With Docker Swarm. Used to provision Swarm clusters
## generate swarm token
$ docker-machine create -d virtualbox local
$ $(docker-machine env local)
$ docker run swarm create
1257e0f0bbb499b5cd04b4c9bdb2dab3

# create swarm master
$ docker-machine create -d virtualbox --swarm --swarm-master \
  --swarm-discovery token://<TOKEN> swarm-master

# create more swarm nodes
$ docker-machine create -d virtualbox --swarm \
  --swarm-discovery token://<TOKEN> swarm-node-00

# to connect to the swarm master
$ $(docker-machine env --swarm swarm-master)

# to query
$ docker info
Containers: 1
Nodes: 1
swarm-master: 192.168.99.100:2376
  Containers: 2

Compose

Compose is a tool for defining and running complex applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up in a single command which does everything that needs to be done to get it running. It's the replacement for fig.

## install
$ curl -L https://github.com/docker/compose/releases/download/1.1.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose

## create application
$ cat app.py
from flask import Flask
from redis import Redis
import os
app = Flask(__name__)
redis = Redis(host='redis', port=6379)
@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello World! I have been seen %s times.' % redis.get('hits')

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
$ cat requirements.txt
flask
redis

## create image with Dockerfile
FROM python:2.7
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt

## define services in docker-compose.yml
web:
  build: .
  command: python app.py
  ports:    # forward the exposed port on the container to a port on the host machine
    - "5000:5000"
  volumes:  # mount current directory inside the container to avoid rebuilding the image
    - .:/code
  links:    # connect up the Redis service
    - redis
redis:
  image: redis

## build and run (in daemon background)
$ docker-compose up -d
Starting composetest_redis_1...
Starting composetest_web_1...
$ docker-compose ps
...

$ docker-compose stop

Swarm

Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual host. It ships with a simple scheduling backend out of the box, and as initial development settles, an API will develop to enable pluggable backends.

Each swarm node will run a swarm node agent which will register the referenced Docker daemon, and will then monitor it, updating the discovery backend to its status.

# create a cluster
$ docker run --rm swarm create
6856663cdefdec325839a4b7e1de38e8 # <- this is your unique <cluster_id>

# on each of your nodes, start the swarm agent
# <node_ip> doesn't have to be public (eg. 192.168.0.X),
# as long as the swarm manager can access it.
$ docker run -d swarm join --addr=<node_ip:2375> token://<cluster_id>

# start the manager on any machine or your laptop
$ docker run -d -p <swarm_port>:2375 swarm manage token://<cluster_id>

# use the regular docker cli
$ docker -H tcp://<swarm_ip:swarm_port> info
$ docker -H tcp://<swarm_ip:swarm_port> run ...
$ docker -H tcp://<swarm_ip:swarm_port> ps
$ docker -H tcp://<swarm_ip:swarm_port> logs ...
...

# list nodes in your cluster
$ docker run --rm swarm list token://<cluster_id>

How to build Cordova mobile app (using nodejs)

Apache Cordova/PhoneGap@wiki is a platform for building native mobile applications using HTML, CSS and JavaScript.

  • Install SDKs
# debian/ubuntu, see http://askubuntu.com/questions/318246/complete-installation-guide-for-android-sdk-adt-bundle-on-ubuntu
$ sudo apt-get install nodejs ant openjdk-8-jdk (or openjdk-7-jdk)
$ cat ~/.bash_cordova
#!/bin/bash
export JAVA_HOME=/usr/lib/jvm/default-java/
# export ANT_HOME=/usr/bin
# download and unzip android-sdk from http://developer.android.com/sdk/index.html, run "Android SDK Manager" to install platforms and add to path
export ANDROID_SDK="$HOME/android-sdk"
export PATH=$PATH:$JAVA_HOME/bin:$ANDROID_SDK/platform-tools:$ANDROID_SDK/tools

# cygwin using chocolatey
$ cinst node ant java.jdk (doesn't work due to license)
$ cat ~/.bash_cordova
#!/bin/bash
export JAVA_HOME="C:\Program Files\Java\jdk1.7.0_51"
export ANT_HOME="C:\apache-ant-1.9.3"
# download and unzip android-sdk from http://developer.android.com/sdk/index.html, run "Android SDK Manager" to install platforms and add to path
export ANDROID_SDK="$LOCALAPPDATA\Android\android-sdk"
export PATH=$PATH:`cygpath $JAVA_HOME`/bin:`cygpath $ANT_HOME`/bin:`cygpath $ANDROID_SDK`/platform-tools:`cygpath $ANDROID_SDK`/tools
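
After sourcing the environment file, the toolchain can be sanity-checked (a sketch):
$ source ~/.bash_cordova
$ java -version ; ant -version
$ android list targets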

from Cordova android platform guide

$ mkdir myapp && cd myapp
$ yo cordova (yes to webapp generator)
$ cd yeoman
$ cordova platform add android
$ grunt run (connect phone, enable usb debug and MTP storage)
# other commands
$ grunt build (both buildweb and cordova-build) or grunt emulate (needs avd device)
# note: from source control, keep yeoman/ only, mkdir {platforms,plugins,www}, cp yeoman/app/config.xml www/ or grunt build, add plugins and platforms
# e.g.: pre-existing web project using grunt
$ npm install -g generator-angular|generator-webapp
$ mkdir myapp
$ yo angular|webapp
$ bower install && npm install

# add npm's cordova
$ npm install cordova --save-dev
$ (add <script src="cordova.js"></script> to index.html)
# edit Gruntfile.js
var cordova = require('cordova');
var cordovaConfig = { platform: grunt.option('platform') };
yeoman: { ..., cordova: 'www'
clean: { dist: { files: [{ src: [ ...., '/*'
copy: { ..., cordova: { expand: true, cwd: '', dest: '', src: '**' }
grunt.registerTask('cordova-prepare', 'Prepare the native application', function() {
    var done = this.async();
    if (cordovaConfig.platform === null) { cordova.prepare(done); } 
    else { cordova.prepare(cordovaConfig.platform, done); }
});
grunt.registerTask('cordova-build', 'Build the native application', function() {
    var done = this.async();
    if (cordovaConfig.platform === null) { cordova.build(done); } 
    else { cordova.build(cordovaConfig.platform, done); }
});
grunt.registerTask('cordova-emulate', 'Emulate the application', function(){
    var done = this.async();
    if (cordovaConfig.platform === null) { cordova.emulate(done); } 
    else { cordova.emulate(cordovaConfig.platform, done); }
});
grunt.registerTask('cordova-run', 'Run the application on a device', function() {
    var done = this.async();
    if (cordovaConfig.platform === null) { cordova.run(done); } 
    else { cordova.run(cordovaConfig.platform, done); }
});
# build the cordova project
$ grunt build copy:cordova cordova-build

note 'cordova build' is shorthand for `cordova prepare && cordova compile`
use `grunt cordova-run` to run (assuming cordova-build was called previously) on connected devices. Needs MTP (storage) and USB debug enabled
use `grunt cordova-emulate` to run on an emulator (needs `android create avd --name <name> --target <targetID>`)

from Mobile Apps with Phonegap and Yeoman

$ cordova plugin add https://github.com/mkuklis/phonegap-websocket
$ cordova plugin add https://github.com/alongubkin/phonertc.git
$ cordova build android
$ cordova run android

Useful CLI tools for Linux system devops/admins and developers

Here is a list of commands used by sysadmins and developers to do just about anything from the CLI. From backup, to network configuration, compression, compilation, debugging, package management, process management, text editing, …

Backup/Copy Tools
* dd
* duplicity
* rsync/scp
* rdiff-backup
* rsnapshot
* unison

Command Interpreters (Shells)
* bash
* csh
* dash/ash
* mc
* zsh

Compiler Tools
* autoconf/automake
* clang
* gcc/ld/as/gdb
* make

Compression and Archiving Tools
* 7z
* bzip2
* gzip
* pax
* tar
* zip
* xz

Daemon Tools
* service
* systemctl/systemd doc/systemd arch/systemd debian/systemd cheatsheet
* upstart

Download Tools
* axel
* curl
* lftp
* wget

File Managers
* mc
* ranger
* tree
* vifm

Hardware Tools
* inxi
* lspci
* lshw

Logging
* ccze
* logrotate
* rsyslog

Network active monitoring
* arp-scan
* iperf
* netcat/socat
* ping/tcpping
* sprobe
* tracepath/traceroute

Network configuration
* dig/nslookup
* ip/route/ifconfig
* ipcalc/sipcalc/whatmask
* tc

Network packet sniffing
* dhcpdump
* dsniff
* iptraf
* httpry
* ngrep
* p0f
* pktstat
* snort
* tcpdump
* tshark

Network passive flow/stats
* bmon
* iftop
* lsof
* nethogs
* netstat/ss
* speedometer
* speedtest-cli
* tcptrack
* vnstat

Online Resources
* commandlinefu
* free programming books

Package Management Tools
* apt-get/aptitude/dpkg
* yum/rpm
* pacman
* zypper
* pkg(BSD)

Performance Monitoring Tools
* dstat
* iotop
* iostat
* httpry
* nethogs
* ngxtop
* ps
* sar
* smem
* top/htop/atop

Processor Management Tools
* kill/killall/pkill
* nice/renice
* pgrep
* taskset

Productivity Tools
* byobu
* cal
* cheat
* cmus
* fortune
* mutt
* pv
* screen
* screenfetch
* ssh
* tmux
* weather/WMO/ICAO
* weechat/irssi

Security Tools
* getfacl/setfacl
* lynis
* nmap
* iptables
* passwd/apg

Source Control
* git
* hg
* svn

Storage Tools
* lvm
* mount

Text Processing Tools
* awk
* diff/patch
* grep/ack
* tail/multitail/ztail
* sed

Text Editors
* emacs
* nano
* vim

from Useful CLI tools for Linux system admins

How to upgrade a VMware ESXi server (using esxcli)

VMware ESXi is a type-1 hypervisor (native or bare-metal), and thus includes its own kernel. To upgrade to the latest version we can use esxcli, see Methods for upgrading to ESXi 5.5 (2058352) for all the upgrade methods.

# enable ssh in ESXi and open a ssh root session

# shut down all VMs running

# enter in maintenance mode
$ vim-cmd /hostsvc/maintenance_mode_enter

# correct firewall rules 
$ esxcli network firewall ruleset set -e true -r httpClient

# list all available "-standard" versions
$ esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep standard

# choose one/latest, update (don't install, update!) and reboot
$ esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-5.5.0-20150204001-standard
$ reboot

# reopen ssh and exit from maintenance mode
$ vim-cmd /hostsvc/maintenance_mode_exit
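
Once the host is back up, the running version/build can be confirmed (a sketch):
$ vmware -vl
$ esxcli system version get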

from Der Flounder Blog

How to stream/pipe a file to the browser from CLI (using eventsource or websockets)

It may be useful to stream/pipe a file's content (a “tail -f”) to the browser, e.g. a log.

Using Server-sent events and netcat@wiki

# on the server, install netcat
$ sudo apt-get install netcat | sudo apt-get install netcat-openbsd

# and stream a file
$ (echo -e "HTTP/1.1 200 OK\nAccess-Control-Allow-Origin: *\nContent-type: text/event-stream\n" && tail -f file | sed -u -e 's/^/data: /;s/$/\n/') | nc -l 1234

# on the browser, run js snippet
new EventSource("http://localhost:1234/").onmessage = function(e) { console.log(e.data); };

See sse@caniuse and sse@html5rocks

Using WebSocket and websocketd (static binary written in go)

# on the server, install websocketd, see https://github.com/joewalnes/websocketd/wiki/Download-and-install

# and stream a file
$ websocketd --port 1234 tail -f /path/to/file

# on the browser, run js snippet
new WebSocket("ws://localhost:1234/").onmessage = function(e) { console.log(e.data); };
# or from the cli, using http://einaros.github.io/ws/
$ npm install -g ws ; wscat -c ws://localhost -p 1234

See ws@caniuse and ws@html5rocks