Month: February 2015

Using systemd in Linux (services, journal, locale, network, container, …)

systemd is a suite of basic building blocks for a Linux system. It provides a system and service manager that runs as PID 1 and starts the rest of the system.

It uses socket and D-Bus activation for starting services, offers on-demand starting of daemons, keeps track of processes using Linux control groups, supports snapshotting and restoring of the system state, maintains mount and automount points, and implements an elaborate transactional dependency-based service control logic. It supports SysV and LSB init scripts and works as a replacement for sysvinit.

Other parts include a logging daemon, utilities to control basic system configuration like the hostname, date, locale, maintain a list of logged-in users and running containers and virtual machines, system accounts, runtime directories and settings, and daemons to manage simple network configuration, network time synchronization, log forwarding, and name resolution.

systemd is under development, see changelog/news

$ /usr/lib/systemd/systemd --version
systemd 218
$ man systemd.index

from systemd/systemd@wiki.

  1. System and service manager
  2. Journal (logging)
  3. Configuration files (hostname, locale, timedate, users, network, mounts, …)
  4. Filesystem hierarchy
  5. Containers

System and service manager

systemctl controls the systemd system and service manager /usr/lib/systemd/systemd. When run as first process on boot (as PID 1), it acts as init system that brings up and maintains userspace services.

## installation (usually done by distros)
# boot runs init as first process (w/ PID 1) on boot
$ ln -s /usr/lib/systemd/systemd /sbin/init
# use grub2-mkconfig to generate init option
# (usually isn't needed if using initramfs generated by dracut with systemd)
$ cat /etc/default/grub
GRUB_CMDLINE_LINUX="init=/usr/lib/systemd/systemd"
# to diagnose boot problems via 'dmesg', boot with
systemd.log_level=debug systemd.log_target=kmsg log_buf_len=1M

## usage
systemctl [OPTIONS...] COMMAND [NAME...]
NAME is 'unit-name[.service]', 'unit-name.socket', or a path like '/mount-point' for '.mount,.device' units
template units use 'name@string.service' for 'name@.service' where '%i' is 'string'
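For example, a hypothetical template 'echo@.service' can be instantiated as 'echo@foo.service', with '%i' expanding to 'foo' (unit name and command are made up for illustration):

```ini
# /etc/systemd/system/echo@.service (hypothetical example)
[Unit]
Description=Echo instance %i
[Service]
# oneshot, since echo exits immediately
Type=oneshot
ExecStart=/bin/echo instance %i
```

Starting 'echo@foo.service' then runs '/bin/echo instance foo'.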

# list unit files installed from '{/etc,/run,/usr/lib}/systemd/system/*'
$ systemctl list-unit-files
# list units status, LOAD (properly loaded), ACTIVE/SUB (high-level/low-level activation state)
$ systemctl list-units
# reload systemd, scanning for new or changed units
$ systemctl daemon-reload
# list dependencies
$ systemctl list-dependencies
# show unit file
$ systemctl cat
# list services that failed to activate/start
$ systemctl list-units --state=failed

# sysvinit/service equivalents
$ systemctl start,stop,status,show,restart,reload
# reload if supported, restart otherwise
$ systemctl reload-or-restart
# restart if running, nothing otherwise; same as condrestart
$ systemctl try-restart
# reload if supported, try-restart otherwise; same as force-reload
$ systemctl reload-or-try-restart
# reload if supported, restart otherwise
$ systemctl reload-or-restart
# send a signal to one or more processes of the unit
$ systemctl kill --signal=SIGTERM unit-name

# control access to system resources; '--runtime' for non-persistent between reboots
$ systemctl set-property unit-name CPUShares=512 MemoryLimit=1G

# remote control; '-H user@host'
$ systemctl -H user@host COMMAND

# SysVinit/chkconfig equivalents
$ systemctl enable,disable,is-enabled
# mask prohibits all kinds of activation of the unit, including enablement and manual activation
$ systemctl mask,unmask
  • systemd.unit a unit configuration file encodes information about a service, a socket, a device, a mount point, an automount point, a swap file or partition, a start-up target, a watched file system path, a timer controlled and supervised by systemd(1), a temporary system state snapshot, a resource management slice or a group of externally created processes.
$ ls {/etc,/run,/usr/lib}/systemd/system/*
# find overridden configuration files
$ systemd-delta systemd/system
# '.include' is no longer supported; use '<unit-name>.<type>.d/' drop-in '.conf' files, parsed after the unit file itself

$ man systemd.directives
[Unit]
'Before=, After=' Ordering dependencies between units
'Requires=' Units to also activate. Also deactivates when others deactivate (or fail to activate)
'Wants=' Same as 'Requires=' but doesn't deactivate on failure
'Conflicts=' Configures negative requirement dependencies. Independent of 'After=,Before='
'OnFailure=' Units to activate on 'failed'
'ConditionFirstBoot=yes' Used to populate '/etc' on first boot after factory reset
'ConditionPathExists=/path' File existence condition is checked before a unit is started
'AssertXXX=' Same as 'ConditionXXX' but sets unit state to 'failed'
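Note that 'Requires=' does not imply ordering: to start a unit strictly after its dependency is up, pair it with 'After='. A minimal sketch with hypothetical units 'a.service' and 'b.service':

```ini
# /etc/systemd/system/b.service (hypothetical example)
[Unit]
Description=B, needs A and starts after it
Requires=a.service
After=a.service
[Service]
ExecStart=/usr/bin/b-daemon
```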

[Install] read exclusively on enable/disable
'RequiredBy=,WantedBy=' On enable adds 'Requires=,Wants=' to the listed units by creating symlinks in their '{.requires,.wants}/' directories
'Also=' Additional units to install/uninstall
[Service] used by .service units
'Type=simple' Expects 'ExecStart=' is the main process and doesn't exit
'Type=forking' Expects process in 'ExecStart=' forks
'Type=oneshot' Same as 'simple' but expects 'ExecStart=' to exit

'ExecStart=cmd' Commands and arguments executed when this service is started.
Multiple 'ExecStart=' lines are allowed only with 'Type=oneshot'
Use '${var}' for environment variables
See '%i' for instance name, see 'man systemd.unit' for all specifiers
Prefix with '-' to ignore a non-zero exit code (otherwise treated as an error)
'ExecStartPre=, ExecStartPost=' Extra commands executed before/after 'ExecStart='
'ExecReload=' Command executed on reload. Supports multiple 'ExecReload='
Reload command should wait for completion, so '/bin/kill -HUP $MAINPID' isn't recommended
'ExecStop=' Command executed on stop. If unset, remaining processes are killed per 'systemd.kill' settings
'ExecStopPost=' Command executed after stop

'Restart=no,always' Never/always restart on process exit, kill/signal or timeout; delay set by 'RestartSec='
'Restart=on-success' Restart on clean exit code or clean signal, see 'SuccessExitStatus='
'Restart=on-failure' Restart on unclean exit code, unclean signals or timeout
'Restart=on-abnormal' Restart on unclean signal or timeout
'Restart=on-abort' Restart on unclean signal only
'RestartSec=' Delay before restart; 'TimeoutStartSec=,TimeoutStopSec=' Time to wait for start/stop before giving up
'SuccessExitStatus=' Defaults to '0 SIGHUP SIGINT SIGTERM SIGPIPE'
'StartLimitInterval=, StartLimitBurst=' Start rate limiting, defaults to 5 times within 10 secs

$ systemctl cat sshd
# /usr/lib64/systemd/system/sshd.service
[Unit]
Description=OpenSSH server daemon
After=syslog.target network.target auditd.service
[Service]
ExecStartPre=/usr/bin/ssh-keygen -A
ExecStart=/usr/sbin/sshd -D -e
ExecReload=/bin/kill -HUP $MAINPID
[Install]
WantedBy=multi-user.target

from systemd for Administrators, Part III and SystemdForUpstartUsers@ubuntu

  • systemd.target/systemd.special target units do not offer any additional functionality on top of the generic functionality provided by units. They exist merely to group units via dependencies (useful as boot targets), and to establish standardized names for synchronization points used in dependencies between units (e.g.: After=network.target and WantedBy=multi-user.target).
# change runlevels/targets; halt/poweroff=0, rescue=1, multi-user/default=2,3,4, graphical=5, reboot=6, emergency
$ systemctl default,halt,poweroff,rescue,reboot,emergency
# starts given unit and stops all others, used in .target units, similar to changing runlevel
$ systemctl isolate .target
# get/set default target
$ systemctl get-default,set-default .target

# suspend to RAM; hibernate to disk swap; hybrid-sleep does both, so the system resumes from disk if the battery depletes while suspended
$ systemctl suspend,hibernate,hybrid-sleep
[Service,Socket,Mount,Swap]
'WorkingDirectory=,RootDirectory=' Working and root directory for executed processes
'User=,Group=' User/group the processes are executed as
'Environment=,EnvironmentFile=' Environment variables, e.g. Environment="VAR1=word1 word2" VAR2=word3, see 'systemctl show-environment'

'StandardInput=' null (default), tty from 'TTYPath=' or socket
'StandardOutput=,StandardError=' inherit, null, tty, journal, syslog, kmsg, journal+console, syslog+console, kmsg+console or socket
'TTYPath=' Terminal device, defaults to '/dev/console'
'SyslogIdentifier=' Prefix, defaults to process name
'SyslogFacility=' Syslog facility, see syslog(3), defaults to 'daemon'

'Nice=,CPUAffinity=' Nice level (scheduling priority) and CPU affinity
'LimitXXX=' Sets soft and hard resource limits, see setrlimit(2)
'CapabilityBoundingSet=' Controls the capability bounding set, see capabilities(7), e.g.: CAP_SYS_PTRACE

'PrivateNetwork=' Processes can access only loopback devices
'PrivateTmp=' '/tmp,/var/tmp' are private and isolated from the host system's
'ProtectSystem=' If 'true' mounts '/usr' read-only, if 'full' also mounts '/etc' read-only
'ReadWriteDirectories=,ReadOnlyDirectories=,InaccessibleDirectories=' Limit access to the file system hierarchy

from systemd for Administrators, Part XII
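As a sketch, several of the sandboxing directives above can be combined in one unit; 'mydaemon' is a hypothetical binary:

```ini
# hypothetical hardened service using the directives above
[Service]
ExecStart=/usr/bin/mydaemon
PrivateTmp=yes
PrivateNetwork=yes
ProtectSystem=full
InaccessibleDirectories=/home
```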

[Service,Socket,Mount,Swap,Slice,Scope]
'CPUShares=weight,StartupCPUShares=weight' Assign CPU time share weight
'CPUQuota=%' Assign CPU quota, see scheduler/sched-design-CFS.txt
'MemoryLimit=bytes' Limits max memory usage 'K,M,G,T', see cgroups/memory.txt
'BlockIOWeight=weight,StartupBlockIOWeight=weight' Default overall block IO weight (10-1000), see cgroups/blkio-controller.txt
'BlockIODeviceWeight=device weight' Per-device overall block IO weight
'BlockIOReadBandwidth=device bytes/sec, BlockIOWriteBandwidth=device bytes/sec' Per-device overall block IO bandwidth limit
'DeviceAllow=dev r|w|m' Control access to specific device nodes, see cgroups/devices.txt
'Slice=' Name of the slice unit to place the unit in.

$ systemctl set-property httpd.service CPUShares=500 MemoryLimit=500M
# or using slices
$ cat /etc/systemd/system/limits.slice
[Unit]
Description=Limited resources Slice
DefaultDependencies=no
Before=slices.target
[Slice]
CPUShares=512
MemoryLimit=2G
$ cat /etc/systemd/system/httpd.service.d/limits.conf
[Service]
Slice=limits.slice

from systemd for Administrators, Part XVIII

  • systemd.socket bind service activation to incoming socket connection, for socket-based activation. A service capable of socket activation must be able to receive its preinitialized sockets from systemd, instead of creating them internally. For most services this requires (minimal) patching.
#include "sd-daemon.h"
...
int fd, n;
n = sd_listen_fds(0); /* returns how many file descriptors are passed */
if (n > 1) {
  fprintf(stderr, "Too many file descriptors received.\n");
  exit(1);
} else if (n == 1) {
  fd = SD_LISTEN_FDS_START + 0;
} else {
  /* not socket activated, continue as before */
}

from Socket Activation

# for each .socket file, a matching .service file must exist
[Socket] # used by .socket
'ListenStream=,ListenDatagram=,ListenSequentialPacket=' Address to listen on, '[ip:]port'
'Service=' Overrides the service unit to activate on incoming traffic, defaults to the .service with the same name
'ExecStartPre=, ExecStartPost=, ExecStopPre=, ExecStopPost=' Commands executed before/after listening starts and stops

$ systemctl cat sshd.socket
# /usr/lib64/systemd/system/sshd.socket
[Unit]
Description=OpenSSH Server Socket
Conflicts=sshd.service
[Socket]
ListenStream=22
Accept=yes
[Install]
WantedBy=sockets.target

from DaemonSocketActivation and systemd-crontab-generator/systemd-cron

# for each .timer file, a matching .service file must exist
[Timer] # used by .timer
'OnActiveSec=,OnBootSec=,OnStartupSec=,OnUnitActiveSec=,OnUnitInactiveSec=' Monotonic timers relative to different starting points
'OnCalendar=' Realtime/wallclock timers e.g.: 'Thu,Fri 2012-*-1,5 11:12:13'
'AccuracySec=' Accuracy the timer shall elapse with, used to distribute wake-up
'Unit=' Overrides unit activated when timer elapses
'Persistent=' Activate the service immediately if a start time was missed while the timer was inactive

$ cat /etc/systemd/system/foo.timer
[Unit]
Description=Run foo weekly (realtime/wallclock)
[Timer]
OnCalendar=weekly
# Persistent=true starts immediately if it missed the last start time
Persistent=true
[Install]
WantedBy=timers.target

$ cat /etc/systemd/system/foo.timer
[Unit]
Description=Run foo weekly and 15mins after boot
[Timer]
OnBootSec=15min
OnUnitActiveSec=1w
[Install]
WantedBy=timers.target

$ systemctl list-timers

from systemd/timers@arch

  • systemd.path bind service activation to file system changes, uses inotify(7).
# for each .path file, a matching .service file must exist
[Path] # used by .path
'PathExists=,PathExistsGlob=,PathChanged=,PathModified=,DirectoryNotEmpty=' Paths to monitor for certain changes, or existence
'Unit=' Overrides unit activated when any of the configured paths changes
'MakeDirectory=,DirectoryMode=' Create directories to watch before watching

$ cat /etc/systemd/system/foo.path
[Path]
PathExistsGlob=/var/crash/*.crash
Unit=foo.service
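A matching 'foo.service' could be a oneshot that processes whatever appeared; the command path here is hypothetical:

```ini
# /etc/systemd/system/foo.service (hypothetical example)
[Unit]
Description=Process new crash files
[Service]
Type=oneshot
ExecStart=/usr/local/bin/process-crashes /var/crash
```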
  • systemd can also manage services under the user's control with a per-user systemd instance. On a user's first login, systemd automatically launches a 'systemd --user' instance responsible for managing that user's services. User units should be placed in {~/.config,/etc,/usr/lib}/systemd/user/.
# check the per-user instance is running
$ systemctl --user status

## example
$ cat $HOME/.config/systemd/user/mpd.service
[Unit]
Description=Music Player Daemon
[Service]
ExecStart=/usr/bin/mpd --no-daemon
[Install]
WantedBy=default.target

from systemd/user@arch

  • systemd-cgls/systemd-cgtop show control group contents (processes) and their resource usage. Control groups are a way to hierarchically group and label processes, and to apply resource limits to those groups.
# show which service owns which processes, same as 'systemd-cgls'
$ ps xawf -eo pid,user,cgroup,args

$ systemd-cgtop

from systemd for Administrators, Part II

# list the started unit files, sorted by time each of them took to start up
$ systemd-analyze blame
# show which units are in the critical points in the startup chain
$ systemd-analyze critical-chain

# same as bootchart
$ systemd-analyze plot > plot.svg
# more detailed version of 'systemd-analyze plot', add this to the kernel command line
initcall_debug printk.time=y init=/usr/lib/systemd/systemd-bootchart

from improve boot performance@arch and systemd for Administrators, Part VII

Journal (logging)

systemd has its own logging system called the journal; therefore, running a syslog daemon is no longer required. It captures Syslog messages, Kernel log messages, initial RAM disk and early boot messages as well as messages written to STDOUT/STDERR of all services, indexes them and makes this available to the user. It can be used in parallel, or in place of a traditional syslog daemon, such as rsyslog or syslog-ng.

journalctl is used to query the contents of the systemd journal written by systemd-journald.service (running /usr/lib/systemd/systemd-journald). systemd-journal-gatewayd.service (running /usr/lib/systemd/systemd-journal-gatewayd) is an HTTP server for journal events.

## configuration
# 'journalctl' controls 'systemd-journald.service' configured in '/etc/systemd/journald.conf'
$ man journald.conf
'Storage=volatile' Stored in memory only, below '/run/log/journal'
'Storage=persistent' Stored on disk under '/var/log/journal', falling back to memory
'Storage=auto' Same as 'persistent' but '/var/log/journal' isn't created if missing; it's used only if it already exists
'Storage=none' Turns off all storage
'Compress=' Enable compression before written to disk
'RateLimitInterval=, RateLimitBurst=' Defaults to 1000 messages in 30s
'SystemMaxUse=%, SystemKeepFree=%, SystemMaxFileSize=bytes' Enforce size limits on the journal files stored
'RuntimeMaxUse=, RuntimeKeepFree=, RuntimeMaxFileSize=' Same as above but for volatile/runtime storage
'MaxFileSec=,MaxRetentionSec=' Max time to store entries before rotating/deleting, time-based
'SyncIntervalSec=' Timeout before syncing to disk, defaults to 5min; CRIT, ALERT, EMERG are always synced immediately

'ForwardToSyslog=, ForwardToKMsg=, ForwardToConsole=, ForwardToWall=' Forward messages to a traditional syslog daemon, the kernel log buffer, the console, or logged-in users
'MaxLevelStore= MaxLevelSyslog=,MaxLevelKMsg=,MaxLevelConsole=,MaxLevelWall=' Max log level of stored messages
'TTYPath=' Console tty to use if ForwardToConsole=yes, defaults to '/dev/console'

## usage
journalctl [OPTIONS...] [MATCHES...]
where MATCHES is FIELD=VALUE, see man systemd.journal-fields

# show the journal, starting with the oldest
$ journalctl
# to see all logs add user to 'adm' group
$ usermod -a -G adm user-name
# the journal is stored in a binary format, but the content of messages isn't modified
$ strings /var/log/journal/*/system.journal | grep -i message

# '-f,--follow' live view, same as 'tail -f'
$ journalctl -f
# '-b,--boot [id]' show messages of a specific boot; without id, the current boot
$ journalctl -b
# '-p,--priority' filter by priority 0/emerg...7/debug
$ journalctl -b -p err
# '--since,--until' filter by date
$ journalctl --since=yesterday
$ journalctl --since=2012-10-15 --until="2012-10-16 23:59:59"
# '-u,--unit' filter by unit
$ journalctl -u httpd --since=00:00 --until=9:30
# filter by field match
$ journalctl /usr/sbin/vpnc /usr/sbin/dhclient
# '-o,--output' output format 'short,verbose,export,json,cat'
# '-n, --lines=' show last n lines only
$ journalctl -o verbose -n
# '-F,--field' show all possible field values
$ journalctl -F _SYSTEMD_UNIT

# retrieve events from this boot from local journal in Journal Export Format
$ systemctl enable systemd-journal-gatewayd.socket
$ curl --silent -H'Accept: application/vnd.fdo.journal' 'http://localhost:19531/entries?boot'

from systemd for Administrators, Part XVII

Systemd journal (216) can be configured to forward events to a remote server. Entries are forwarded including full metadata, and are stored in normal journal files, identically to locally generated logs.

Two new daemons are added as part of the systemd package: 1) systemd-journal-remote accepts messages in the Journal Export Format and stores them locally, and 2) systemd-journal-upload is the journal client which exports journal messages and uploads them over the network.

# copy local journal events to a different journal directory
$ journalctl -o export | systemd-journal-remote -o /tmp/dir -

# retrieve events from a remote 'systemd-journal-gatewayd' instance and store in '/var/log/journal/some.host/remote-some.host.journal'
$ systemd-journal-remote --url http://some.host:19531/

Use coredumpctl to retrieve coredumps from the journal. systemd-coredump is used to store core dumps (generated when a user program receives a fatal signal) in the journal or as an external file under /var/lib/systemd/coredump.

## configuration (usually done by distros)
# kernel configured to call 'systemd-coredump'
$ cat /usr/lib/sysctl.d/50-coredump.conf
kernel.core_pattern=|/usr/lib/systemd/systemd-coredump %p %u %g %s %t %e
$ cat /proc/sys/kernel/core_pattern
|/usr/lib/systemd/systemd-coredump %p %u %g %s %t %e

# create drop-ins '{/etc,/run,/usr/lib}/systemd/coredump.conf.d/*.conf'; main file is '/etc/systemd/coredump.conf'
$ cat /etc/systemd/coredump.conf
'Storage=' Where to store 'none,external,journal,both'
'ExternalSizeMax=,JournalSizeMax=,ProcessSizeMax=' Max size of core to save
'MaxUse=%, KeepFree=%' Limits disk usage of external storage

## usage
coredumpctl [OPTIONS...] {COMMAND} [PID|COMM|EXE|MATCH...]

# list cores in journal
$ coredumpctl list
# filtered by PID or program name
$ coredumpctl list foo
# show core info
$ coredumpctl info 6654
# extract/dump core
$ coredumpctl -o bar.coredump dump /usr/bin/bar

from coredump@arch

The journal collects all data logged via syslog, kernel messages logged via printk, as well as the stdout/stderr of any service. It also has a native logging API, sd_journal_print, with bindings for other languages (erlang, go, python, ruby, …)

## using syslog (it basically writes to /dev/log)
$ cat test-journal-submit.c
#include <syslog.h>
syslog(LOG_NOTICE, "Hello World");
$ journalctl -o json-pretty
    "PRIORITY" : "5",
    "_PID" : "3068",
    "MESSAGE" : "Hello World!",
    "_SOURCE_REALTIME_TIMESTAMP" : "1351126905014938"

## using printf("<PRIORITY>MSG")
#include <stdio.h>
#define PREFIX_NOTICE "<5>"
printf(PREFIX_NOTICE "Hello World\n");

## native sd_journal_print/sd_journal_send
#include <systemd/sd-journal.h>
sd_journal_print(LOG_NOTICE, "Hello World");
sd_journal_send("MESSAGE=Hello World!",
  "MESSAGE_ID=52fb62f99e2c49d89cfbf9d6de5e3555", "PRIORITY=5", "XPTO=XXX",
  NULL);

from systemd for Developers III

Configuration Files

To unify configuration across distributions, systemd introduces new configuration files as the primary source of configuration, with per-distro configuration only as a fallback. See systemd for Administrators, Part VIII

  • binfmt.d registration of additional binary formats for systems like Java, Mono and WINE. At boot, systemd-binfmt.service reads configuration files from the above directories to register in the kernel additional binary formats for executables.
# see binfmt_misc.txt
:name:type:offset:magic:mask:interpreter:flags

# start WINE on Windows executables
$ cat /etc/binfmt.d/wine.conf
:DOSWin:M::MZ::/usr/bin/wine:
  • hostnamectl used to query and change the system hostname. It's the client for systemd-hostnamed.service. It distinguishes three different hostnames: the high-level “pretty” hostname (stored in /etc/machine-info) which might include all kinds of special characters (e.g. “Lennart’s Laptop”), the static hostname (stored in /etc/hostname) which is used to initialize the kernel hostname at boot (e.g. “lennarts-laptop”), and the transient hostname which is a default received from network configuration (not used if a valid static hostname is defined).
# show current settings
$ hostnamectl [status]

# set hostname
$ hostnamectl set-hostname [--static, --transient, --pretty]
$ cat /etc/{hostname,machine-info}
  • localectl used to query and change the system locale and keyboard layout settings. It's a client for systemd-localed.service, which uses /etc/locale.conf, and for systemd-vconsole-setup.service, an early service that uses /etc/vconsole.conf to configure the virtual console (i.e. keyboard mapping and console font).
# show current settings
$ localectl [status] [-H,--host=remote-host]
$ cat /etc/vconsole.conf
KEYMAP=de-latin1
FONT=latarcyrheb-sun16
$ cat /etc/locale.conf
LANG=de_DE.UTF-8
LC_MESSAGES=en_US.UTF-8

# change the system locale
$ localectl set-locale LANG=
# change the virtual console keymap:
$ localectl set-keymap
# set the X11 layout
$ localectl set-x11-keymap
  • timedatectl used to query and change the system clock and its settings. It's the client for systemd-timedated.service.
# show current settings
$ timedatectl [status] [-H,--host=remote-host]

# set system clock
$ timedatectl set-time "2012-10-30 18:17:16"
# set system timezone
$ timedatectl set-timezone "Europe/Lisbon"

# set RTC (real-time, battery-powered clock) to universal time and use to adjust system clock (--adjust-system-clock)
$ timedatectl set-local-rtc 0 [--adjust-system-clock]
  • systemd-timesyncd system service is used to synchronize the local system clock with a remote NTP server.
# start systemd-timesyncd
$ timedatectl set-ntp true
# NTP servers are taken from systemd-networkd's '.network' configuration, appended with those in
$ cat /etc/systemd/timesyncd.conf
[Time]
NTP=0.arch.pool.ntp.org 1.arch.pool.ntp.org
FallbackNTP=0.pool.ntp.org 1.pool.ntp.org

from systemd-timesyncd@arch

  • systemd-tmpfiles creates, deletes, and cleans up volatile and temporary files and directories, based on the configuration file format and location specified in tmpfiles.d.
## configuration
# systemd-tmpfiles manages files (usually under /tmp,/run) defined in '{/etc,/run,/usr/lib}/tmpfiles.d/*.conf'
# create file {/etc,/run,/usr/lib}/tmpfiles.d/.conf
# '/var/{run,lock}' are symlinks to '/run'
Type Path Mode UID GID Age Argument
'Type' 'f/d' create file/directory, 'F/D' create or truncate
'Type' 'L,L+' create symlink, 'C' recursively copy
'Type' 'X/x' exclude from cleanup, 'r/R' remove
'Path' can use '%m' for machine id, '%b' for boot id, '%H' for hostname
'Age' used by 'd,D,x' to decide when to cleanup

## examples
$ cat /etc/tmpfiles.d/samba.conf
D /run/samba 0755 root root
$ cat /etc/tmpfiles.d/abrt.conf
d /var/tmp/abrt 0755 abrt abrt
x /var/tmp/abrt/*

# creates, deletes, and cleans up volatile and temporary files and directories, based on configuration
$ systemd-tmpfiles --create --remove
  • systemd-sysusers uses the files from sysusers.d directory to create system users and groups at package installation or boot time.
## configuration
'Type' 'u' creates user and group, 'g' create group, 'm' add user to group
'ID' UID, GID or '-' to automatic

## examples
$ cat /etc/sysusers.d/mypackage.conf
# Type Name ID GECOS
u httpd 440 "HTTP User"
u authd /usr/bin/authd "Authorization user"
g input - -
m authd input
u root 0 "Superuser" /root
$ systemd-sysusers /etc/sysusers.d/mypackage.conf
  • sysctl.d and modules-load.d configure kernel sysctl parameters and kernel modules to load at boot, applied by systemd-sysctl.service and systemd-modules-load.service.
# set kernel YP domain name
$ cat /etc/sysctl.d/domain-name.conf
kernel.domainname=example.com

# load virtio-net.ko at boot
$ cat /etc/modules-load.d/virtio-net.conf
virtio-net

from sysctl@arch and kernel modules@arch

  • systemd-networkd is a system service that manages networks, virtual network devices and low-level device links, using systemd.network, systemd.netdev and systemd.link files respectively. It detects and configures network devices as they appear, and creates virtual network devices. It can run alongside your usual network management (e.g.: netctl).
## '.network' configuration
[Match]
'Name=,Host=,Virtualization=' Match against a device name, hostname or virtualization only
[Network]
'DHCP=none|v4|v6|both' Enable DHCP
'DNS=' DNS server, multiple allowed
'Domains=' Domains used for DNS resolution
'Bridge=' Bridge name to add the link to
'Address=addr/netmask' Static address, short-hand for [Address]
'Gateway=' Network gateway, short-hand for [Route]
'IPMasquerade=' (219) Packets forwarded from the network interface will appear as coming from the local host

$ cat {/etc,/run,/usr/lib}/systemd/network/.network
[Match]
Name=en*
[Network]
# either dhcp
#DHCP=v4
# or static
Address=10.4.2.111/8
Gateway=10.254.0.2
DNS=10.254.0.121
DNS=10.254.0.122
Domains=mydomain

## '.link' configuration
[Match]
'MACAddress=,Host=,Virtualization=' Match against MAC address, hostname or virtualization only
[Link]
'MACAddressPolicy=' 'persistent' or 'random' MAC address
'NamePolicy=' List of policies used to set the interface name

$ cat /usr/lib/systemd/network/99-default.link
[Link]
NamePolicy=kernel database onboard slot path
MACAddressPolicy=persistent

## '.netdev' configuration
[Match]
'Host=,Virtualization=' Match against hostname or virtualization only
[NetDev]
'Name=' Interface name, required
'Kind=' 'bridge,bond,vlan,macvlan' required

$ cat /etc/systemd/network/.netdev
[NetDev]
Name=br0
Kind=bridge
$ cat /etc/systemd/network/.network
[Match]
Name=eth*
[Network]
Bridge=br0
$ cat /etc/systemd/network/.network
[Match]
Name=br0
[Network]
# either dhcp
#DHCP=v4
# or static
DNS=192.168.1.1
Address=192.168.1.2/24
Gateway=192.168.1.1
$ brctl show

$ systemctl restart systemd-networkd

from systemd-networkd@arch

  • systemd-resolved (216) implements a caching DNS stub resolver and an LLMNR resolver and responder. It uses the DNS servers configured in resolved.conf.d, the per-link static settings in .network files, and the per-link dynamic settings received over DHCP. It also generates '/run/systemd/resolve/resolv.conf' for compatibility (to which '/etc/resolv.conf' may be symlinked).
$ /etc/systemd/resolved.conf.d/mypackage.conf
[Resolve]
DNS=192.168.0.10 192.168.1.3 192.168.0.1
  • systemd.mount units supervise mount points; at boot, mount units are also generated from '/etc/fstab'.
## configuration
$ man fstab
/device /mount-point fs-type mount-options
$ man mount
mount -t fs-type -o mount-options /device /fs-mount-point

[Mount]
'What=' Device node, e.g.: tmpfs
'Where=' Mount point, must match the unit name with '/' replaced by '-'
'Type=' FS type, e.g.: ext4
'Options=' Mount options
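The mapping from 'Where=' to the unit name can be sketched in C ('systemd-escape --path' implements the real rules, which additionally hex-escape characters not allowed in unit names; this simplified version covers plain ASCII paths only):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* simplified sketch of systemd's path escaping: strip leading/trailing
 * slashes and turn the remaining '/' into '-'; "/" itself becomes "-"
 * (the real rules also hex-escape bytes invalid in unit names) */
static void escape_path(const char *path, char *out, size_t outlen)
{
  const char *start = path;
  size_t n;

  while (*start == '/')
    start++;                          /* drop leading slashes */
  n = strlen(start);
  while (n > 0 && start[n - 1] == '/')
    n--;                              /* drop trailing slashes */
  if (n == 0) {                       /* the root directory "/" */
    snprintf(out, outlen, "-");
    return;
  }
  snprintf(out, outlen, "%.*s", (int) n, start);
  for (char *p = out; *p; p++)
    if (*p == '/')
      *p = '-';                       /* '/' becomes '-' */
}
```

So a mount point '/mnt/tmp' is supervised by a unit named 'mnt-tmp.mount'.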

## usage
$ ssh-keygen ; ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.4.2.102

# either '/etc/fstab'
root@10.4.2.102:/tmp /mnt/tmp fuse.sshfs defaults,allow_other,_netdev 0 0
$ mount -a

# or '/etc/systemd/system/mnt-tmp.mount' (file name must match 'Where=')
[Unit]
Description=SSHFS example
[Mount]
What=root@10.4.2.102:/tmp
Where=/mnt/tmp
Type=fuse.sshfs
Options=defaults,allow_other,_netdev
$ systemctl start /mnt/tmp

$ systemctl status /mnt/tmp

from fstab@arch

Filesystem hierarchy

file-hierarchy is a minimal, modernized subset of hier. Use systemd-path to list and query system and user paths.

'/etc' System-specific configuration reserved for the local admin
'/tmp' For small temporary files, use '/var/tmp' for large files, flushed on boot
'/srv' Store general server payload, managed by the admin
'/run' Runtime/volatile "tmpfs" for packages to place runtime data, flushed on boot, always writable
'/usr' Vendor/package-supplied operating system resources, usually read-only
'/var' Persistent and variable data, must be writable, pre-populated with vendor, but reconstructed if necessary
'/dev,proc,/sys' Virtual kernel fs

# Compatibility Symlinks
'/bin,/sbin,/usr/sbin' -> '/usr/bin'
'/lib' -> '/usr/lib'
'/var/run' -> '/run'

Containers

Use systemd-nspawn to spawn a namespace container for debugging, testing and building. It's like chroot on steroids. Use a tool like yum, debootstrap, or pacman to set up an OS directory tree suitable as file system hierarchy for systemd-nspawn containers.

machinectl is used to introspect and control the state of systemd VMs and containers, via systemd-machined.service.

systemd-nspawn [OPTIONS...] [COMMAND [ARGS...] ]
'-D,--directory=' Directory to use as file system root for the container
'--template=' (219) Directory or "btrfs" subvolume to use as template for the container's root directory. Created (at the '-D' directory) if it doesn't exist.
'-x, --ephemeral' (219) Run with a temporary "btrfs" snapshot of its root directory (as configured with --directory=), that is removed immediately when the container terminates
'-i,--image=' Disk image to mount the root directory for the container from. Alternative to '-D'
'-b,--boot' Invoke the init binary instead of a shell or a user-supplied program
'-M, --machine=' Sets the machine name for this container

'--private-network' Disconnect networking of the container from the host, except loopback
'--network-interface=' Assign the specified network interface to the container
'--network-macvlan=' Create a "macvlan" interface of the specified Ethernet network interface and add it to the container. A "macvlan" interface is a virtual interface that adds a second MAC address to an existing physical Ethernet link
'--network-veth' Create a virtual Ethernet link ("veth") between host and container
'--network-bridge=' Adds the host side of the Ethernet link created with --network-veth to the specified bridge
'-p,--port=' (219) If private networking, maps IP port on host onto IP port on container. Used with 'IPMasquerade=yes' in '.network'

'--read-only' Mount the root file system read-only for the container
'--bind=, --bind-ro=' Bind mount a file or directory from the host into the container
'--tmpfs=' Mount a tmpfs file system into the container

'--volatile=yes' Volatile/ephemeral mode. Root mounted as unpopulated "tmpfs" instance, and '/usr' from the OS tree is mounted into it, read-only
'--volatile=state' OS tree is mounted read-only, but '/var' is mounted as "tmpfs" instance into it
'--volatile=no' (default) whole tree is writable
With 'yes' or 'state' all changes are lost on shutdown, so the OS must be able to boot with only '/usr' populated (reconstructing '/var' automatically).

## examples
# boot a minimal Fedora in a container
$ yum -y --releasever=19 --nogpg --installroot=/srv/mycontainer --disablerepo='*' --enablerepo=fedora install systemd passwd yum fedora-release vim-minimal
$ systemd-nspawn -bD /srv/mycontainer

# spawn a shell in a container of a minimal Debian
$ debootstrap --arch=amd64 unstable ~/debian-tree/
$ systemd-nspawn -D ~/debian-tree/

# boot a minimal Arch Linux in a container
$ pacstrap -c -d ~/arch-tree/ base
$ systemd-nspawn -bD ~/arch-tree/

# boot your container at your machine startup
$ ln -s /path/to/MyContainer /var/lib/container/MyContainer
$ systemctl enable systemd-nspawn@MyContainer.service
$ systemctl start systemd-nspawn@MyContainer.service
$ machinectl list

# boot unmodified Fedora cloud images (219; supports dissecting MBR disk images)
$ wget http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.raw.xz
$ unxz Fedora-Cloud-Base-20141203-21.x86_64.raw.xz
$ systemd-nspawn -i Fedora-Cloud-Base-20141203-21.x86_64.raw -b

# spawn a container on a temporary snapshot of your host's root directory, which is removed immediately when the container exits
$ systemd-nspawn -xb -D /

# first time it will create '/var/lib/container/mycontainer' from '/var/lib/container/fedora21' and boot it; on subsequent runs the container tree will already be created
$ systemd-nspawn -b -D /var/lib/container/mycontainer --template=/var/lib/container/fedora21

# socket activated OS containers
$ cat /etc/systemd/system/mycontainer.service
[Unit]
Description=My little container
[Service]
ExecStart=/usr/bin/systemd-nspawn -jbD /srv/mycontainer 3
KillMode=process
$ cat /etc/systemd/system/mycontainer.socket
[Unit]
Description=The SSH socket of my little container
[Socket]
ListenStream=23
# teach SSH inside the container about socket activation
$ cat /etc/systemd/system/sshd.socket
[Unit]
Description=SSH Socket for Per-Connection Servers
[Socket]
ListenStream=23
Accept=yes
$ cat /etc/systemd/system/sshd@.service
[Unit]
Description=SSH Per-Connection Server for %I
[Service]
ExecStart=-/usr/sbin/sshd -i
StandardInput=socket
# start unit automatically when the container boots up
$ ln -s /etc/systemd/system/sshd.socket /etc/systemd/system/sockets.target.wants/

from systemd for Administrators, Part VI, systemd for Administrators, Part XX and systemd-nspawn@arch

systemd-import (219) is used to pull and update container images from the Internet (e.g. Docker images). It converts them into btrfs subvolumes/snapshots and makes them available as simple directory trees in '/var/lib/container/' for booting with 'systemd-nspawn'.

# download 'mattdm/fedora' and make it available as '/var/lib/container/fedora'
$ systemd-import pull-dkr mattdm/fedora
$ systemd-nspawn -M fedora

Factory Reset, Stateless Systems, Reproducible Systems & Verifiable Systems:

  • Factory reset mechanism should flush {/etc,/var} but keep /usr.
  • Stateless system (where a reboot is a factory reset) never stores {/etc,/var} on persistent storage, but always comes up with pristine vendor state.
  • Reproducible systems each get a private {/etc,/var} for local configuration, while /usr is pulled in via bind mounts (in the case of containers)
  • Verifiable systems are stateless systems whose storage is cryptographically verified, with {/etc,/var} either included in the image or unnecessary for boot.
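The factory-reset bullet can be sketched as a script (hypothetical 'factory_reset' helper, demonstrated here on a throwaway tree under mktemp rather than a real root filesystem):

```shell
# sketch of a factory reset: flush local state ({/etc,/var}) but keep vendor /usr
factory_reset() {
  rm -rf "$1/etc" "$1/var"
  mkdir "$1/etc" "$1/var"   # repopulated with pristine defaults on next boot
}

root=$(mktemp -d)           # throwaway tree standing in for the real root fs
mkdir -p "$root/usr/lib" "$root/etc" "$root/var/log"
echo vendor       > "$root/usr/lib/os-release"
echo local-config > "$root/etc/myapp.conf"

factory_reset "$root"
ls -A "$root/etc"                # empty: local configuration is gone
cat "$root/usr/lib/os-release"   # prints 'vendor': /usr untouched
```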

How to build and package Erlang OTP applications (using rebar)

Rebar is an Erlang build tool that makes it easy to compile and test Erlang applications, port drivers and releases. It's a self-contained binary.

Using rebar

Either install from a distribution repo (if available), use the prebuilt binary, or compile from source.

# requirements, see http://www.erlang.org/doc/installation_guide/INSTALL.html
$ sudo apt-get install erlang

# either from repo (not recommended, too old)
$ sudo apt-get install rebar
# or compile from source
...
# or pre-built binary (recommended)
$ wget https://github.com/rebar/rebar/wiki/rebar ; chmod +x rebar

from rebar@github

Use a template to create new application that follows OTP Design Principles.

$ mkdir rebar-helloworld ; cd rebar-helloworld
$ ../rebar create-app appid=app1
==> app1 (create-app)
Writing src/app1.app.src # Application descriptor
Writing src/app1_app.erl # Application callback module
Writing src/app1_sup.erl # Supervisor callback module

Next add a generic server to the application. A gen_server implements the server side of a client-server relation; you fill in a pre-defined set of callback functions in a callback module.

# either create file
$ cat src/app1_srv.erl
-module(app1_srv).
-behaviour(gen_server).
-export([start_link/0, say_hello/0]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).
start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).
init([]) ->
    {ok, []}.
say_hello() ->
    gen_server:call(?MODULE, hello).
%% callbacks
handle_call(hello, _From, State) ->
    io:format("Hello from server!~n", []),
    {reply, ok, State};
handle_call(_Request, _From, State) ->
    Reply = ok,
    {reply, Reply, State}.
handle_cast(_Msg, State) ->
    {noreply, State}.
handle_info(_Info, State) ->
    {noreply, State}.
terminate(_Reason, _State) ->
    ok.
code_change(_OldVsn, State, _Extra) ->
    {ok, State}.

# or use template
$ rebar create template=simplesrv srvid=app1_srv
==> app1 (create)
Writing src/app1_srv.erl

# and add say_hello/0 function
-export([start_link/0, say_hello/0, stop/0]).
%% API Function Definitions
say_hello() ->
    gen_server:call(?MODULE, hello).
stop() ->
    gen_server:cast(?MODULE, stop).
%% callbacks (gen_server Function Definitions)
handle_call(hello, _From, State) ->
    io:format("Hello from srv!~n", []),
    {reply, ok, State};
handle_call(_Request, _From, State) ->
    Reply = ok,
    {reply, Reply, State}.
handle_cast(stop, State) ->
    {stop, normal, State};

We could compile now, but let's first create another application, lib1, and compile both.

# move the sources under 'apps/'
$ mkdir -p apps/{app1,lib1}
$ mv src apps/app1/
$ cd apps/lib1
$ rebar create-app appid=lib1
$ cat src/hello.erl
-module(hello).
-export([hello/0]).
hello() ->
    io:format("Hello for lib!", []).

# compile both
$ cd ../..
$ cat rebar.config
{sub_dirs, ["apps/app1", "apps/lib1"] }.
$ tree
.
├── apps
│   ├── app1
│   │   └── src
│   │       ├── app1_app.erl
│   │       ├── app1.app.src
│   │       ├── app1_srv.erl
│   │       └── app1_sup.erl
│   └── lib1
│       └── src
│           ├── hello.erl
│           ├── lib1_app.erl
│           ├── lib1.app.src
│           └── lib1_sup.erl
└── rebar.config
$ rebar compile
==> app1 (compile)
==> src (compile)
==> lib1 (compile)
Compiled src/hello.erl
==> src (compile)
==> app1 (compile)

# test (in development)
$ erl -pa apps/*/ebin
1> app1_srv:start_link().
{ok,<0.33.0>}
2> app1_srv:say_hello().
Hello from server!
3> app1_srv:stop().
ok
4> hello:hello().
Hello for lib!
ok

How can I call a library function from lib1 in app1?

$ cat apps/app1/src/app1_srv.erl
...
%% callbacks
handle_call(hello, _From, State) ->
    hello:hello(),
    io:format("~nHello from srv!~n", []),
    {reply, ok, State}.
$ cat apps/app1/src/app1_sup.erl
...
init([]) ->
    {ok, {{one_for_one, 1, 60}, [?CHILD(app1_srv, worker)]}}.

# recompile and test
$ rebar clean compile
$ erl -pa apps/*/ebin
1> app1_sup:start_link().
{ok,<0.33.0>}
2> app1_srv:say_hello().
Hello for lib!Hello from server!
ok

To make the application work like a service (start, stop, console, …), create a release: a complete system consisting of these applications and a subset of the Erlang/OTP applications.

$ mkdir rel; cd rel
$ rebar create-node nodeid=app1
==> rel (create-node)
Writing reltool.config
Writing files/erl
Writing files/nodetool
Writing files/app1
Writing files/sys.config
Writing files/vm.args
Writing files/app1.cmd
Writing files/start_erl.cmd
Writing files/install_upgrade.escript
$ cd -

$ cat rel/reltool.config
{lib_dirs, ["../apps"]},

$ cat rebar.config
{sub_dirs, ["apps/app1", "apps/lib1", "rel"] }.

# compile and generate release
$ rebar clean compile generate
# if "ERROR: Unable to generate spec: read file info /usr/lib/erlang/man/man1/XPTO.gz" failed then "sudo rm /usr/lib/erlang/man/man1/XPTO.gz"
# ignore "WARN:  'generate' command does not apply to directory", see https://github.com/rebar/rebar/issues/253
$ ls rel/app1
bin  erts-6.1  etc  lib  log  releases

# test using console
$ ./rel/app1/bin/app1 console
# if "...'cannot load',elf_format,get_files}}" then add to 'reltool.config'
{app, hipe, [{incl_cond, exclude}]},
(mysample@127.0.0.1)1> application:which_applications().
[{app1,[],"1"},
 {sasl,"SASL  CXC 138 11","2.4"},
 {stdlib,"ERTS  CXC 138 10","2.1"},
 {kernel,"ERTS  CXC 138 10","3.0.1"}]
(mysample@127.0.0.1)2> app1_srv:say_hello().
Hello for lib!Hello from server!
ok

# start and attach to get console
$ ./rel/app1/bin/app1 start ; ./rel/app1/bin/app1 attach

# deploy/export
$ tar -C rel -czvf app1.tar.gz app1

from release-handling@github

Rebar makes building Erlang releases easy. One of the advantages of using OTP releases is the ability to perform hot-code upgrades. To do this you need to build an upgrade package that contains the built modules and instructions telling OTP how to upgrade your application.

# generate first release
$ rebar clean compile generate
$ mv rel/app1 rel/app1_1.0

# change 'hello' function
$ cat apps/app1/src/app1_srv.erl
handle_call(hello, _From, State) ->
    hello:hello(),
    {_,{Hour,Min,Sec}} = erlang:localtime(),
    io:format("Hello from server at ~2w:~2..0w:~2..0w!~n", [Hour,Min,Sec]),
    {reply, ok, State};

# bump version and generate new release
$ cat rel/reltool.config
       {rel, "app1", "2",
$ cat apps/app1/src/app1.app.src
  {vsn, "2"},
$ rebar clean compile generate
$ tree rel -d -L 2
rel
├── app1
│   ├── bin
│   ├── erts-6.1
│   ├── lib
│   ├── log
│   └── releases
│       └── 2
├── app1_1.0
│   ├── bin
│   ├── erts-6.1
│   ├── lib
│   ├── log
│   └── releases
│       └── 1
└── files

In order to make an upgrade, you must have a valid .appup file. This tells the erlang release_handler how to upgrade and downgrade between specific versions of your application.

# generate '.appup' upgrade instructions
$ cd rel ; rebar generate-appups previous_release=app1_1.0
==> rel (generate-appups)
Generated appup for app1
Appup generation complete
# see './app1/lib/app1-2/ebin/app1.appup'
% appup generated for app1 by rebar ("2015/01/23 16:03:24")
{"2", [{"1", [{update,app1_srv,{advanced,[]},[]}]}], [{"1", []}]}.

# now create the upgrade package
$ cd rel ; rebar generate-upgrade previous_release=app1_1.0
==> rel (generate-upgrade)
app1_2 upgrade package created
$ ls rel
app1  app1_1.0  app1_2.tar.gz  files  reltool.config

# install upgrade using 'release_handler'
$ mv rel/app1_2.tar.gz rel/app1_1.0/releases
$ ./rel/app1_1.0/bin/app1 console
1> release_handler:which_releases().
[{"app1","1",[],permanent}]
2> release_handler:unpack_release("app1_2").
{ok,"2"}
3> release_handler:install_release("2").
{ok,"1",[]}
4> release_handler:make_permanent("2").
ok
5> app1_srv:say_hello().
Hello for lib!Hello from server at 16:15:24!
ok
6> release_handler:which_releases().
[{"app1","2",[],permanent},{"app1","1",[],old}]

# generating a v3
$ mv rel/app1 rel/app1_2.0
# make code change ... and compile/generate upgrade package
$ rebar clean compile generate
$ cd rel ; rebar generate-appups previous_release=app1_2.0
$ rebar generate-upgrade previous_release=app1_2.0
$ ls
app1  app1_1.0  app1_2.0  app1_3.tar.gz  files  reltool.config

from upgrades@github and rebar tutorial.

Rebar can fetch and build projects including source code from external sources (git, hg, etc.). See erlang-libs.

$ cat rebar.config
{deps, [
    {'erlcloud', ".*", { git, "https://github.com/gleber/erlcloud.git"}},
    {'lager', ".*", { git, "git://github.com/basho/lager.git"} }
]}.
$ rebar update-deps
$ cat src/app1.app.src
    {applications, [
        ..., erlcloud, lager
    ]},

from dependency-management@github

All code is available in erlang-otp-helloworld@github.

Using rebar3

Rebar3 is an experimental branch that tries to solve some of rebar's issues, see announcement.

$ wget https://s3.amazonaws.com/rebar3/rebar3 ; chmod +x rebar3

It comes with templates for creating applications, library applications (with no start/2), releases and plugins. Use the 'new' command to create a project from a template. It accepts lib, app, release or plugin as the first argument and the project name as the second.

# create new application and release
$ rebar3 new release app1
===> Writing app1/apps/app1/src/app1_app.erl
===> Writing app1/apps/app1/src/app1_sup.erl
===> Writing app1/apps/app1/src/app1.app.src
===> Writing app1/rebar.config
===> Writing app1/config/sys.config
===> Writing app1/config/vm.args
===> Writing app1/.gitignore
===> Writing app1/LICENSE
===> Writing app1/README.md

# optionally add dependencies
$ cat rebar.config
{deps, [{cowboy, {git, "git://github.com/ninenines/cowboy.git", {tag, "1.0.1"}}}]}.

# add host to nodename otherwise you get "Can't set long node name" when starting console
$ cat config/vm.args
-name app1@127.0.0.1

# and release (uses relx instead of reltool)
$ rebar3 release
===> Resolved app1-0.1.0
===> Dev mode enabled, release will be symlinked
===> release successfully created!
# if "Missing beam file elf_format ... elf_format.beam" then 'sudo apt-get install erlang-base-hipe'

# test using console
$ ./_build/rel/app1/bin/app1-0.1.0 console
1> app1_srv:say_hello().
Hello from server!
ok

# deploy/export
$ REBAR_PROFILE=prod rebar3 tar
===> tarball .../_build/rel/app1/app1-0.1.0.tar.gz successfully created!

# upgrading
$ cat apps/app1/src/app1.app.src
,{vsn, "0.2.0"}
$ cat rebar.config
{relx, [{release, {'app1', "0.2.0"},
$ cat apps/app1/src/app1_srv.erl
    io:format("Hello from server v2!~n"),
$ mv _build/rel _build/rel_0.1.0
$ REBAR_PROFILE=prod ../rebar3 tar
===> tarball .../_build/rel/app1/app1-0.2.0.tar.gz successfully created!
$ cp _build/rel/app1/app1-0.2.0.tar.gz _build/rel_0.1.0/app1/releases/app1_0.2.0.tar.gz
$ _build/rel_0.1.0/app1/bin/app1 start
$ _build/rel_0.1.0/app1/bin/app1 upgrade 0.2.0
# you will get "noent ... reup" because '.appup' is missing, see https://github.com/rebar/rebar3/issues/57

from basic usage

Using iptables, shorewall, firewalld, ufw and ipset to block, masquerade (SNAT) and port forward (DNAT) in Linux

A firewall is a network security system that controls incoming and outgoing network traffic based on an applied rule set. Common uses are blocking incoming traffic by port, source NAT / masquerading, and destination NAT / port forwarding.

  • iptables@man/iptables@wiki is a user-space program that lets a system administrator configure the tables, chains and rules provided by the netfilter@wiki Linux kernel firewall modules. It will probably be replaced by Nftables@wiki, a VM able to execute bytecode to inspect network packets and make decisions.
## display status
$ iptables -L -n -v

## flush/delete and default policy
'-F' deleting (flushing) all the rules
'-X' delete chain
'-t table_name' select table
'-P' set the default policy (such as DROP, REJECT, or ACCEPT)
$ iptables -F -X
$ iptables -P INPUT ACCEPT ; iptables -P OUTPUT ACCEPT ; iptables -P FORWARD ACCEPT

## delete rules
'-D' delete one or more rules from the selected chain (by line number or source)
$ iptables -D INPUT 4
$ iptables -D INPUT -s 202.54.1.1 -j DROP

## insert rules
# insert rule between 1 and 2
$ iptables -I INPUT 2 -s 202.54.1.2 -j DROP

## save/restore rules 
$ service iptables save
$ iptables-save > /root/my.active.firewall.rules
$ iptables-restore < /root/my.active.firewall.rules
$ service iptables restart

## set default policy
# drop all incoming / forwarded packets, but allow outgoing traffic
$ iptables -P INPUT DROP ; iptables -P FORWARD DROP ; iptables -P OUTPUT ACCEPT
$ iptables -A INPUT -m state --state NEW,ESTABLISHED -j ACCEPT

## drop private network address on public interface
$ iptables -A INPUT -i eth1 -s 192.168.0.0/24 -j DROP
$ iptables -A INPUT -i eth1 -s 10.0.0.0/8 -j DROP

## block by source IP, destination PORT, destination IP, source MAC address or protocol (ICMP)
$ iptables -A INPUT -s 1.2.3.4 -j DROP
$ iptables -A INPUT -p tcp --dport 80 -j DROP
$ iptables -A OUTPUT -d 75.126.153.206 -j DROP
$ iptables -A INPUT -m mac --mac-source 00:0F:EA:91:04:08 -j DROP
$ iptables -A INPUT -p icmp --icmp-type echo-request -j DROP

## open destination PORT ranges or IP source ranges
$ iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 7000:7010 -j ACCEPT
$ iptables -A INPUT -p tcp --destination-port 80 -m iprange --src-range 192.168.1.100-192.168.1.200 -j ACCEPT
# SNAT example
$ iptables -t nat -A POSTROUTING -j SNAT --to-source 192.168.1.20-192.168.1.25

## allow/block common ports (replace ACCEPT by DROP)
# ssh=tcp/22, http=tcp/80, ntp=udp/123
$ iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
$ iptables -A INPUT -m state --state NEW -p tcp --dport 80 -j ACCEPT
$ iptables -A INPUT -m state --state NEW -p udp --dport 123 -j ACCEPT

# restrict parallel connections per source IP
$ iptables -A INPUT -p tcp --syn --dport 22 -m connlimit --connlimit-above 3 -j REJECT
$ iptables -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 --connlimit-mask 24 -j DROP

## IP/PORT REDIRECT: alters the destination IP/PORT address to send to the machine itself
# redirect incomming tcp/25 to 2525
$ iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 25 -j REDIRECT --to-port 2525

## DNAT
# forward incoming $DNS_IP:53 to $DMZ_DNS_IP
$ iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 53 -d $DNS_IP -j DNAT --to-destination $DMZ_DNS_IP

## IP masquerading (SNAT)
# source IP for each stream will be allocated randomly from these 'to-source' range
$ iptables -t nat -A POSTROUTING -p tcp -o eth0 -j SNAT --to-source 194.236.50.155-194.236.50.160:1024-32000

## log
$ iptables -A INPUT -i eth1 -s 10.0.0.0/8 -j LOG --log-prefix "IP_SPOOF A: "
$ grep --color 'IP SPOOF' /var/log/messages
# same but limit log entries
$ iptables -A INPUT -i eth1 -s 10.0.0.0/8 -j LOG --log-prefix "IP_SPOOF A: " -m limit --limit 5/m --limit-burst 7

## debug using ipt_LOG
$ modprobe ipt_LOG
$ iptables -t raw -A OUTPUT -p tcp -j TRACE
$ iptables -t raw -A PREROUTING -p tcp -j TRACE
$ grep DPT=9081 /var/log/kernel.log
$ iptables -t raw -D OUTPUT -p tcp -j TRACE
$ iptables -t raw -D PREROUTING -p tcp -j TRACE

## display natted connections using netstat-nat
$ sudo apt-get install netstat-nat
# display NAT connections with protocol
$ netstat-nat -np
# display SNAT/DNAT connections
$ netstat-nat -S ; netstat-nat -D

from iptables examples, port redirect and netstat-nat. See also iptables HowTo and iptables tutorial

  • shorewall (Shoreline Firewall) is a high-level configuration tool for netfilter: you describe your firewall/gateway in configuration files and shorewall compiles them into iptables rules.
## install
$ sudo apt-get install shorewall | sudo yum install shorewall (EPEL)
# note: you need 2 NICs, see two-interfaces
$ rpm -ql shorewall | fgrep two-interfaces
/usr/share/doc/shorewall-4.6.1.2/Samples/two-interfaces

## define zones, interfaces and policy
$ cat /etc/shorewall/zones /etc/shorewall/interfaces /etc/shorewall/policy
...
#ZONE   TYPE    OPTIONS                 IN                      OUT
#                                       OPTIONS                 OPTIONS
fw      firewall
net     ipv4
loc     ipv4
...
#ZONE   INTERFACE       OPTIONS
net     eth0            dhcp,tcpflags,nosmurfs,routefilter,logmartians,sourceroute=0
loc     eth1            tcpflags,nosmurfs,routefilter,logmartians
...
#SOURCE         DEST            POLICY          LOG LEVEL       LIMIT:BURST
loc             net             ACCEPT
net             all             DROP            info
# THE FOLLOWING POLICY MUST BE LAST
all             all             REJECT          info

## define rules
$ cat /etc/shorewall/rules
...
# Accept SSH connections from the internet to FW and from FW to local
SSH(ACCEPT)     net             $FW
SSH(ACCEPT)     $FW             loc
# Allow Ping from the local network to firewall
Ping(ACCEPT)    loc             $FW
# Allows DNS,Web,Ping access from your firewall to internet
DNS(ACCEPT)     $FW             net
Web(ACCEPT)     $FW             net
Ping(ACCEPT)    $FW             net
# Forward HTTP from net:9081,80 to local network
DNAT            net             loc:192.168.0.1:8081   tcp     9081
DNAT            net             loc:192.168.0.1:80     tcp     9080
# Forward RDP from net:9389 to local network
DNAT            net             loc:192.168.0.1:3389   tcp     9389

## check for errors
$ cat /etc/shorewall/shorewall.conf
STARTUP_ENABLED=Yes
$ shorewall check
ok

## service start/restart
$ iptables-save > /root/old.firewall.config
$ systemctl stop iptables
$ systemctl enable shorewall
$ systemctl start shorewall

## re-apply rule changes
$ shorewall restart

## list rules, see http://shorewall.net/manpages/shorewall.html
$ shorewall show
$ shorewall show nat 
$ iptables -L -v -t nat
# show connections being firewalled
$ shorewall show connections
# show logs
$ shorewall show hits

## show macros, see http://shorewall.net/Macros.html
$ shorewall show macros
$ shorewall show macro Web

from Shorewall@centos and Shorewall@ubuntu

  • firewalld is the new userland firewall interface in RHEL 7. It's basically an easy way to write iptables rules.
## install
$ yum install firewalld
$ systemctl status firewalld
# activate port forwarding
$ cat /etc/sysctl.conf
net.ipv4.ip_forward=1
$ sysctl -p

## zone management
$ firewall-cmd --get-zones
# change the default zone permanently 
$ firewall-cmd --set-default-zone=home
$ firewall-cmd --get-default-zone
# assign eth0 temporary to the internal zone
$ firewall-cmd --zone=internal --change-interface=eth0
$ firewall-cmd --get-zone-of-interface=eth0
$ firewall-cmd --zone=public --list-all

## source management: a zone can be bound to a network interface and/or to a network address range, called a source
# add source '192.168.2.0/24' to a zone permanently
$ firewall-cmd --permanent --zone=trusted --add-source=192.168.2.0/24
$ firewall-cmd --permanent --zone=trusted --list-sources
$ firewall-cmd --get-active-zones

## service management: add services to each zone
# allow the http service permanently in the internal zone
$ firewall-cmd --permanent --zone=internal --add-service=http # or '--remove-service=http' to deny the http service
$ firewall-cmd --reload
$ firewall-cmd --list-services --zone=internal

## service firewall configuration: define new services, other then '/usr/lib/firewalld/services'
$ cat /etc/firewalld/services/haproxy.xml
<?xml version="1.0" encoding="utf-8"?>
<service>
 <short>HAProxy</short>
 <description>HAProxy load-balancer</description>
 <port protocol="tcp" port="80"/>
</service>
# assign the correct SELinux context and file permissions
$ cd /etc/firewalld/services ; restorecon haproxy.xml; chmod 640 haproxy.xml

## port management (same as service management)
$ firewall-cmd --zone=internal --add-port=443/tcp # or '--remove-port=443/tcp' to deny the port
$ firewall-cmd --reload
$ firewall-cmd --zone=internal --list-ports

## masquerading: configure masquerading on the external zone to hide internal addresses
$ firewall-cmd --zone=external --add-masquerade # or '--remove-masquerade', or '--query-masquerade'

## port forwarding
$ firewall-cmd --zone=external --add-forward-port=port=22:proto=tcp:toport=3753 # or '--remove-forward-port', or '--query-forward-port'
# same but defines the destination ip address
$ firewall-cmd --zone=external --add-forward-port=port=22:proto=tcp:toport=3753:toaddr=10.0.0.1

## direct rules: bypass firewalld
$ firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp --dport 9000 -j ACCEPT
$ firewall-cmd --reload
$ firewall-cmd --direct --get-all-rules

from what is firewalld and firewalld@rhel

  • ufw/ufw@wiki (Uncomplicated Firewall) is a firewall designed to be easy to use, best for desktop usage. It's Ubuntu's alternative to firewalld.
## install
$ sudo apt-get install ufw
$ sudo ufw status|enable|disable|show

## configure
# allow access
$ sudo ufw allow ssh | sudo ufw allow ssh/tcp
# deny access
$ sudo ufw deny ftp
# add specific source IP ranges
$ sudo ufw allow from 192.168.0.104/24
# add specific source and destination port ranges
$ sudo ufw allow 2290:2300/tcp
$ sudo ufw allow to any port 22
# combine parameters
$ sudo ufw allow from 192.168.0.104 proto tcp to any port 22
# delete rules (by number)
$ sudo ufw status numbered ; sudo ufw delete 1
# reset/delete all rules
$ sudo ufw reset
# basic: deny by default and allow per-application
$ ufw default deny; ufw allow SSH

## application configuration
$ ufw app list
$ ls /etc/ufw/applications.d
$ cat /etc/ufw/applications.d/Deluge-my
[Deluge-my]
title=Deluge
description=Deluge BitTorrent client
ports=20202:20205/tcp
$ ufw delete allow Deluge ; ufw allow Deluge-my

## rate limiting
# deny connections from an IP address that has attempted to initiate 6 or more connections in the last 30 seconds
$ ufw limit SSH

from How to Install and Configure UFW – An Un-complicated FireWall in Debian/Ubuntu and ufw@arch

  • ipset is a framework to administer sets of IP addresses, a more flexible alternative to CIDR prefixes
## using iptables
$ iptables -A INPUT -s 1.1.2.0/24 -p TCP -j DROP
# the problem: blocking a set of addresses that share no common CIDR prefix
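To make that concrete: the smallest single CIDR block covering both 1.1.1.1 and 1.1.2.0/24 is much wider than the two entries combined. A quick sketch ('ip2int' and 'common_prefix' are hypothetical helpers, not part of ipset):

```shell
ip2int() {  # dotted quad -> 32-bit integer
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

common_prefix() {  # shared prefix length of two IPv4 addresses
  x=$(( $(ip2int "$1") ^ $(ip2int "$2") ))
  len=32
  while [ "$x" -ne 0 ]; do x=$((x >> 1)); len=$((len - 1)); done
  echo "$len"
}

common_prefix 1.1.1.1 1.1.2.255   # -> 22
```

The covering 1.1.0.0/22 spans 1.1.0.0-1.1.3.255 (1024 addresses) while the set wants only 257 of them, so a single '-s' CIDR rule over-blocks; a hash:net ipset matches exactly the listed entries.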

## using ipset
$ sudo apt-get install ipset | sudo yum install ipset

## define set 'banthis', see http://ipset.netfilter.org/features.html
$ ipset create banthis hash:net
$ ipset add banthis 1.1.1.1/32 ; ipset add banthis 1.1.2.0/24
$ ipset list

## use ipsets (to block access to '80')
$ iptables -I INPUT -m set --match-set banthis src -p tcp --destination-port 80 -j DROP

## use public ip block lists, e.g.: iblocklist.com
$ sudo pip install iblocklist2ipset (install the python-pip package first if needed)
# goto iblocklist.com and copy-paste the UPDATE-URL
$ iblocklist2ipset generate --ipset banthis $UPDATE-URL banthis.txt
$ ipset restore -f banthis.txt
$ ipset list banthis

from howto block unwanted ip addresses

How to do static code analysis in C/C++ (using sparse, splint, cpplint and clang)

Static program analysis inspects source code without executing it (as opposed to dynamic analysis). It is generally used to find bugs or to enforce coding guidelines.

  • sparse@wiki/sparse@man is a static analysis tool initially designed to flag only constructs likely to be of interest to kernel developers, such as the mixing of pointers to user and kernel address spaces. cgcc@man is a Perl compiler-wrapper script that runs Sparse after compiling.
## install
$ sudo apt-get install sparse | sudo yum install sparse (EPEL)

## example
$ cat test.c
#include <stdio.h>
int main(void) {
        int *p = 0;
        printf("Hello, World\n");
        return 0;
}

$ cgcc -Wsparse-all -c test.c
(or make CC=cgcc)
test.c:4:18: warning: Using plain integer as NULL pointer

# if you get error: "unable to open 'sys/cdefs.h'" then
$ sudo ln -s /usr/include/x86_64-linux-gnu/sys /usr/include/sys
$ sudo ln -s /usr/include/x86_64-linux-gnu/bits /usr/include/bits
$ sudo ln -s /usr/include/x86_64-linux-gnu/gnu /usr/include/gnu
  • splint/splint@wiki/splint@man statically checks C programs for security vulnerabilities and coding mistakes. Formerly called LCLint, it is a modern version of the Unix lint tool. The project's last update was November 2010.
## install
$ sudo apt-get install splint | sudo yum install splint (EPEL)

## example
$ cat test2.c
#include <stdio.h>
int main()
{
    char c;
    while (c != 'x');
    {
        c = getchar();
        if (c = 'x')
            return 0;
        switch (c) {
        case '\n':
        case '\r':
            printf("Newline\n");

    }
    return 0;
}

$ splint -hints test2.c
test2.c: (in function main)
test2.c:5:12: Variable c used before definition
test2.c:5:12: Suspected infinite loop.  No value used in loop test (c) is modified by test or loop body.
test2.c:7:9: Assignment of int to char: c = getchar()
test2.c:8:13: Test expression for if is assignment expression: c = 'x'
test2.c:8:13: Test expression for if not boolean, type char: c = 'x'
test2.c:18:1: Parse Error. (For help on parse errors, see splint -help parseerrors.)
*** Cannot continue.
  • cpplint is a simple Python script from Google that checks C/C++ source files against the Google C++ Style Guide.
## install
$ wget http://google-styleguide.googlecode.com/svn/trunk/cpplint/cpplint.py
$ chmod +x cpplint.py

## example
$ ./cpplint.py --extensions=c test2.c 
test2.c:0:  No copyright message found.  You should have a line: "Copyright [year] <Copyright Owner>"  [legal/copyright] [5]
test2.c:3:  { should almost always be at the end of the previous line  [whitespace/braces] [4]
test2.c:5:  Empty loop bodies should use {} or continue  [whitespace/empty_loop_body] [5]
test2.c:14:  Line ends in whitespace.  Consider deleting these extra spaces.  [whitespace/end_of_line] [4]
test2.c:14:  Redundant blank line at the end of a code block should be deleted.  [whitespace/blank_line] [3]
Done processing test2.c
Total errors found: 5
  • clang is a C/C++/Objective-C compiler front end for LLVM; 'clang --analyze' and scan-build run the Clang Static Analyzer over individual files or a whole build.
## install
$ sudo aptitude install clang | sudo yum install clang (EPEL)

## example
$ cat test3.c 
void test() {
  int x;
  x = 1; // warn
}

$ clang --analyze test3.c 
test3.c:3:3: warning: Value stored to 'x' is never read
  x = 1; // warn
  ^   ~
1 warning generated.

$ scan-build gcc -c test3.c 
scan-build: Using '/usr/lib/llvm-3.5/bin/clang' for static analysis
test3.c:3:3: warning: Value stored to 'x' is never read
  x = 1; // warn
  ^   ~
1 warning generated.
scan-build: 1 bug found.

How to stress test CPU/Memory/FS in Linux (using stress/stress-ng)

Hardware (CPU/memory/fs) stress testing is intended to make a machine work hard and trip hardware issues such as thermal overruns, as well as operating system bugs that only occur when a system is being thrashed hard. It's not intended for benchmarking.
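What such tools automate can be sketched by hand (illustration only; 'burn' is a hypothetical shell worker, far cruder than stress's sqrt()/malloc() workers):

```shell
# spin a busy shell loop for roughly $1 seconds and report the work done
burn() {
  end=$(( $(date +%s) + $1 ))
  n=0
  while [ "$(date +%s)" -lt "$end" ]; do n=$((n + 1)); done
  echo "$n"
}

burn 1 &   # start one worker per CPU you want to load
burn 1 &
wait       # watch the load rise with top/uptime meanwhile
```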

  • stress@man is a tool to impose load on and stress test systems.
## install
$ sudo apt-get install stress | sudo yum install stress (EPEL)

## using
stress [OPTION [ARG]] ...
'-n,--dry-run' show what would have been done
'-t,--timeout N' timeout after N seconds

'-c,--cpu N' spawn N workers spinning on sqrt()

'-m,--vm N' spawn N workers spinning on malloc()/free()
'--vm-bytes B' malloc B bytes per vm worker (default is 256MB)

'-i,--io N' spawn N workers spinning on sync()
'-d, --hdd N' spawn N workers spinning on write()/unlink()
'--hdd-bytes B' write B bytes per hdd worker (default is 1GB)

## examples
# stress using cpu-bound task
$ stress -c 2
# stress using 2 cpu-bound processes, 1 io-bound process and 1 memory-allocator process
$ stress -c 2 -i 1 -m 1 --vm-bytes 128M -t 10s

# monitor in another terminal with top, uptime or tload

from stress@cyberciti

  • stress-ng is an updated version of the stress tool that adds stressors for CPU compute, cache thrashing, drive stress, I/O syncs, VM stress, socket stressing, context switching, and process creation and termination
## install
# note: ubuntu repo is too old, compile from source instead
$ wget http://kernel.ubuntu.com/~cking/tarballs/stress-ng/stress-ng-0.03.13.tar.gz ; tar zxf stress-ng*.tar.gz ; cd stress-ng* ; make

## usage
stress-ng [OPTION [ARG]] ...
'--metrics-brief' enable metrics and only show non-zero results
'--random N' start N random workers
'--sequential N' run all stressors one by one, invoking N of them

'-c,--cpu N' start N workers spinning on sqrt(rand())
'--cpu-method m' specify stress cpu method m, default is 'all'
'--cpu-ops N' stop when N cpu bogo operations completed

## examples (run as root to avoid stressors being killed)
# run for 60s with 4 cpu-bound, 2 io stressors and 1 vm stressor using 1G of vm
$ stress-ng --cpu 4 --io 2 --vm 1 --vm-bytes 1G --timeout 60s --metrics-brief
# run for 5m stressing all CPUs using fft
$ stress-ng --cpu 0 --cpu-method fft -t 5m --times
# run 4 simultaneous instances of all stressors sequentially, each for 6 mins
$ stress-ng --sequential 4 --timeout 6m --metrics
# run 2 fft cpu stressors, stopping after 5000 bogo operations
$ stress-ng --cpu 2 --cpu-method fft --cpu-ops 5000 --metrics-brief
# stress all cpus using all methods for 2h
$ stress-ng --cpu 0 --cpu-method all -t 2h
# run 128 stressors randomly chosen
$ stress-ng --random 128
# run all stressors one by one for 5mins, one stressor instance for each cpu
$ stress-ng --sequential 0 -t 5m
# run all io stressors one by one for 1m each, with 8 instances concurrently
$ stress-ng --sequential 8 --class io -t 1m --times

from stress@cyberciti

How to create a deb/rpm package for Linux (using rpmdev/mock, dpkg-buildpackage/debuild/pbuilder and fpm)

RPM and DEB are binary software packaging and distribution formats used by RedHat- and Debian-based distributions respectively. Both take the package source from upstream (usually a tarball) plus a set of scripts/rules to build the binary package.

RPM

  • rpmbuild@man builds RPM packages. It takes a spec file with steps/snippets and a build tree with files, and generates the rpm (and optionally srpm) from the source tarball. Use rpmdev-newspec to create a new spec file template, and rpmdev-setuptree/rpmdev-wipetree to create/wipe an RPM build tree.
## install
$ sudo yum groupinstall "Development Tools" (includes gcc, rpmdevtools, mock)

## using
rpmbuild -bSTAGE|-tSTAGE [ rpmbuild-options ] FILE ...
'-bSTAGE,-tSTAGE' build up to a stage; '-b' uses the spec file given, whereas '-t' uses the spec inside the tarball
'-ba' build binary and source packages (after doing the %prep, %build, and %install stages)
'-bb' build a binary package (after doing the %prep, %build, and %install stages)
'-bp' executes the "%prep" stage from the spec file
'-bc' do the "%build" stage from the spec file (after doing the %prep stage)
'-bi' do the "%install" stage from the spec file (after doing the %prep and %build stages)
'-bl' do a "list check", the "%files" section from the spec file is macro expanded
'-bs' build just the source package
'--showrc' show the rpm macros used in spec files

## build
# run as a regular user, not root
$ rpmdev-setuptree
$ cd ~/rpmbuild/SOURCES
$ wget http://ftp.gnu.org/gnu/hello/hello-2.8.tar.gz
$ cd ~/rpmbuild/SPECS
$ rpmdev-newspec hello

# since it uses i18n (translation) files, add 'gettext' as a build requirement, use '%find_lang %{name}' to find the files in '%install' and include them via '%files -f %{name}.lang'
# since it also uses info files, use 'install-info' in '%post/%preun' to install/uninstall them from the system
$ cat hello.spec
Name:           hello
Version:        2.8
Release:        1%{?dist}
Summary:        The "Hello World" program from GNU
License:        GPLv3+
URL:            http://ftp.gnu.org/gnu/%{name}
Source0:        http://ftp.gnu.org/gnu/%{name}/%{name}-%{version}.tar.gz
BuildRequires: gettext
Requires(post): info
Requires(preun): info
%description 
The "Hello World" program, done with all bells and whistles of a proper FOSS project, 
including configuration, build, internationalization, help files, etc.
%prep
%setup -q
%build
%configure
make %{?_smp_mflags}
%install
%make_install
%find_lang %{name}
rm -f %{buildroot}/%{_infodir}/dir
%post
/sbin/install-info %{_infodir}/%{name}.info %{_infodir}/dir || :
%preun
if [ $1 = 0 ] ; then
/sbin/install-info --delete %{_infodir}/%{name}.info %{_infodir}/dir || :
fi
%files -f %{name}.lang
%doc AUTHORS ChangeLog COPYING NEWS README THANKS TODO
%{_mandir}/man1/hello.1.gz
%{_infodir}/%{name}.info.gz
%{_bindir}/hello
%changelog
* Tue Sep 06 2011 The Coon of Ty  2.8-1
- Initial version of the package

$ rpmbuild -ba hello.spec
...
~/rpmbuild/SRPMS/hello-2.8-1.el7.centos.src.rpm

## lint (check conformance)
$ rpmlint hello.spec ../SRPMS/hello* ../RPMS/*/hello*
...
$ rpmlint -I description-line-too-long 
Your description lines must not exceed 80 characters. If a line is exceeding this number, cut it to fit in two lines.

from How to create a GNU Hello RPM package
see also Packaging Guidelines
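The build tree rpmdev-setuptree lays out is just five directories under ~/rpmbuild; a side-effect-free sketch of the same layout (built in a temp dir rather than $HOME):

```shell
#!/bin/sh
# the directory layout rpmdev-setuptree creates under ~/rpmbuild
# (sketch; built in a temp dir so $HOME is untouched)
TOP=$(mktemp -d)
for d in BUILD RPMS SOURCES SPECS SRPMS; do
    mkdir -p "$TOP/rpmbuild/$d"
done
LAYOUT=$(ls "$TOP/rpmbuild" | xargs)
echo "$LAYOUT"
rm -rf "$TOP"
```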

  • mock takes an srpm and creates a chroot build environment, ensuring that the build requirements are correct. You can also build against a different target environment.
$ sudo usermod -a -G mock myusername
$ newgrp mock

# build against the given env chrooted in /var/lib/mock/epel-6-x86_64/root
$ export MOCK_CONFIG=/etc/mock/epel-6-x86_64.cfg
$ /usr/bin/mock -r $MOCK_CONFIG --verbose --rebuild ../SRPMS/*.src.rpm
$ ls /var/lib/mock/epel-6-x86_64/result
build.log                hello-2.8-1.el6.x86_64.rpm            root.log
hello-2.8-1.el6.src.rpm  hello-debuginfo-2.8-1.el6.x86_64.rpm  state.log
$ /usr/bin/mock -r $MOCK_CONFIG --clean

# if your build requires packages not in the repos then
$ /usr/bin/mock -r $MOCK_CONFIG --init
$ /usr/bin/mock -r $MOCK_CONFIG --install PACKAGE_NAME_OR_PATH_TO_RPM
$ /usr/bin/mock -r $MOCK_CONFIG --no-clean /PATH/TO/SRPM
$ /usr/bin/mock -r $MOCK_CONFIG --copyin /PATH/TO/SRPM /tmp
$ mock -r $MOCK_CONFIG --shell
$ cd ; rpmbuild --rebuild /tmp/SRPM_NAME

## speedup
$ cat /etc/mock/site-defaults.cfg
config_opts['macros']['%_smp_mflags'] = "-j17"

from Using Mock to test package builds

DEB

## install
$ sudo apt-get install dh-make debhelper devscripts fakeroot

## usage
dpkg-buildpackage [...]
'-F' (default) builds binaries and sources
'-g,-G' source and arch-indep/specific build
'-b,-B' binary-only, no-source/arch-specific/arch-indep
'-S' source-only
'-tc,-nc' clean/don't-clean source trees when finished
'-us,-uc' unsigned source package / changes file.

## example
# fetch source
$ mkdir hello ; cd $_
$ wget http://ftp.gnu.org/gnu/hello/hello-2.6.tar.gz
$ tar zxf hello-*.tar.gz ; cd hello-*

# create control files with 'dh_make'
# note: don't re-run dh_make, it isn't idempotent
$ dh_make -s -y --copyright gpl -f ../hello-*.tar.gz
$ cd debian
$ rm -f *.ex *.EX README.Debian info docs
$ tree
.
├── changelog
├── compat
├── control
├── copyright
├── README.source
├── rules
└── source
    └── format

# update changelog
$ dch -i "Added README.Debian"
# edit control file, adding any extra 'Build-Depends'; note that 'Depends' are automatically added
$ cat control
Source: hello
Section: unknown
Priority: optional
Maintainer: Rui Coelho 
Build-Depends: debhelper (>= 9), autotools-dev
Standards-Version: 3.9.5
Homepage: 
#Vcs-Git: git://anonscm.debian.org/collab-maint/hello.git
#Vcs-Browser: http://anonscm.debian.org/?p=collab-maint/hello.git;a=summary

Package: hello
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: 

# build
# uses rules file to call Makefile
$ cd .. ; dpkg-buildpackage -rfakeroot

# check for errors and view content
$ cd .. ; ls
hello-2.6  hello_2.6-1_amd64.changes  hello_2.6-1_amd64.deb  hello_2.6-1.debian.tar.xz  hello_2.6-1.dsc  hello_2.6.orig.tar.gz  hello-2.6.tar.gz
# '.dsc' is a source code content summary, used when unpacking the source with 'dpkg-source'
# '.debian.tar.xz' is the 'debian' directory, each update is stored as quilt patch in 'debian/patches'
# '.deb' is complete binary, use 'dpkg' to install/remove
# '.changes' describes changes made, partly generated from changelog and '.dsc'
$ lintian hello_*.dsc
$ lintian hello_*.deb
$ lesspipe hello_*.deb

# install
$ sudo dpkg --install hello_*.deb
$ /usr/bin/hello
Hello, world!
$ sudo apt-get remove hello

# re-create package from scratch, uses '.dsc,.orig,.debian'
$ dpkg-source -x *.dsc
$ cd hello* ; dpkg-buildpackage -rfakeroot

see also Debian New Maintainers’ Guide, Debian Binary Package Building HOWTO, The Debian Administrator’s Handbook and Packaging New Software @ ubuntu
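The '.dsc' mentioned above is plain RFC822-style "Key: value" text, so its fields can be pulled out with standard tools; a standalone sketch (the inline sample content is ours, abridged):

```shell
#!/bin/sh
# a .dsc file is RFC822-style 'Key: value' text; extract a field with sed
# (inline sample of ours, abridged to a few fields)
cat > hello.dsc <<'EOF'
Format: 3.0 (quilt)
Source: hello
Version: 2.6-1
Architecture: any
EOF
VER=$(sed -n 's/^Version: //p' hello.dsc)
echo "source version: $VER"
rm -f hello.dsc
```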

  • debuild@man is a wrapper around dpkg-buildpackage + lintian (with configuration in /etc/devscripts.conf or ~/.devscripts), used to package new upstream releases. Use uupdate to upgrade a source package from an upstream revision.
## usage
debuild [debuild options] binary|binary-arch|binary-indep|clean ...

## example
$ cd hello ; wget http://ftp.gnu.org/gnu/hello/hello-2.7.tar.gz
$ cd hello-2.6 ; uupdate -u hello-2.7.tar.gz
$ cd ../hello-2.7; debuild -rfakeroot
$ ls ../*2.7*
hello_2.7-0ubuntu1_amd64.build    hello_2.7-0ubuntu1.debian.tar.xz  hello-2.7.tar.gz
hello_2.7-0ubuntu1_amd64.changes  hello_2.7-0ubuntu1.dsc
hello_2.7-0ubuntu1_amd64.deb      hello_2.7.orig.tar.gz
$ debuild clean

## another example (package w/o make)
$ mkdir myapp-0.1 ; cd myapp*
$ echo -e '#!/bin/bash\necho Hello World' > hello.sh
$ dh_make -s --indep --createorig
$ rm debian/*.ex
# specify files to install
$ echo hello.sh /usr/bin > debian/install
# modify the source format, since we aren't using quilt packages
$ echo "1.0" > debian/source/format
# add any extra 'Depends' in 'debian/control'
$ cat debian/control
# build
$ debuild -us -uc
$ ls ../*deb
../myapp_0.1-1_all.deb
# test
$ sudo dpkg -i myapp_0.1-1_all.deb
$ /usr/bin/hello.sh 
Hello World
$ sudo dpkg -r myapp
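The debian/install file used above maps '<file> <dest-dir>' per line; what dh_install effectively does with it can be sketched in plain shell (the staging dir is a temp dir of ours):

```shell
#!/bin/sh
# debian/install lines are '<file> <dest-dir>'; dh_install essentially
# copies each file into the staging tree (sketch; staging dir is ours)
DESTDIR=$(mktemp -d)
echo 'echo Hello World' > hello.sh
echo 'hello.sh usr/bin' > install.lst
while read -r src dst; do
    mkdir -p "$DESTDIR/$dst"
    cp "$src" "$DESTDIR/$dst/"
done < install.lst
FOUND=$(ls "$DESTDIR/usr/bin")
echo "$FOUND"
rm -rf "$DESTDIR" hello.sh install.lst
```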
  • pbuilder@man creates and maintains a chroot environment and builds Debian packages inside it. Use pdebuild@man as pbuilder + dpkg-buildpackage (build in the chroot).
## install
$ sudo apt-get install pbuilder

## example
# create/update a base '/var/cache/pbuilder/base.tgz' with the chroot env
$ sudo pbuilder clean
# note: use '--distribution sid' to switch distro
$ sudo pbuilder create
$ sudo pbuilder --update

# rebuild package
$ sudo pbuilder --build hello_2.7-0ubuntu1.dsc
$ ls /var/cache/pbuilder/result/
hello_2.7-0ubuntu1_amd64.changes  hello_2.7-0ubuntu1.debian.tar.xz  hello_2.7.orig.tar.gz
hello_2.7-0ubuntu1_amd64.deb      hello_2.7-0ubuntu1.dsc

# sign '.dsc' and '.changes'
$ cd /var/cache/pbuilder/result/
$ debsign hello_2.7-0ubuntu1_amd64.changes

# log into chroot env
$ sudo pbuilder --login --save-after-login

# to get sources/.dsc either
$ cat /etc/apt/sources.list
deb-src http://archive.ubuntu.com/ubuntu <ubuntu_version> main restricted universe multiverse
# or use dget from 'http://packages.ubuntu.com/' or 'http://packages.debian.org/'
$ dget http://ftp.de.debian.org/debian/pool/main/h/hello/hello_2.9-2.dsc
$ sudo pbuilder --build hello_2.9-2.dsc
$ ls /var/cache/pbuilder/result/hello_2.9*

# using pdebuild to debuild in chroot
$ apt-get source hello
$ cd hello-2.9 ; pdebuild

from Pbuilder Howto

  • Command/Tools hierarchy
    • debian/rules maintainer script for the package building
    • dpkg-buildpackage core of the package building tool
    • debuild dpkg-buildpackage + lintian (build under the sanitized environment variables)
    • pbuilder core of the Debian chroot environment tool
    • pdebuild pbuilder + dpkg-buildpackage (build in the chroot)
    • cowbuilder speed up the pbuilder execution
    • git-pbuilder the easy-to-use commandline syntax for pdebuild (used by gbp buildpackage)
    • gbp manage the Debian source under the git repo
    • gbp buildpackage pbuilder + dpkg-buildpackage + gbp

FPM

  • fpm@github “Effing Package Management” builds packages for multiple platforms (deb, rpm, etc) with great ease and sanity.
## install
$ sudo apt-get install ruby-dev gcc | sudo yum install ruby-devel gcc
$ sudo gem install fpm

## usage
fpm -s <source type> -t <target type> [list of sources]...
'-n name, -v version' package name and version
'-C chdir' change directory before searching files
'--prefix prefix' path to prefix when building  
'-d,--depends dep>version' dependencies
'-a,--architecture arch' architecture, usually 'uname -m'
'--inputs file' file with newline-separated list of files and dirs
'--before/after-install/remove/upgrade file' script to run at given stage 

## examples
# create noarch deb from dir content
$ echo -e '#!/bin/bash\necho Hello World' > hello.sh
$ fpm -s dir -t deb -a all -n hello -v 1.0 --prefix /usr/bin hello.sh
$ dpkg -c *.deb
... ./usr/bin/hello.sh
$ sudo dpkg -i *.deb
$ . /usr/bin/hello.sh
Hello World
$ sudo dpkg -r hello

# create a deb from a python module using easy_install
$ fpm -s python -t deb django
$ dpkg -c *.deb
... ./usr/local/lib/python2.7/dist-packages/django

# convert a local python package source
$ fpm -s python -t deb myproj/setup.py

# build gnu hello
$ wget http://ftp.gnu.org/gnu/hello/hello-2.9.tar.gz
$ tar -zxf hello-*.tar.gz
$ cd hello* ; ./configure --prefix=/usr ; make ; make install DESTDIR=/tmp/installdir
$ cd .. ; fpm -s dir -t deb -n hello -v 2.9 -C /tmp/installdir usr
$ dpkg -c hello*.deb
... ./usr/bin/hello

from fpm@wiki
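For 'fpm -s dir -C <dir>', the file tree you stage below <dir> becomes the install layout of the package; a sketch of the staging step (fpm itself is not run here, the temp dir is ours):

```shell
#!/bin/sh
# stage a tiny file tree the way 'fpm -s dir -C <dir>' expects
# (sketch; fpm itself is not run, staging dir is a temp dir of ours)
STAGE=$(mktemp -d)
mkdir -p "$STAGE/usr/bin"
printf '#!/bin/sh\necho Hello World\n' > "$STAGE/usr/bin/hello.sh"
chmod +x "$STAGE/usr/bin/hello.sh"
OUT=$("$STAGE/usr/bin/hello.sh")   # sanity-check the staged script
echo "$OUT"
# fpm would then package it as:
#   fpm -s dir -t deb -n hello -v 1.0 -C "$STAGE" usr
rm -rf "$STAGE"
```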

How to create a SysVInit script (prior to systemd)

On systems based on SysVinit, init is the first process executed once the Linux kernel loads. Daemons are managed by shell scripts in /etc/init.d/. These call /etc/init.d/functions to start/stop the daemon process and track its status via a pid file. Optional extra settings go in /etc/sysconfig/ as env variables, and configuration is placed in /etc/.

service@man runs (starts/stops/..) a System V init script, chkconfig@man is used to update and query on what runlevel services start.
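The pid-file bookkeeping that /etc/init.d/functions provides via daemon/killproc/status can be sketched in plain shell (the temp pidfile and the sleep stand-in for the daemon are ours):

```shell
#!/bin/sh
# pidfile-based start/stop/status, as done by daemon/killproc/status in
# /etc/init.d/functions (sketch; temp pidfile and sleep stand-in are ours)
PIDFILE=$(mktemp)

start()  { sleep 300 & echo $! > "$PIDFILE"; }
stop()   { kill "$(cat "$PIDFILE")" 2>/dev/null; : > "$PIDFILE"; }
status() { [ -s "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; }

start
status && STATE1=running
stop
status || STATE2=stopped
echo "$STATE1 -> $STATE2"
rm -f "$PIDFILE"
```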

$ cat /usr/bin/hello.sh
#!/bin/bash
while true; do logger "hello world"; sleep 10; done
$ chmod +x /usr/bin/hello.sh

## init script
$ cp /usr/share/doc/initscripts-*/sysvinitfiles /etc/init.d/hello
$ chmod +x /etc/init.d/hello
$ cat /etc/init.d/hello
#!/bin/sh
#
# <daemonname> <summary>
#
# chkconfig:   <default runlevel(s)> <start> <stop>
# description: <description, split multiple lines with a backslash>

# Source function library.
. /etc/rc.d/init.d/functions

exec="/usr/bin/hello.sh"
prog="hello"
config=""

[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog

lockfile=/var/lock/subsys/$prog

start() {
    [ -x $exec ] || exit 5
    #[ -f $config ] || exit 6
    echo -n $"Starting $prog: "
    # if not running, start it up here, usually something like "daemon $exec"
    daemon $exec &
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    # stop it here, often "killproc $prog"
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    stop
    start
}

reload() {
    restart
}

force_reload() {
    restart
}

rh_status() {
    # run checks to determine if the service is running or use generic status
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}


case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        restart
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}"
        exit 2
esac
exit $?

## service start/stop/status
$ service hello start
Starting hello:
$ service hello status
hello (pid 17429) is running...
$ service hello stop

from SysVInitScript@fedora
see also SysVinit@arch

Using semantic patching with Coccinelle (a patching tool that knows C)

Coccinelle is a program matching and transformation engine which provides the language SmPL (Semantic Patch Language) for specifying desired matches and transformations in C code.

## install see http://coccinelle.lip6.fr/download.php
$ sudo apt-get install coccinelle | sudo yum install coccinelle (from fedora rawhide)

## usage
spatch -sp_file <SP> <files> [-o <outfile> ] [-iso_file <iso> ] [ options ]

## examples
$ cat test.cocci
// Replaces calls to alloca by malloc and checks return value
@@
expression E;
identifier ptr;
@@
-ptr = alloca(E);
+ptr = malloc(E);
+if (ptr == NULL)
+        return 1;

$ cat test.c
#include <alloca.h>
int main(int argc, char *argv[]) {
    unsigned int bytes = 1024 * 1024;
    char *buf;
    /* allocate memory */
    buf = alloca(bytes);
    return 0;
}

$ spatch -sp_file test.cocci test.c
--- test.c
+++ /tmp/cocci-output-29896-40280c-test.c
@@ -3,6 +3,8 @@ int main(int argc, char *argv[]) {
     unsigned int bytes = 1024 * 1024;
     char *buf;
     /* allocate memory */
-    buf = alloca(bytes);
+    buf = malloc(bytes);
+    if (buf == NULL)
+        return 1;
     return 0;
}

from coccinelle, coccinelle@lwn, coccinelle for the newbie and coccinelle patch examples
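spatch's output above is an ordinary unified diff, so it can be reviewed and applied with plain patch; a standalone demo of that step (the inline file and diff are ours, not real spatch output):

```shell
#!/bin/sh
# spatch prints an ordinary unified diff; applying it is plain 'patch'
# (demo; the inline file and diff are ours, not real spatch output)
printf 'int x = 1;\n' > demo.c
cat > demo.patch <<'EOF'
--- demo.c
+++ demo.c
@@ -1 +1 @@
-int x = 1;
+int x = 2;
EOF
patch -p0 < demo.patch
NEW=$(cat demo.c)
echo "$NEW"
rm -f demo.c demo.patch
```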

How to configure network in Linux (using initscripts/ifcfg/ifupdown, NetworkManager/nmcli, netctl and systemd-networkd)

You can use network managers (nmcli, …), initscripts/systemd, or call user-land network command-line tools directly (iproute2/net-tools) to configure your network devices, either statically or dynamically (DHCP).

  • ifup/ifdown@man network interface configuration files used by ifup/ifdown, called by initscripts (used prior to systemd). Works on all distros (though configuration file location and syntax differ).
## fedora/rhel
$ cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=myhostname
#static
#GATEWAY=192.168.1.1
$ cat /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE=eth0
ONBOOT=yes
#dynamic
BOOTPROTO=dhcp
#static
#BOOTPROTO=static
#IPADDR=192.168.1.2
#NETMASK=255.255.255.0
$ sudo /etc/init.d/network restart

## debian/ubuntu
$ sudo apt-get install ifupdown
$ man interfaces
$ cat /etc/network/interfaces
auto lo
iface lo inet loopback
#dynamic
#auto eth0
#    allow-hotplug eth0
#    iface eth0 inet dhcp
#static
auto eth0
iface eth0 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    gateway 192.168.1.1
$ sudo service networking restart

## dns (static)
$ cat /etc/resolv.conf
search mydomain.net
nameserver 10.254.0.121

## manual, non-persistent static using iproute2/net-tools
$ ip link set eth0 up 
$ ip addr add 192.168.1.2/24 broadcast 192.168.1.255 dev eth0
$ ip route add default via 192.168.1.1
$ ifconfig eth0 up
$ ifconfig eth0 192.168.1.2/24 broadcast 192.168.1.255
$ route add default gw 192.168.1.1 eth0

from NetworkConfiguration@debian, networkscripts@rhel and static ip@nixcraft
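ifcfg files take a dotted NETMASK= while iproute2 takes a /prefix length; converting between the two is simple bit arithmetic (hypothetical helper of ours, not a standard tool):

```shell
#!/bin/sh
# convert a CIDR prefix length (as used by iproute2) into the dotted
# netmask that ifcfg NETMASK= expects (hypothetical helper)
cidr_to_netmask() {
    prefix=$1; mask=""
    for i in 1 2 3 4; do
        if [ "$prefix" -ge 8 ]; then
            octet=255; prefix=$((prefix - 8))
        else
            octet=$(( 256 - (1 << (8 - prefix)) )); prefix=0
        fi
        mask="$mask${mask:+.}$octet"
    done
    echo "$mask"
}
cidr_to_netmask 24   # -> 255.255.255.0
cidr_to_netmask 8    # -> 255.0.0.0
```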

  • nmcli is the command-line tool for controlling NetworkManager, which manages devices via connection profiles.
## install
$ sudo yum install NetworkManager | sudo apt-get install network-manager | sudo pacman -Sy networkmanager

# nmcli manages all interfaces/devices except 'NM_CONTROLLED=no' in '/etc/sysconfig/network-scripts/ifcfg-*'
$ nmcli dev show ; nmcli con show

## unattended add/modify
nmcli con add type ethernet con-name <con-name> ifname <interface-name> [ip4 <address/netmask> gw4 <defaultgw>]
nmcli con mod <con-name> +|-<setting>.<property> "<value>"
## interactive edit
nmcli con edit <con-name>
> goto ipv4 ; set addresses ip/mask defgw ; set dns ip1 ip2 ; set dns-search domain ; verify ; save persistent ; quit

## static
$ nmcli con mod ens192 ipv4.addresses "10.4.2.108/8 10.254.0.2"
$ nmcli con mod ens192 ipv4.dns "10.254.0.121 10.254.0.122" ipv4.dns-search "mydomain"

## dynamic 
$ nmcli con mod ens192 ipv4.method auto

## hostname
$ nmcli general hostname <myhostname>

$ systemctl restart NetworkManager | service network-manager restart
or
$ systemctl restart network.service | service networking restart

from configure static ip in centos7@xmodulo

  • netctl@arch is a CLI-based tool used to configure and manage network connections via profiles. Arch only.
## install
$ sudo pacman -Sy netctl

## static
$ cp /etc/netctl/examples/ethernet-static /etc/netctl/profile1
$ cat /etc/netctl/profile1
Description='A basic static ethernet connection'
Interface=eth0
Connection=ethernet
IP=static
Address=('192.168.1.23/24' '192.168.1.87/24')
#Routes=('192.168.0.0/24 via 192.168.1.2')
Gateway='192.168.1.1'
DNS=('192.168.1.1')

## dynamic
$ cp /etc/netctl/examples/ethernet-dhcp /etc/netctl/profile1
Description='A basic dhcp ethernet connection'
Interface=eth0
Connection=ethernet
IP=dhcp

$ netctl start profile1

## enable on boot
# either enable by profile; this creates and enables a systemd service on boot
$ netctl enable profile1
# or enable all profiles with eth0, needs 'pacman -Sy ifplugd'
$ systemctl enable netctl-ifplugd@eth0.service
  • systemd.network as of version 210, systemd supports basic network configuration through udev and networkd.
$ systemctl enable systemd-networkd ; systemctl restart systemd-networkd

## static
$ cat /etc/systemd/network/10-static.network
[Match]
Name=ens32
[Network]
Address=10.4.2.111/8
Gateway=10.254.0.2
DNS=10.254.0.121
DNS=10.254.0.122
Domains=mydomain

## dynamic
$ cat /etc/systemd/network/20-dhcp.network
[Match]
Name=en*
[Network]
DHCP=v4

## hostname
$ hostnamectl set-hostname myhostname

from systemd-networkd@arch and systemd-networkd@coreos
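.network files are INI-style; before dropping one into /etc/systemd/network it is easy to sanity-check that the [Match] and [Network] sections networkd expects are present (the inline sample mirrors the static example above):

```shell
#!/bin/sh
# .network files are INI-style; check a candidate has the [Match] and
# [Network] sections before installing it (inline sample of ours)
cat > 10-static.network <<'EOF'
[Match]
Name=ens32
[Network]
Address=10.4.2.111/8
Gateway=10.254.0.2
EOF
OK=yes
for sec in '[Match]' '[Network]'; do
    grep -qxF "$sec" 10-static.network || OK=no
done
echo "sections ok: $OK"
rm -f 10-static.network
```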