sysadmin

How to monitor/show the progress of CLI tools in Linux (using cv and pv)

cv/coreutils viewer@github supports all the basic utilities in the coreutils package. It is written in C and shows progress as a percentage. It works by scanning /proc for commands it supports, checking their fd and fdinfo directories for opened files and seek positions, and finally reporting the progress for the largest file.

$ apt-get|yum|yaourt install cv

'-m,--monitor' loop while monitored processes are still running
'-w,--wait' estimate I/O throughput and ETA
'-c,--command command' monitor only this command

# see all current and upcoming coreutils commands
$ watch cv -q
# see downloads progress
$ watch cv -wc firefox
# launch and monitor any heavy command using '$!'
$ cp bigfile newfile & cv -mp $!

from cv: progress bar for cp, mv, rm, dd…

pv/pipe viewer/pv@man is a terminal-based tool for monitoring the progress of data through a pipeline.

$ apt-get|yum|pacman install pv

command1 | pv | command2
pv input.file | command1 | pv > output.file

# display options
'-p,--progress/-t,--timer/-e,--eta/-b,--byte' show progress/timer/ETA/bytes
# output modifiers
'-N,--name NAME' prefix the output information
'-c,--cursor' use cursor positioning escape sequences
'-l,--line-mode' instead of counting bytes, count lines
'-s,--size SIZE' assume the total amount of data to be transferred is SIZE
# data transfer modifiers
'-L,--rate-limit RATE' limit transfer rate to RATE bytes per second
'-B,--buffer-size BYTES' use a transfer buffer size of BYTES
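
A minimal sketch combining the modifiers above ('bigfile', 'app.log' and the target paths are placeholders):

# copy with a name prefix, a known size and a 1 MB/s rate limit
$ pv -N backup -s $(stat -c %s bigfile) -L 1M bigfile > /mnt/backup/bigfile
# count lines instead of bytes while filtering a log
$ grep ERROR app.log | pv -l > errors.log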

# watch how quickly a file is transferred using nc
$ pv file | nc -w 1 host 3000
# see progress of both pipes
$ pv -cN rawlogfile file.log | gzip | pv -cN gziplogfile > file.log.gz
# with ncurses's dialog
$ (pv -n backup.tar.gz | tar xzf - -C path/to/data ) 2>&1 | dialog --gauge "Running tar, please wait..." 10 70 0
# rsync and pv
$ rsync options source dest | pv -lpes Number-Of-Files

from pv@nixcraft and rsync and pv@nixcraft

How to set up remote syslog in Linux (using rsyslog)

rsyslog is an open-source implementation of the syslog protocol (RFC 3164) that extends it with content-based filtering, rich filtering capabilities, flexible configuration options and features such as TCP transport. It was used prior to the migration to systemd-journald.

  • Facility level is the type of process to monitor: auth, cron, daemon, kernel, local0..local7
  • Severity/Priority level is the type of log message: emerg/0, alert/1, crit/2, err/3, warn/4, notice/5, info/6, debug/7
  • Destination is either a local file or a remote rsyslog server @ip:port

As an rsyslog client it filters and sends internal log messages either to the local filesystem or to a remote rsyslog server. As an rsyslog server it collects logs from other hosts and turns them into internal log messages. See syslogserver@windows.

$ yum install rsyslog | apt-get install rsyslog | pacman -S rsyslog

##(server) enable listener
$(host1) vi /etc/rsyslog.conf
# udp
$ModLoad imudp 
$UDPServerRun 514
# tcp (slower but more reliable)
$ModLoad imtcp 
$InputTCPServerRun 514 

##(server) create template to log to filesystem
# see http://linux.die.net/man/5/rsyslog.conf
$(host1) vi /etc/rsyslog.d/remote_host
# log everything to 'host/progname.log'
$template RemoteLogs,"/var/log/%HOSTNAME%/%PROGRAMNAME%.log" *
# format it '[facility-level].[severity-level] ?RemoteLogs'
*.* ?RemoteLogs 
# stop processing messages
& ~

# same but using ip
$ vi /etc/rsyslog.d/remote_ip
$template IpTemplate,"/var/log/%FROMHOST-IP%.log" 
*.*  ?IpTemplate 
& ~

##(client) route all messages to remote server
$(host2) vi /etc/rsyslog.d/route_all
*.*  @host1:514 
# same but using tcp instead
#*.*  @@host1:514
# same but only for some kernel facility
kern.* @192.168.1.25:514

$(both) service rsyslog restart | systemctl restart rsyslog
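
A quick end-to-end test (a sketch; the log path follows the RemoteLogs template above, 'remotetest' is just a tag):

##(client) emit a test message
$(host2) logger -t remotetest "hello remote syslog"
##(server) it should land under the client's hostname
$(host1) tail /var/log/host2/remotetest.log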

from rsyslog server@xmodulo and rsyslog client@xmodulo

syslog(3) is the C library function used to send messages to the system logger. There are wrappers in all languages, including shells.

## from shell
# see http://linux.die.net/man/1/logger
$ logger -p local0.info -t PROGNAME MESSAGE

## forward journald to local syslog daemon
# see http://www.freedesktop.org/software/systemd/man/journald.conf.html
$ vi {/etc,/run,/usr/lib}/systemd/journald.conf.d/*.conf
ForwardToSyslog=True 
# same as kernel command line option 'systemd.journald.forward_to_syslog=True'
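
A minimal check that forwarding works (sketch; the target file depends on the local rsyslog rules, e.g. /var/log/messages on RHEL or /var/log/syslog on Debian/Ubuntu):

$ systemctl restart systemd-journald
$ logger -t fwdtest "hello via journald"
$ tail -n 5 /var/log/messages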

Using homebrew and cask package managers in OSX

Homebrew is a package management system for OS X. It installs packages into its own cellar directory and symlinks them into /usr/local.

Homebrew terminology: a formula is the package definition, a keg is a formula's installation prefix, the cellar is where all kegs are installed, a tap is an optional repository of formulae, and a bottle is a pre-built binary keg that can simply be unpacked.

## install
$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
$ brew --version

## using
# update formulas
$ brew update
# install/uninstall/upgrade/info/search packages
$ brew install|uninstall|upgrade|info|search "package"
# list installed
$ brew list
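
A few more everyday commands (sketch):

# list formulas with a newer version available
$ brew outdated
# remove old versions from the cellar
$ brew cleanup
# check the installation for common problems
$ brew doctor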

## contributing
# fork https://github.com/Homebrew/homebrew and clone it
$ git clone https://github.com/<username>/homebrew.git
# create new formula
$ brew create https://foo.com/bar-1.0.tgz
$ vi $HOMEBREW_REPOSITORY/Library/Formula/foo.rb
$ git checkout -b foo
$ git commit Library/Formula/foo.rb && git push
# open pull request

## using tap, optional repository git formulas
$ brew tap homebrew/science
$ brew install "formula"

from homebrew, formula cookbook, brew tap and interesting taps and branches

Homebrew cask extends homebrew and brings its elegance, simplicity, and speed to OS X applications and large binaries alike. Applications are kept in their Caskroom under /opt and symlinked into $HOME/Applications.

## install
$ brew install caskroom/cask/brew-cask

## using
$ brew cask install google-chrome
$ open ~/Applications/"Google Chrome.app"
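
Listing and removing casks works like formulas (sketch):

$ brew cask list
$ brew cask uninstall google-chrome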

## contributing
# fork https://github.com/caskroom/homebrew-cask and clone it
$ git clone https://github.com/<username>/homebrew-cask.git
# add official repo as remote
$ git remote add upstream https://github.com/caskroom/homebrew-cask.git
# symb link to use brew cask in your private repo
$ $HOME/homebrew-cask/developer/bin/develop_brew_cask
# switch back to official repo to run 'brew update'
$ $HOME/homebrew-cask/developer/bin/production_brew_cask

from homebrew cask and hacking on homebrew-cask

How to mass rename/copy/link files in Linux (using mmv and rename)

  • mmv moves (or copies, appends, or links, as specified) each source file matching a from pattern to the target name specified by the to pattern.
$(el) yum install mmv (NUX)
$(deb) apt-get install mmv
$(arch) yaourt -S mmv

# '*.jpeg' -> '*.jpg'
$ mmv '*.jpeg' '#1.jpg'
# '*.html.en' -> '*.en.html'
$ mmv '*.html.??' '#1.#2#3.html'
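
The mmv package also ships 'mcp' (mass copy) and 'mln' (mass link), which take the same patterns; a sketch with placeholder names:

# copy instead of rename
$ mcp '*.jpeg' 'backup-#1.jpg'
# link instead of rename
$ mln '*.jpg' 'link-#1.jpg'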
  • rename@man/rename@ubuntu is a perl script which can be used to mass rename files according to a regular expression.
$(el) yum install util-linux-ng
$(deb) apt-get install perl
$(arch) pacman -S perl-rename

# '*.php' -> '*.html'
$ rename -n 's/\.php$/.html/' *.php
# upper case -> lower case
$ rename 'y/A-Z/a-z/' *
# strip '.bak' extension
$ rename 's/\.bak$//' *.bak

from rename@cyberciti and rename@tecmint

How to do network bridging in Linux (using initscripts/ifcfg/ifupdown, brctl/bridge-utils, nmcli/networkmanager, netctl/arch and systemd-networkd)

A network bridge is a link-layer device which forwards traffic between networks based on MAC addresses and is therefore also referred to as a layer 2 device.

It makes forwarding decisions based on tables of MAC addresses which it builds by learning what hosts are connected to each network. A software bridge can be used within a Linux host in order to emulate a hardware bridge, for example in virtualization applications for sharing a NIC with one or more virtual NICs.

  • ifup/ifdown@man: network interface configuration files are used by ifup/ifdown, which are called by the initscripts (used prior to systemd). Works on all distros (but the configuration file location and syntax change).
$ apt-get|yum install bridge-utils

# disable network manager
$ sudo service NetworkManager stop ; sudo chkconfig NetworkManager off

## fedora/rhel
$ vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BRIDGE=br0
$ vi /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
NM_CONTROLLED=yes
ONBOOT=yes
TYPE=Bridge
# either static
BOOTPROTO=none
IPADDR=10.10.1.105
NETMASK=255.255.255.0
GATEWAY=10.10.1.1
DNS1=8.8.8.8
DNS2=8.8.4.4
# or dhcp
#BOOTPROTO=dhcp
$ service network restart

## bridging in debian/ubuntu
$ vi /etc/network/interfaces
#auto eth0
#iface eth0 inet dhcp
# either static
iface br0 inet static
  bridge_ports eth0 eth1
  address 10.10.1.105
  broadcast 10.10.1.255
  netmask 255.255.255.0
  gateway 10.10.1.1
# or dhcp
auto br0
iface br0 inet dhcp
  bridge_ports eth0
  bridge_stp off
  bridge_fd 0
  bridge_maxwait 0
$ service networking restart

## manual, non-persistent using brctl
$ brctl addbr br0
$ brctl addif br0 eth0
# assign ip to bridge
$ ip link set dev br0 up
$ ip addr add dev br0 10.10.1.105/24
# delete
$ ip link set dev br0 down
$ brctl delif br0 eth0
$ brctl delbr br0

## manual, non-persistent using iproute2/net-tools
$ ip link add name br0 type bridge
$ ip link set dev eth0 promisc on
$ ip link set dev eth0 master br0
# assign ip to bridge
$ ip link set dev br0 up
$ ip addr add dev br0 10.10.1.105/24
# delete
$ ip link set eth0 promisc off
$ ip link set dev eth0 nomaster
$ ip link delete br0 type bridge
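
Either way, the result can be inspected with brctl or iproute2 (sketch):

# list bridges and their ports
$ brctl show
$ bridge link show
# show details and the assigned address
$ ip -d link show br0
$ ip addr show br0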

from bridge@rhel, bridge@debian and bridge@rhel

  • nmcli@man is NetworkManager's command-line tool; it configures persistent connections, including bridges, and works on any distro running NetworkManager.
# install
$ sudo yum install NetworkManager | sudo apt-get install network-manager | sudo pacman -Sy networkmanager

# create bridge
$ nmcli con add type bridge autoconnect yes con-name br0 ifname br0

# assign ip either static
$ nmcli con mod br0 ipv4.addresses "10.10.1.105/24 10.10.1.1" ipv4.method manual 
$ nmcli con mod br0 ipv4.dns "8.8.8.8 8.8.4.4"
# or dhcp
$ nmcli con mod br0 ipv4.method auto

# remove current setting and add interface to bridge
$ nmcli c delete eth0
$ nmcli c add type bridge-slave autoconnect yes con-name eth0 ifname eth0 master br0

$ systemctl restart NetworkManager | service network-manager restart
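
To confirm the bridge connection came up (sketch):

$ nmcli con show
$ nmcli device status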

from nmcli@rhel7

  • netctl@arch is a CLI-based tool used to configure and manage network connections via profiles. Arch only.
## install
$ sudo pacman -Sy netctl

## create 'br0' with the real ethernet adapter 'eth0' and the 'tap0' tap device
$ cp /etc/netctl/examples/bridge /etc/netctl/bridge
$ vi /etc/netctl/bridge
Description="Example Bridge connection"
Interface=br0
Connection=bridge
BindsToInterfaces=(eth0 tap0)
# either static
IP=static
Address='192.168.10.20/24'
Gateway='192.168.10.200'
SkipForwardingDelay=yes # ignore (R)STP and immediately activate the bridge
# or dynamic
#IP=dhcp

$ netctl enable bridge ; netctl start bridge
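
netctl reports the profile state (sketch):

$ netctl list
$ netctl status bridge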

from Bridge with netctl@arch

  • systemd.network: as of version 210, systemd supports basic network configuration through udev and networkd.
# disable network manager
$ systemctl disable NetworkManager
# enable daemons
$ systemctl enable systemd-networkd
$ systemctl restart systemd-networkd
$ systemctl enable systemd-resolved
$ ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf

# create bridge
$ vi /etc/systemd/network/br0.netdev 
[NetDev]
Name=br0
Kind=bridge
$ vi /etc/systemd/network/br0.network
[Match]
Name=br0
[Network]
# static
DNS=192.168.250.1
Address=192.168.250.33/24
Gateway=192.168.250.1
# or dynamic
#DHCP=v4

# assign network adaptor
$ vi /etc/systemd/network/uplink.network
[Match]
Name=en*
[Network]
Bridge=br0

# using in container
$ systemd-nspawn --network-bridge=br0 -bD /path_to/my_container
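
To check the result, 'networkctl' (available in newer systemd releases) or plain iproute2 can be used (sketch):

$ networkctl status br0
$ ip addr show br0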

from Network bridge@arch

How to generate SSL CSR (certificate signing request) and self-signed certificates for Apache/Nginx (using OpenSSL)

  • OpenSSL is an open-source implementation of the SSL and TLS protocols.
'req' PKCS#10 certificate request and certificate generating utility.
'-x509' outputs a self signed certificate instead of a certificate request
'-newkey alg:file' creates a new certificate request and a new private key
'-keyout filename' filename to write the newly created private key to
'-out filename' filename to write to
'-days n' number of days to certify the certificate for, defaults to 30 for x509

# create private key 'key.pem' and generate a certificate signing request 'req.pem'
$ openssl req -newkey rsa:1024 -keyout key.pem -out req.pem
or
$ openssl genrsa -out key.pem 1024 ; openssl req -new -key key.pem -out req.pem

# generate a self signed root certificate 'cert.pem' and private key 'key.pem'
$ openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365
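
The generated request and certificate can be inspected in plain text (sketch):

# dump the CSR and verify its signature
$ openssl req -in req.pem -noout -text -verify
# dump the self-signed certificate
$ openssl x509 -in cert.pem -noout -text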

from openssl-req@man

'-nodes' if a private key is created it will not be encrypted

# generate a private key '$CERT.key', a certificate signing request '$CERT.csr' and a self-signed certificate '$CERT.crt' for apache
$ export CERT=/etc/httpd/ssl/server
$ openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out $CERT.key
$ chmod 600 $CERT.key
$ openssl req -new -key $CERT.key -out $CERT.csr
$ openssl x509 -req -in $CERT.csr -signkey $CERT.key -out $CERT.crt -days 365
# edit SSLCertificateFile $CERT.crt and SSLCertificateKeyFile $CERT.key

# same
$ export CERT=/etc/httpd/ssl/server
$ openssl req -x509 -nodes -newkey rsa:2048 -keyout $CERT.key -out $CERT.crt -days 365

# same but using 'make testcert'
$ cd /usr/share/ssl/certs ; make testcert

# same but using 'crypto-utils'
$ sudo yum install crypto-utils | sudo apt-get install crypto-utils
$ genkey your_FQDN
# edit SSLCertificateFile and SSLCertificateKeyFile

from How to Create Self-Signed SSL Certificates and Keys for Apache

$ nginx -V
TLS SNI support enabled
$ mkdir -p /etc/nginx/ssl/ ; cd $_

# create private key; asks for passphrase
$ openssl genrsa -des3 -out self-ssl.key 2048
# create a certificate signing request - CSR
$ openssl req -new -key self-ssl.key -out self-ssl.csr
# optional remove passphrase
$ cp -v self-ssl.{key,original} ; openssl rsa -in self-ssl.original -out self-ssl.key ; rm -v self-ssl.original
# create certificate
$ openssl x509 -req -days 365 -in self-ssl.csr -signkey self-ssl.key -out self-ssl.crt
# configure nginx
$ cat /etc/nginx/virtual/.conf
server {
  listen 443;
  ssl on;
  ssl_certificate /path/to/self-ssl.crt;
  ssl_certificate_key /path/to/self-ssl.key;
  server_name theos.in;
}
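
Before going live, test the configuration and reload nginx (sketch):

$ nginx -t
$ service nginx reload | systemctl reload nginx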

# verify certificates
$ openssl verify pem-file
$ openssl verify self-ssl.crt

from HowTo: Create a Self-Signed SSL Certificate on Nginx For CentOS / RHEL

How to load balance an HTTP server (using HAProxy or Pound)

HAProxy

HAProxy/haproxy@man is a TCP/HTTP reverse proxy which is particularly suited for high availability environments.

## install
$ sudo apt-get install haproxy | sudo yum install haproxy

## configure
$ vi /etc/haproxy/haproxy.cfg
global
  daemon                             # fork into background
  maxconn 256                        # max per-process concurrent connections
defaults    
  mode http                          # 'tcp' for layer4 ssl,ssh,smtp; 'http' for layer7
  timeout connect 5s
  timeout client 50s
  timeout server 50s
frontend http-in
   bind *:80                         # frontend bindto 'ip:port' listener
   reqadd X-Forwarded-Proto:\ http   # add http header to request
   default_backend servers           # backend used when no "use_backend" has been matched
backend servers
   stats enable
   stats hide-version
   stats uri /stats
   stats realm Haproxy\ Statistics
   stats auth haproxy:redhat         # credentials for HAProxy Statistic report page
   balance roundrobin                # roundrobin according to their weights
   cookie LB insert
   server web1-srv 192.168.0.121:80 cookie web1-srv check
   server web2-srv 192.168.0.122:80 cookie web2-srv check
   server web3-srv 192.168.0.123:80 cookie web3-srv check
   server web4-srv 192.168.0.124:80 check backup
   server server1 127.0.0.1:8000 maxconn 32 
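
Check the configuration file and (re)start the service to apply it (sketch):

$ haproxy -c -f /etc/haproxy/haproxy.cfg
$ sudo service haproxy restart | sudo systemctl restart haproxy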

from haproxy@tecmint, haproxy@xmodulo, haproxy@digitalocean and haproxy@doc

  • Load balancing algorithms used to select a backend server: roundrobin selects servers in turns according to their weights, static-rr same as roundrobin but changing a server's weight on the fly has no effect, leastconn selects the server with the lowest number of connections, first selects the first server with available connection slots, source hashes the source IP ensuring the same client IP always reaches the same server, uri hashes part of or the whole URI to select the server, url_param hashes a URL GET or POST parameter value to select the server, hdr(<name>) hashes an HTTP header value.
balance roundrobin
balance url_param userid
balance url_param session_id check_post 64
balance hdr(User-Agent)
balance hdr(host)
balance hdr(Host) use_domain_only
  • Using ACLs to select backends: default_backend specifies the backend to use when no use_backend rule has been matched, use_backend switches to a specific backend if/unless an ACL-based condition is matched
# use_backend <backend> [{if | unless} <condition>]

# by url
acl url_blog path_beg /blog
use_backend blog-backend if url_blog
default_backend web-backend
  • Server options/params: backup only use this server if all other non-backup servers are unavailable, cookie the cookie value assigned to this server, used for persistence/affinity, check test server availability by making periodic TCP connections, inter interval in ms between checks, maxconn if the number of incoming concurrent requests goes higher than this value they will be queued, maxqueue maximal number of connections allowed to wait in the server's queue, ssl enables SSL ciphering on outgoing connections, weight server's weight relative to other servers
# server <name> <address>[:[port]] [param*]
server first  10.1.1.1:1080 cookie first  check inter 1000
server second 10.1.1.2:1080 cookie second check inter 1000
server transp ipv4@
server backup ${SRV_BACKUP}:1080 backup
server www1_dc1 ${LAN_DC1}.101:80
server www1_dc2 ${LAN_DC2}.101:80
  • Cookie-based backend persistence/affinity: rewrite the cookie is provided by the server and haproxy modifies its value, insert the cookie is inserted by haproxy, prefix reuse an existing cookie, indirect no cookie is emitted to a client which already has a valid one for the server which processed the request, preserve the persistence cookie is emitted by the server itself and left untouched
#cookie <name> [options*]
cookie JSESSIONID prefix
cookie SRV insert indirect nocache
cookie SRV insert postonly indirect
cookie SRV insert indirect nocache maxidle 30m maxlife 8h
  • Stats: stats admin enables the statistics admin level if/unless a condition is matched, stats auth enables statistics with default settings, and restricts access to declared users only
## stats admin { if | unless } <cond>
# enable stats only for localhost
backend stats_localhost
  stats enable
  stats admin if LOCALHOST
# statistics admin level always enabled because of the authentication
backend stats_auth
  stats enable
  stats auth  admin:AdMiN123
  stats admin if TRUE

## stats auth <user>:<passwd>
# public access (limited to this backend only)
backend public_www
  server srv1 192.168.0.1:80
  stats enable
  stats hide-version
  stats scope   .
  stats uri     /admin?stats
  stats realm   Haproxy\ Statistics
  stats auth    admin1:AdMiN123
  stats auth    admin2:AdMiN321
# internal monitoring access (unlimited)
backend private_monitoring
  stats enable
  stats uri     /admin?stats
  stats refresh 5s
  • Logs: log adds a global syslog server to send logs to, facility is the syslog facility used (e.g. local2)
global
  # log <address> [len <length>] <facility> [max level [min level]]
  log 127.0.0.1 local2

# enable udp syslog receiver and facility
$ cat /etc/rsyslog.d/haproxy.conf
local2.*    /var/log/haproxy.log
$ vi /etc/rsyslog.conf
$ModLoad imudp
$UDPServerRun 514
$ service rsyslog restart 
  • SSL: termination decrypts the SSL connection at the load balancer and sends unencrypted connections to the backend servers, pass-through sends SSL connections directly to the proxied servers, which is more secure but loses the ability to add X-Forwarded-* headers
$ sudo yum install openssl | sudo apt-get install openssl
# generate self-signed certificate for ssl termination only
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/server.key -out /etc/ssl/server.crt
$ cat server.crt server.key > server.pem

## ssl termination
defaults
  mode http
frontend localhost
  bind *:80
  bind *:443 ssl crt server.pem
  redirect scheme https if !{ ssl_fc }   # ssl-only: redirect from http to https
  default_backend nodes

## ssl pass-through
defaults
  option tcplog
  mode tcp          # needs to be tcp since haproxy treats the connection just as a stream
frontend localhost
  bind *:80
  bind *:443
  default_backend nodes
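
For the termination setup above, a quick check from the load balancer itself (sketch; '-k' accepts the self-signed certificate):

$ curl -kI https://localhost/
# plain http should answer with a redirect to https when the 'redirect scheme' rule is active
$ curl -I http://localhost/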

from haproxy with ssl

Pound

pound/pound@man is a lightweight open source reverse proxy, load balancer and SSL wrapper used as a web server load balancing solution:

  • Listeners: how (ip+port) to receive requests from the clients, HTTP and/or HTTPS
  • Services: matching requests (by URL pattern, HTTP header and/or session) with a list of backends. Sessions can be matched by IP address, basic authentication, URL parameter, cookie value or HTTP header
  • Backend: list of (ip+port) web servers, optionally with priority
## install
$ sudo yum install pound (EPEL) | sudo apt-get install pound | cd /usr/ports/www/pound/ && make install clean

## configure
$ vi /etc/pound/pound.cfg | vi /etc/pound.cfg
# global options
User        "www-data"
Group       "www-data"
# logging (goes to syslog by default)
LogLevel    1
# check backend every X secs:
Alive       30

# main listening ports
ListenHTTP
  Address 202.54.1.5
  Port    80
End
ListenHTTPS
  Address 202.54.1.5
  Port    443
  Cert    "/etc/ssl/local.server.pem"
End

# image/static server
Service
  URL ".*.(jpg|gif)"
  BackEnd
    Address 192.168.1.10
    Port    80
  End
End

# virtual host www.mydomain.com (url-based session)
Service
  URL         ".*sessid=.*"
  HeadRequire "Host:.*www.mydomain.com.*"
  BackEnd
    Address 192.168.1.11
    Port    80
  End
  Session
    Type    PARM
    ID      "sessid"
    TTL     120
  End
End

# everything else (cookie-based session)
Service
  BackEnd
    Address 192.168.1.5
    Port    80
    Priority 5
  End
  BackEnd
    Address 192.168.1.6
    Port    80
    Priority 4
  End
  Session
    Type    COOKIE
    ID      "userid"
    TTL     180
  End
End

# restart
$ /etc/init.d/pound restart
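
pound can parse the configuration without starting (sketch):

# check configuration file syntax and exit
$ pound -c -f /etc/pound/pound.cfg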

from pound@cyberciti

How to cache HDDs with SSDs in Linux (using bcache)

Bcache (or block level cache) is a Linux kernel block layer cache. It allows one or more fast disk drives such as flash-based solid state drives (SSDs) to act as a cache for one or more slower hard disk drives. It is designed around the performance characteristics of SSDs.

## install
$ sudo yum install bcache-tools | sudo apt-get install bcache-tools

## using, assuming the HDD is /dev/sda and the SSD is /dev/sdb
# wipe (from util-linux) devices
$ wipefs -a /dev/sda1 ; wipefs -a /dev/sdb1

# format the backing (hdd) and cache (ssd) devices
$ make-bcache -B /dev/sda1 ; make-bcache -C /dev/sdb1

# attach the cache device to our bcache device 'bcache0'
$ echo C_Set_UUID_VALUE > /sys/block/bcache0/bcache/attach

# create and mount fs
$ mkfs.ext4 /dev/bcache0
$ mount /dev/bcache0 /mnt

# optionally use faster writeback (instead of default writethrough)
$ echo writeback > /sys/block/bcache0/bcache/cache_mode
# if the bcache device does not show up, (re)register the backing device
$ echo /dev/sda1 > /sys/fs/bcache/register

# monitor 
$ bcache-status -s
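
The sysfs interface also exposes the current cache state (sketch):

# 'clean', 'dirty' or 'no cache'
$ cat /sys/block/bcache0/bcache/state
# show how the devices stack
$ lsblk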

from bcache and/vs. LVM cache

Useful CLI tools for Linux system devops/admins and developers

Here is a list of commands used by sysadmins and developers to do just about anything from the CLI. From backup, to network configuration, compression, compilation, debugging, package management, process management, text editing, …

Backup/Copy Tools
* dd
* duplicity
* rsync/scp
* rdiff-backup
* rsnapshot
* unison

Command Interpreters (Shells)
* bash
* csh
* dash/ash
* mc
* zsh

Compiler Tools
* autoconf/automake
* clang
* gcc/ld/as/gdb
* make

Compression and Archiving Tools
* 7z
* bzip2
* gzip
* pax
* tar
* zip
* xz

Daemon Tools
* service
* systemctl/systemd doc/systemd arch/systemd debian/systemd cheatsheet
* upstart

Download Tools
* axel
* curl
* lftp
* wget

File Managers
* mc
* ranger
* tree
* vifm

Hardware Tools
* inxi
* lspci
* lshw

Logging
* ccze
* logrotate
* rsyslog

Network active monitoring
* arp-scan
* iperf
* netcat/socat
* ping/tcpping
* sprobe
* tracepath/traceroute

Network configuration
* dig/nslookup
* ip/route/ifconfig
* ipcalc/sipcalc/whatmask
* tc

Network packet sniffing
* dhcpdump
* dsniff
* iptraf
* httpry
* ngrep
* p0f
* pktstat
* snort
* tcpdump
* tshark

Network passive flow/stats
* bmon
* iftop
* lsof
* nethogs
* netstat/ss
* speedometer
* speedtest-cli
* tcptrack
* vnstat

Online Resources
* commandlinefu
* free programming books

Package Management Tools
* apt-get/aptitude/dpkg
* yum/rpm
* pacman
* zypper
* pkg(BSD)

Performance Monitoring Tools
* dstat
* iotop
* iostat
* httpry
* nethogs
* ngxtop
* ps
* sar
* smem
* top/htop/atop

Processor Management Tools
* kill/killall/pkill
* nice/renice
* pgrep
* taskset

Productivity Tools
* byobu
* cal
* cheat
* cmus
* fortune
* mutt
* pv
* screen
* screenfetch
* ssh
* tmux
* weather/WMO/ICAO
* weechat/irssi

Security Tools
* getfacl/setfacl
* lynis
* nmap
* iptables
* passwd/apg

Source Control
* git
* hg
* svn

Storage Tools
* lvm
* mount

Text Processing Tools
* awk
* diff/patch
* grep/ack
* tail/multitail/ztail
* sed

Text Editors
* emacs
* nano
* vim

from Useful CLI tools for Linux system admins

Using cloud-init and uvtool to initialize cloud instances (including local Fedora/Ubuntu cloud images)

CloudInit is the de facto multi-distribution package that handles early initialization of a cloud instance. It's a kind of “chef/puppet/kickstart/anaconda” for cloud images. cloud-init behavior can be configured via user-data, which is injected into the image from a datasource.

$ nano user-data
#cloud-config

# User and Group Management
users:
  - name: demo
    groups: sudo
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh-authorized-keys:
      - ssh-rsa ... user@example.com
groups:
  - group1
  - group2: [user1, user2]

# Change Passwords for Existing Users
chpasswd:
  list: |
    user1:password1
    user2:password2
    user3:password3
  expire: False

# Writing out arbitrary files
write_files:
  - path: /test.txt
    content: |
      Here is a line.
      Another line is here.

# Run apt or yum upgrade
package_upgrade: true

# Install arbitrary packages
packages:
  - package_1
  - package_2
  - [package_3, version_num]

# Add apt repositories
apt_mirror: http://us.archive.ubuntu.com/ubuntu/

# Configure instances ssh-keys
ssh_authorized_keys:
  - ssh-rsa ...

# Run Arbitrary Commands for More Control
runcmd:
  - [ sed, -i, -e, 's/here/there/g', some_file]
  - echo "modified some_file"
  - [cat, some_file]

# Adjust mount points mounted
mounts:
 - [ ephemeral0, /mnt, auto, "defaults,noexec" ]
 - [ sdc, /opt/data ]
 - [ xvdh, /opt/data, "auto", "defaults,nobootwait", "0", "0" ]
 - [ dd, /dev/zero ]

# Shutdown or Reboot the Server
power_state:
  timeout: 120
  delay: "+5"
  message: Rebooting in five minutes. Please save your work.
  mode: reboot
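
Since user-data is YAML, a quick syntax check avoids a boot cycle just to catch an indentation error (a sketch, assuming python with PyYAML is installed):

$ python -c 'import yaml; yaml.safe_load(open("user-data")); print("user-data: valid YAML")'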

from cloud-init scripting and cloud config examples

Injecting depends on datasource (EC2, vSphere, …)

# amazon ec2
$ ec2-run-instances --user-data-file or via magic '169.254.169.254' address

# config-drive: un-partitioned block device filesystem label 'config-2', see http://cloudinit.readthedocs.org/en/latest/topics/datasources.html#config-drive
$ mkdir -p /tmp/new-drive/openstack/latest
$ cp user_data /tmp/new-drive/openstack/latest/user_data
$ mkisofs -R -V config-2 -o data.iso /tmp/new-drive
$ rm -r /tmp/new-drive

# vSphere/No cloud
% genisoimage -o user-data.iso -rock user-data meta-data

# coreos uses config-drive, see https://coreos.com/docs/cluster-management/setup/cloudinit-cloud-config/
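
From inside a booted instance the injected user-data can usually be read back, either from the metadata service (EC2-style datasources) or from cloud-init's local copy (sketch):

$ curl -s http://169.254.169.254/latest/user-data
$ sudo cat /var/lib/cloud/instance/user-data.txt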

Fedora and Ubuntu both provide compact cloud images that are useful for spinning up small VMs quickly (much quicker than installing from a huge ISO or even a net-install).

#0 - install
$ sudo apt-get install qemu-kvm genisoimage cloud-utils | sudo yum install genisoimage qemu-kvm cloud-utils (EPEL)

#1 - download
$ export URL="https://cloud-images.ubuntu.com/utopic/current/utopic-server-cloudimg-amd64-disk1.img"
or
$ export URL="http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Atomic-20141203-21.x86_64.raw.xz"
$ export IMG=`basename $URL`
$ wget -O $IMG $URL   # or: axel -o $IMG $URL
$ [[ $IMG =~ \.xz$ ]] && xz -d $IMG

#2 - optionally convert the compressed qcow '.img' file to an uncompressed qcow2 '.raw' (otherwise decompression is done on reads)
$ [[ $IMG =~ \.img ]] && qemu-img convert -O qcow2 $IMG $IMG\.raw

#3 - create the disk with NoCloud data on it
$ { echo instance-id: iid-local01; echo local-hostname: cloudimg; } > meta-data
$ printf "#cloud-config\npassword: passw0rd\nchpasswd: { expire: False }\nssh_pwauth: True\n" > user-data
$ export SEED="seed.img" ; cloud-localds $SEED user-data
or
$ export SEED="seed.iso" ; genisoimage -output $SEED -volid cidata -joliet -rock user-data meta-data

#4 - optionally, create a new qcow image to boot, backed by your original image
$ qemu-img create -f qcow2 -b $IMG\.raw boot-dist.img

#5 - boot a kvm, optionally append '-display sdl' or '-display curses'
# use 'ubuntu'/'fedora' users for login
# use 'sudo loadkeys pt'
# (default) user-mode networking '-net user': allows outbound, needs redir for inbound
$ kvm -m 512 -drive file=boot-dist.img,if=virtio -drive file=$SEED,if=virtio \
    -net nic -net user -redir :8090::80 -redir :8022::22
# or bridged networking '-net tap': needs 'bridge-utils'
$ kvm -m 512 -drive file=boot-dist.img,if=virtio -drive file=$SEED,if=virtio \
    -net nic -net tap
$ ssh -p 8022 user@localhost

from nocloud@cloud-init, UEC Images@ubuntu and Running cloud images locally

uvtool facilitates the task of generating virtual machines (VMs) using Ubuntu cloud images. Ubuntu 14.04 and later only.

# install
$ sudo apt-get install uvtool
$ sudo usermod -a -G libvirtd username
$ newgrp libvirtd

# download image from http://cloud-images.ubuntu.com, filter by 'release/arch'
$ export RELEASE=utopic
$ uvt-simplestreams-libvirt sync release=$RELEASE arch=amd64

# create, connect and destroy a VM
$ uvt-kvm create --wait $RELEASE-test release=$RELEASE --password=passw0rd
# or same but using public key authentication
$ ssh-keygen -f .ssh/id_rsa ; uvt-kvm create --wait $RELEASE-test release=$RELEASE --ssh-public-key-file=.ssh/id_rsa.pub
$ uvt-kvm list
$ ssh ubuntu@`uvt-kvm ip $RELEASE-test`
$ uvt-kvm destroy $RELEASE-test

uvt-kvm create [options] hostname [filter]
LIBVIRT DOMAIN DEFINITION OPTIONS
'--memory/--disk/--cpu' limit resources
CLOUD-INIT CONFIGURATION OPTIONS
'--password password' alternative to public key authentication
'--run-script-once script_file' run script_file as root on the VM the first time it is booted
'--packages package_list' install the comma-separated packages on first boot
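
A sketch combining the options above ('setup.sh' and the package names are placeholders):

$ uvt-kvm create --wait $RELEASE-test release=$RELEASE \
    --memory 1024 --cpu 2 --packages htop,git --run-script-once setup.sh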

# start instance using cloudinit user-data
$ nano user-data
#cloud-config
snappy:
  ssh_enabled: True
  packages:
    - htop
$ uvt-kvm create --wait $RELEASE-test release=$RELEASE --user-data=user-data

# same but using snappy release
$ sudo apt-add-repository ppa:snappy-dev/tools ; sudo apt-get update ; sudo apt-get install uvtool
$ uvt-simplestreams-libvirt sync --snappy flavor=core release=devel
$ uvt-kvm create --wait snappy-test flavor=core
$ uvt-kvm ssh snappy-test
$ uvt-kvm destroy snappy-test

from Snappy Ubuntu Core and cloud-init