Docker
1. Installation
On CentOS, check the SELinux status with sestatus and disable enforcing with setenforce 0
Docker
curl -sSL https://get.docker.com | sh
# add your user (here: andre) to the docker group
sudo usermod -aG docker andre
#
# new repo was added...
#
$ cat /etc/apt/sources.list.d/docker.list
deb [arch=amd64] https://apt.dockerproject.org/repo debian-jessie main
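To verify the installation (log out and back in first, so the group change takes effect):
docker run hello-world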
Compose
https://github.com/docker/compose/releases
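A typical manual install downloads the binary from the releases page; the version below is only an example, check the page for the current one:
sudo curl -L https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose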
2. First Steps
Basics
Hello World:
docker run debian echo hello world
Start Bash in container and attach:
docker run -it debian bash
root@02156d24da81:/#
Show containers:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
02156d24da81 debian "/bin/bash" About a minute ago Exited (0) 6 seconds ago practical_kilby
574cd4f977f4 debian "echo hello world" 5 minutes ago Exited (0) 5 minutes ago stupefied_bartik
Filter the inspect JSON with a Go template, e.g. for the container IP:
docker inspect --format '{{.NetworkSettings.IPAddress}}' awesome_swartz
172.17.0.2
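Any other field from the inspect JSON can be extracted the same way, e.g. the running state:
docker inspect --format '{{.State.Running}}' awesome_swartz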
Show file system diff:
docker diff awesome_swartz
A /test
C /bin
D /bin/echo
Show logs:
docker logs awesome_swartz
root@99c4fcfc87bf:/# rm /bin/echo
root@99c4fcfc87bf:/# touch test
Delete container:
docker rm awesome_swartz
Delete all exited containers:
docker rm -v $(docker ps -aq -f status=exited)
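On Docker 1.13 and newer the same cleanup can be done with:
docker container prune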
Check Union File System:
docker info | grep Storage
# Storage Driver: overlay2
Redis Example
Get official redis image from the library:
docker pull redis
Start the redis container as a daemon with -d:
docker run --name myredis -d redis
Start a second container for running redis-cli. The --link creates an /etc/hosts entry for myredis:
docker run --rm -it --link myredis redis /bin/bash
root@14f64e1dcf91:/data# redis-cli -h myredis
myredis:6379> ping
PONG
myredis:6379> set key 1234
OK
myredis:6379> save
OK
The official redis Dockerfile declares VOLUME /data. With another container that imports the /data volume from myredis and mounts a local backup directory, we can create a backup of this data:
mkdir backup
docker run --rm --volumes-from myredis -v $(pwd)/backup:/backup debian cp /data/dump.rdb /backup/
ls backup
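Restoring works the same way in reverse; a sketch (redis must be restarted afterwards so it loads the dump):
docker run --rm --volumes-from myredis -v $(pwd)/backup:/backup debian cp /backup/dump.rdb /data/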
3. Fundamentals
Build Context
The Build Context contains the Dockerfile as well as the files which are copied into the image. It can be a local directory like “.”, a “git://…” repo, or “github.com:/…”. But it is recommended to clone a repo locally instead of specifying a Git URL directly.
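For example (image name and repository URL are placeholders):
docker build -t myimage .
docker build -t myimage https://github.com/example/repo.git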
Image Layers
Every instruction in a Dockerfile results in a new image layer in the Union File System:
docker history 4hel/cowtainer
IMAGE CREATED CREATED BY SIZE COMMENT
f5937212a838 6 hours ago /bin/sh -c #(nop) ENTRYPOINT ["/entrypoin... 0 B
ebb5a3d4d1da 6 hours ago /bin/sh -c #(nop) COPY file:423f47cb060a1b... 108 B
88b8b87fb529 6 hours ago /bin/sh -c apt-get update && apt-get insta... 48.1 MB
978d85d02b87 10 days ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0 B
<missing> 10 days ago /bin/sh -c #(nop) ADD file:41ac8d85ee35954... 123 MB
Caching
A layer is taken from the cache when a layer with the same parent and the same Dockerfile instruction already exists. Consequently, RUN git clone might be served from the cache and not fetch the latest result.
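To force a fresh build in such cases, the cache can be bypassed entirely (image name is a placeholder):
docker build --no-cache -t myimage .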
Base Image
debian is a normal general-purpose base. busybox is really small. But scratch is a completely empty file system, to which only a single static binary is added.
To update to the latest version of the base image one needs to explicitly run docker pull image.
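A minimal sketch of such a scratch image, assuming a statically linked binary named hello in the build context:
FROM scratch
COPY hello /hello
ENTRYPOINT ["/hello"]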
Dockerfile Reference
Can be found here: https://docs.docker.com/engine/reference/builder/
Volumes
Volumes are either created at runtime with docker run -v /data ... or in a Dockerfile using VOLUME /data.
By default they are mounted from a directory below /var/lib/docker, but the mount point in the host file system can be explicitly changed with -v HOSTDIR:CONTAINERDIR. In the Dockerfile it is not possible to set the host mount point, for security and flexibility reasons.
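For example (the host directory is an arbitrary choice):
# anonymous volume below /var/lib/docker
docker run -v /data debian touch /data/file
# bind mount to an explicit host directory
docker run -v $(pwd)/data:/data debian ls /data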
Exec
In an entrypoint.sh script it is important to use bash exec
in order to replace the bash process instead of creating a child process. This guarantees that signals like SIGTERM are not swallowed by the parent process.
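A minimal entrypoint.sh sketch:
#!/bin/bash
# ... setup work ...
# replace the shell with the actual command, so it becomes PID 1 and receives SIGTERM directly
exec "$@"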
Socket Mounting
When running for example Jenkins in a container, it can be necessary for the container to create sibling containers on the host. This is made possible with socket mounting: the docker CLI client needs to be available in the Jenkins container, with the jenkins user in /etc/sudoers. The IPC socket file needs to be mounted inside the Jenkins container: docker run -v /var/run/docker.sock:/var/run/docker.sock ...
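A quick way to test this, assuming the official docker image (which ships the CLI):
# the inner docker ps talks to the host daemon via the mounted socket
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker docker ps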
4. Logging
Default Output
By default, the STDOUT and STDERR of the process are logged:
docker run --name logtest debian bash -c 'echo "stdout"; echo "stderr" >&2'
docker logs logtest
# stdout
# stderr
docker rm logtest
To follow the stream, -f is used:
docker run -d --name streamtest debian bash -c 'while true; do echo "tick"; sleep 1; done;'
18ea0bef55084e4061b69fdb74274a7e12bc12352918ff7481788c385df7dd1d
docker logs -f streamtest
# tick
# tick
# tick
# ...
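Stop and remove the detached test container afterwards:
docker rm -f streamtest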
Log Driver
The argument --log-driver can be passed to docker run:
- json-file: standard logging as shown above
- syslog: using the standard syslog network protocol
- journald: the driver for the systemd journal
- gelf: using the Graylog Extended Log Format (GELF)
- fluentd: forwarding logs to fluentd
- none: disable logging
docker run --log-driver fluentd --log-opt tag="debug." -p 8080:8080 mylogger
docker run --log-driver syslog --log-opt syslog-facility=local0 --log-opt tag=mylogger -p 8080:8080 mylogger
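The daemon's default log driver can be checked with a Go template on docker info:
docker info --format '{{.LoggingDriver}}'
# json-file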
5. Limit a container’s resources
By default, a container has no resource constraints and can use as much of a given resource as the host’s kernel scheduler allows.
Memory
hard limit: Docker enforces that the container cannot use more than the given amount of memory.
soft limit: only enforced under certain conditions, e.g. when the host kernel detects low memory or contention. It is not guaranteed that the container does not exceed the limit.
The options take a positive integer, followed by a suffix of b, k, m, or g:
-m or --memory=: the amount of memory the container can use (hard limit)
--memory-reservation: specifies a soft limit smaller than --memory
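For example, a 512 MB hard limit combined with a 256 MB soft limit:
docker run -it -m 512m --memory-reservation 256m ubuntu /bin/bash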
There is also --kernel-memory, which relates to cgroup kernel memory accounting: kernel.org
In theory, setting a kernel limit lower than the user limit would allow overcommitting memory per cgroup. But in the cgroup-v1 implementation this does not work. It is, however, possible to set a kernel limit >= the user limit.
CPU
The CFS (Completely Fair Scheduler) is the Linux kernel CPU scheduler for normal Linux processes.
Docker 1.13 or higher:
--cpus=<value>
Specify how many of the available CPUs the container can use:
docker run -it --cpus=".5" ubuntu /bin/bash
Docker 1.12 or lower:
docker run -it --cpu-period=100000 --cpu-quota=50000 ubuntu /bin/bash
Here the scheduler period is set to 100,000 microseconds and the container is allowed to run for up to 50,000 microseconds per period, i.e. half a CPU.
--cpu-shares
Set this flag to a value greater or less than the default of 1024 to increase or reduce the container's weight. It only prioritizes CPU usage; it does not guarantee or reserve CPU access.
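For example (image name is a placeholder), under CPU contention the first container gets four times the weight of the second:
docker run -d --cpu-shares 2048 --name high-prio myimage
docker run -d --cpu-shares 512 --name low-prio myimage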