Docker

Topics related to Docker:

Getting started with Docker

Docker is an open-source project that automates the deployment of applications inside software containers. These application containers are similar to lightweight virtual machines, as they run in isolation from each other and from the host.

Docker requires features present in recent Linux kernels to function properly, so on macOS and Windows hosts a virtual machine running Linux is required for Docker to operate. Currently the main way to install and set up this virtual machine is via Docker Toolbox, which uses VirtualBox internally, but there are plans to integrate this functionality into Docker itself using the native virtualization features of the operating system. On Linux systems Docker runs natively on the host itself.

Running containers

Managing containers

  • In the examples above, whenever container is a parameter of the docker command, it appears as <container>, container id, or <CONTAINER_NAME>. In all of these places you can pass either a container name or a container ID to specify a container.
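For example (the container name `web` and the ID shown are hypothetical), these two invocations address the same container:

```shell
# Stop a container by name
docker stop web

# Stop the same container by its (full or shortened) ID
docker stop 4c01db0b339c
```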

Managing images

Building images

Docker swarm mode

Swarm mode implements the following features:

  • Cluster management integrated with Docker Engine
  • Decentralized design
  • Declarative service model
  • Scaling
  • Desired state reconciliation
  • Multi-host networking
  • Service discovery
  • Load balancing
  • Secure design by default
  • Rolling updates

For more official Docker documentation regarding Swarm visit: Swarm mode overview


Swarm Mode CLI Commands


Initialize a swarm

docker swarm init [OPTIONS]

Join a swarm as a node and/or manager

docker swarm join [OPTIONS] HOST:PORT

Create a new service

docker service create [OPTIONS] IMAGE [COMMAND] [ARG...]

Display detailed information on one or more services

docker service inspect [OPTIONS] SERVICE [SERVICE...]

List services

docker service ls [OPTIONS]

Remove one or more services

docker service rm SERVICE [SERVICE...]

Scale one or multiple replicated services

docker service scale SERVICE=REPLICAS [SERVICE=REPLICAS...]

List the tasks of one or more services

docker service ps [OPTIONS] SERVICE [SERVICE...]

Update a service

docker service update [OPTIONS] SERVICE
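Taken together, a minimal swarm workflow might look like the following sketch (the advertise address, service name `web`, and replica counts are illustrative):

```shell
# On the first node: initialize the swarm; this node becomes a manager
docker swarm init --advertise-addr 192.168.99.100

# On the other nodes: join with the token printed by `swarm init`
#   docker swarm join --token <TOKEN> 192.168.99.100:2377

# Create a replicated service from the official nginx image
docker service create --name web --replicas 2 -p 80:80 nginx

# Scale it, list its tasks, then remove it
docker service scale web=5
docker service ps web
docker service rm web
```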

Docker Data Volumes

Debugging a container

Inspecting a running container

Docker Machine

docker-machine manages remote hosts running Docker.

The docker-machine command-line tool manages a machine's full life cycle using provider-specific drivers. It can be used to select an "active" machine. Once selected, the active machine can be used as if it were the local Docker Engine.
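A typical session might look like this sketch (the machine name `default` is arbitrary):

```shell
# Create a new machine using the VirtualBox driver
docker-machine create --driver virtualbox default

# Point the local docker client at that machine's engine
eval "$(docker-machine env default)"

# From now on, docker commands run against the remote engine
docker ps
```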

Dockerfiles

Dockerfiles are of the form:

# This is a comment
INSTRUCTION arguments
  • Comments start with a #
  • Instructions are upper case only
  • The first instruction of a Dockerfile must be FROM to specify the base image

When building a Dockerfile, the Docker client sends a "build context" to the Docker daemon. The build context includes all files and folders in the same directory as the Dockerfile. COPY and ADD operations can only use files from this context.
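A minimal Dockerfile following this form might look like the following sketch (the image tag and file names are illustrative):

```dockerfile
# The base image must come first
FROM alpine:3.4

# COPY can only reference files inside the build context
COPY app.sh /usr/local/bin/app.sh

# Default command run when a container starts from this image
CMD ["/usr/local/bin/app.sh"]
```

It would be built from the directory containing the Dockerfile with docker build -t myapp . where the trailing . names the build context.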


Some Dockerfiles may start with:

# escape=`

This instructs the Docker parser to use ` as the escape character instead of \. This is mostly useful for Dockerfiles on Windows.
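A sketch of a Windows Dockerfile using this directive (the base image follows the official documentation example; the file name is illustrative):

```dockerfile
# escape=`
FROM microsoft/nanoserver

# Backslashes in Windows paths no longer need escaping
COPY testfile.txt c:\
RUN dir c:\
```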

Docker network

Data Volumes and Data Containers

Docker Engine API

Multiple processes in one container instance

Usually each container should host one process. If you need multiple processes in one container (e.g. an SSH server to log in to your running container instance), you might be tempted to write your own shell script that starts those processes. In that case you would have to take care of signal handling yourself (e.g. forwarding a caught SIGINT to the child processes of your script). That's not really what you want. A simple solution is to use supervisord as the container's root process, which takes care of signal handling and of its child processes' lifetimes.

But keep in mind that this is not the "Docker way". To achieve this example the Docker way, you would log in to the Docker host (the machine the container runs on) and run docker exec -it container_name /bin/bash. This command opens a shell inside the container, much as ssh would.
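A minimal supervisord setup for the multi-process case might look like this sketch (program names and paths are illustrative):

```ini
; /etc/supervisor/conf.d/supervisord.conf
[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:app]
command=/usr/local/bin/app
```

with the Dockerfile ending in CMD ["/usr/bin/supervisord"], so that supervisord runs as PID 1 and supervises both processes.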

Docker Registry

Checkpoint and Restore Containers

Docker stats all running containers

Concept of Docker Volumes

People new to Docker often don't realize that Docker filesystems are temporary by default. If you start up a Docker image you'll get a container that on the surface behaves much like a virtual machine. You can create, modify, and delete files. However, unlike a virtual machine, if you remove the container and start a new one from the same image, all your changes will be lost -- any files you previously deleted will be back, and any new files or edits you made won't be present.

Volumes in Docker containers allow for persistent data, and for sharing host-machine data inside a container.
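A quick sketch of this persistence (the volume name `my-data` is arbitrary):

```shell
# Create a named volume
docker volume create my-data

# Write a file into the volume from one container
docker run --rm -v my-data:/data alpine sh -c 'echo hello > /data/greeting'

# Read it back from a brand-new container: the data persisted
docker run --rm -v my-data:/data alpine cat /data/greeting
```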

Docker events

Restricting container network access

Example Docker networks that block traffic. Use as the network when starting the container with --net or docker network connect.

run consul in docker 1.12 swarm

Dockerfile contents ordering

  1. Base image declaration (FROM)
  2. Metadata (e.g. MAINTAINER, LABEL)
  3. Installing system dependencies (e.g. apt-get install, apk add)
  4. Copying app dependencies file (e.g. bower.json, package.json, build.gradle, requirements.txt)
  5. Installing app dependencies (e.g. npm install, pip install)
  6. Copying entire code base
  7. Setting up default runtime configs (e.g. CMD, ENTRYPOINT, ENV, EXPOSE)

These orderings are made for optimizing build time using Docker's built-in caching mechanism.

Rules of thumb:

Parts that change often (e.g. the codebase) should be placed near the bottom of the Dockerfile; parts that rarely change (e.g. dependencies) should be placed near the top.
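For a Node.js app, the ordering above might look like this sketch (package and path names are illustrative):

```dockerfile
# 1. Base image
FROM node:6

# 2. Metadata
LABEL maintainer="you@example.com"

# 3. System dependencies (rarely change)
RUN apt-get update && apt-get install -y imagemagick

# 4. App dependency manifest only
WORKDIR /app
COPY package.json /app/

# 5. Install app dependencies (cached until package.json changes)
RUN npm install

# 6. Entire code base (changes often, so it comes last)
COPY . /app

# 7. Default runtime configuration
EXPOSE 3000
CMD ["npm", "start"]
```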

docker inspect: getting various fields for key:value pairs and elements of a list

passing secret data to a running container

Connecting Containers

Logging

Creating a service with persistence

Persistence is created in Docker containers using volumes. Docker has many ways to deal with volumes. Named volumes are very convenient:

  • They persist even when the container is removed with the -v option.
  • The only way to delete a named volume is an explicit call to docker volume rm.
  • They can be shared among containers without linking or the --volumes-from option.
  • They don't have the permission issues that host-mounted volumes have.
  • They can be manipulated using the docker volume command.
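The lifecycle of a named volume might be sketched as follows (the volume and container names are arbitrary):

```shell
# Create, list and inspect a named volume
docker volume create app-data
docker volume ls
docker volume inspect app-data

# Share it between containers without linking or --volumes-from
docker run -d --name writer -v app-data:/shared alpine sleep 3600
docker run --rm -v app-data:/shared alpine ls /shared

# An explicit rm is the only way to delete the volume
docker rm -f writer
docker volume rm app-data
```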

Docker in Docker

security

How to debug when docker build fails

Docker private/secure registry with API v2

Running Simple Node.js Application

Running services

Iptables with Docker

The problem

Configuring iptables rules for Docker containers is a bit tricky. At first, you would think that "classic" firewall rules should do the trick.

For example, let's assume that you have configured an nginx-proxy container plus several service containers to expose some personal web services via HTTPS. You might expect a rule like this to give access to your web services only to the IP XXX.XXX.XXX.XXX:

$ iptables -A INPUT -i eth0 -p tcp -s XXX.XXX.XXX.XXX -j ACCEPT
$ iptables -P INPUT DROP

It won't work: your containers are still accessible to everyone.

Indeed, Docker containers are not host services. They rely on a virtual network inside your host, and the host acts as a gateway for that network. On gateways, routed traffic is not handled by the INPUT chain but by the FORWARD chain, which makes the rule above ineffective.

But that's not all. The Docker daemon creates a number of iptables rules when it starts, to do its magic concerning container network connectivity. In particular, a DOCKER chain is created to handle rules concerning containers, by forwarding traffic from the FORWARD chain to this new chain.

$ iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy DROP)
target     prot opt source               destination
DOCKER-ISOLATION  all  --  anywhere             anywhere
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain DOCKER (2 references)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             172.18.0.4           tcp dpt:https
ACCEPT     tcp  --  anywhere             172.18.0.4           tcp dpt:http

Chain DOCKER-ISOLATION (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere
DROP       all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

The solution

If you check the official documentation (https://docs.docker.com/v1.5/articles/networking/), a first solution is given to limit Docker container access to one particular IP.

$ iptables -I DOCKER -i ext_if ! -s 8.8.8.8 -j DROP

Indeed, adding a rule at the top of the DOCKER chain is a good idea. It does not interfere with the rules automatically configured by Docker, and it is simple. But it has major shortcomings:

  • First, what if you need access from two IPs instead of one? Here only one source IP can be accepted; others will be dropped with no way to prevent that.
  • Second, what if your container needs access to the Internet? Practically no request will succeed, as only the server 8.8.8.8 could respond to it.
  • Finally, what if you want to add other logic? For example, giving any user access to your web server over HTTP, but limiting everything else to a particular IP.

For the first observation, we can use ipset. Instead of allowing one IP in the rule above, we allow all IPs from a predefined ipset. As a bonus, the ipset can be updated without having to redefine the iptables rule.

$ iptables -I DOCKER -i ext_if -m set ! --match-set my-ipset src -j DROP
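The ipset referenced by the rule must exist first; a sketch of creating one (the set name matches the rule above, the member IPs are illustrative):

```shell
# Create a hash-of-IPs set and populate it
ipset create my-ipset hash:ip
ipset add my-ipset 8.8.8.8
ipset add my-ipset 203.0.113.42

# Members can be added or removed later without touching the iptables rule
ipset del my-ipset 203.0.113.42
```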

For the second observation, this is a canonical problem for firewalls: if you are allowed to contact a server through a firewall, then the firewall should allow the server to respond to your request. This can be done by authorizing packets which are related to an established connection. For the Docker logic, it gives:

$ iptables -I DOCKER -i ext_if -m state --state ESTABLISHED,RELATED -j ACCEPT

The last observation focuses on one point: the order of iptables rules is essential. Indeed, the additional logic to ACCEPT some connections (including the one concerning ESTABLISHED connections) must be put at the top of the DOCKER chain, before the DROP rule which denies all remaining connections not matching the ipset.

As we use the -I option of iptables, which inserts rules at the top of the chain, the rules above must be inserted in reverse order:

# Drop rule for non-matching IPs
$ iptables -I DOCKER -i ext_if -m set ! --match-set my-ipset src -j DROP
# Then the accept rule for established connections
$ iptables -I DOCKER -i ext_if -m state --state ESTABLISHED,RELATED -j ACCEPT
$ iptables -I DOCKER -i ext_if ... ACCEPT # Then 3rd custom accept rule
$ iptables -I DOCKER -i ext_if ... ACCEPT # Then 2nd custom accept rule
$ iptables -I DOCKER -i ext_if ... ACCEPT # Then 1st custom accept rule

With all of this in mind, you can now check the examples which illustrate this configuration.

Docker --net modes (bridge, host, mapped container and none).

How to Setup Three Node Mongo Replica using Docker Image and Provisioned using Chef