Docker is an open-source project that automates the deployment of applications inside software containers. These application containers are similar to lightweight virtual machines, as they can be run in isolation from each other and from the host.
Docker requires features present in recent Linux kernels to function properly, so on macOS and Windows hosts a virtual machine running Linux is required for Docker to operate. Currently the main method of installing and setting up this virtual machine is Docker Toolbox, which uses VirtualBox internally, but there are plans to integrate this functionality into Docker itself, using the native virtualisation features of each operating system. On Linux systems Docker runs natively on the host itself.
In the commands below, <container>, container id, or <CONTAINER_NAME> denote a container; in all these places you can pass either a container name or a container ID to specify a container.

Swarm mode implements cluster management and orchestration features built into the Docker Engine.
For more official Docker documentation regarding Swarm, see the Swarm mode overview.
Initialize a swarm:
docker swarm init [OPTIONS]
Join a swarm as a node and/or manager:
docker swarm join [OPTIONS] HOST:PORT
Create a new service:
docker service create [OPTIONS] IMAGE [COMMAND] [ARG...]
Display detailed information on one or more services:
docker service inspect [OPTIONS] SERVICE [SERVICE...]
List services:
docker service ls [OPTIONS]
Remove one or more services:
docker service rm SERVICE [SERVICE...]
Scale one or multiple replicated services:
docker service scale SERVICE=REPLICAS [SERVICE=REPLICAS...]
List the tasks of one or more services:
docker service ps [OPTIONS] SERVICE [SERVICE...]
Update a service:
docker service update [OPTIONS] SERVICE
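As a sketch, a minimal swarm setup using these commands might look like the following; the IP address, the <TOKEN> placeholder, and the service name are illustrative, and the commands require a running Docker daemon on each node:

```shell
# On the first node: initialize the swarm (prints a join token)
$ docker swarm init --advertise-addr 192.168.99.100

# On each additional node: join using the token printed above
$ docker swarm join --token <TOKEN> 192.168.99.100:2377

# Back on a manager: create a replicated service, then scale and inspect it
$ docker service create --name web --replicas 2 -p 80:80 nginx
$ docker service scale web=5
$ docker service ps web
```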
The docker-machine command-line tool manages remote hosts running Docker. It manages the full machine life cycle using provider-specific drivers, and can be used to select an "active" machine. Once selected, the active machine can be used as if it were the local Docker Engine.
Dockerfiles are of the form:
# This is a comment
INSTRUCTION arguments
Lines starting with # are treated as comments. A Dockerfile must begin with a FROM instruction to specify the base image.

When building a Dockerfile, the Docker client sends a "build context" to the Docker daemon. The build context includes all files and folders in the same directory as the Dockerfile. COPY and ADD operations can only use files from this context.
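A minimal sketch of such a Dockerfile; the base image tag and file names are illustrative:

```dockerfile
# Base image must come first
FROM alpine:3.4

# entrypoint.sh must live inside the build context
# (the same directory as this Dockerfile)
COPY entrypoint.sh /usr/local/bin/entrypoint.sh

CMD ["/usr/local/bin/entrypoint.sh"]
```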
Some Dockerfiles may start with:

# escape=`

This instructs the Docker parser to use ` as the escape character instead of \. This is mostly useful for Windows Dockerfiles, where \ is the path separator.
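A sketch of a Windows Dockerfile using this directive; the image tag and paths are illustrative:

```dockerfile
# escape=`
FROM microsoft/windowsservercore

# With the escape character set to `, backslashes in Windows paths
# no longer need escaping, and ` becomes the line-continuation character
COPY config.txt C:\config\config.txt
RUN dir C:\config `
    && echo done
```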
Usually each container should host one process. If you need multiple processes in one container (e.g. an SSH server to log in to your running container instance), you might be tempted to write your own shell script that starts those processes. In that case you would have to take care of signal handling yourself (e.g. forwarding a caught SIGINT to the child processes of your script), which is not really what you want. A simple solution is to use supervisord as the container's root process; it takes care of signal handling and of its child processes' lifetimes.
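A sketch of such a configuration, assuming supervisord and the listed programs are installed in the image; the paths and program names are illustrative:

```ini
; /etc/supervisor/conf.d/app.conf
[supervisord]
nodaemon=true                    ; keep supervisord in the foreground as PID 1

[program:sshd]
command=/usr/sbin/sshd -D        ; run sshd in the foreground

[program:app]
command=/usr/local/bin/my-app    ; hypothetical application binary
```

The Dockerfile would then end with something like CMD ["/usr/bin/supervisord"] so that supervisord becomes the container's root process.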
But keep in mind that this is not the "Docker way". To achieve this example the Docker way, you would log in to the Docker host (the machine the container runs on) and run docker exec -it container_name /bin/bash. This command opens a shell inside the container, much as SSH would.
People new to Docker often don't realize that Docker filesystems are ephemeral by default. If you start a container from a Docker image you'll get something that on the surface behaves much like a virtual machine: you can create, modify, and delete files. However, unlike a virtual machine, if you remove the container and start a new one from the same image, all your changes will be lost: any files you previously deleted will be back, and any new files or edits you made won't be present.
Volumes in Docker containers allow for persistent data, and for sharing host-machine data inside a container.
An example of a Docker network that blocks traffic. Use it as the network when starting the container with --net, or attach it with docker network connect.
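One way to get such a network is Docker's --internal flag, which creates a network with no connectivity to the outside world; the network and container names below are illustrative:

```shell
# Create an isolated network; containers on it cannot reach external hosts
$ docker network create --internal no-internet

# Use it when starting a container...
$ docker run --net=no-internet -it alpine:3.4 sh

# ...or attach a running container to it
$ docker network connect no-internet my-container
```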
When writing a Dockerfile, order the instructions roughly as follows:

1. Base image (FROM)
2. Metadata (e.g. MAINTAINER, LABEL)
3. Installing system dependencies (e.g. apt-get install, apk add)
4. Copying the dependency file (e.g. bower.json, package.json, build.gradle, requirements.txt)
5. Installing project dependencies (e.g. npm install, pip install)
6. Copying the entire codebase
7. Setting up default commands (e.g. CMD, ENTRYPOINT, ENV, EXPOSE)

These orderings are made to optimize build time using Docker's built-in caching mechanism.
Rule of thumb:
Parts that change often (e.g. the codebase) should be placed near the bottom of the Dockerfile; conversely, parts that rarely change (e.g. dependencies) should be placed near the top.
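For instance, a Dockerfile for a hypothetical Node.js project following this rule might look like:

```dockerfile
FROM node:6

WORKDIR /app

# Rarely changes: copy only the dependency manifest first,
# so the npm install layer stays cached between builds
COPY package.json /app/
RUN npm install

# Changes often: copy the codebase last
COPY . /app

CMD ["npm", "start"]
```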
The host and bridge network drivers can connect containers on a single Docker host. To allow containers to communicate beyond one machine, create an overlay network. The steps to create the network depend on how your Docker hosts are managed.
docker network create --driver overlay
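In a swarm, for example, the overlay network is created on a manager node and services can then attach to it; the network and service names are illustrative:

```shell
# On a swarm manager: create the overlay network
$ docker network create --driver overlay my-overlay

# Containers of services attached to it can communicate across hosts
$ docker service create --name web --network my-overlay nginx
```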
Persistence is created in Docker containers using volumes, and Docker has many ways to deal with them. Named volumes are particularly convenient: they can be created and referenced by name, independently of any container or host path.
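A sketch of named-volume usage; the volume name and paths are illustrative:

```shell
# Create a named volume (it is also created implicitly on first use)
$ docker volume create --name my-data

# Data written to the mount point outlives the container
$ docker run -v my-data:/var/lib/data alpine:3.4 touch /var/lib/data/hello

# A second container mounting the same volume sees the same data
$ docker run -v my-data:/var/lib/data alpine:3.4 ls /var/lib/data
```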
Configuring iptables rules for Docker containers is a bit tricky. At first, you would think that "classic" firewall rules should do the trick.
For example, let's assume that you have configured an nginx-proxy container plus several service containers to expose some personal web services via HTTPS. Then a rule like this should give access to your web services only to IP XXX.XXX.XXX.XXX.
$ iptables -A INPUT -i eth0 -p tcp -s XXX.XXX.XXX.XXX -j ACCEPT
$ iptables -P INPUT DROP
It won't work: your containers are still accessible to everyone.

Indeed, Docker containers are not host services. They rely on a virtual network inside your host, and the host acts as a gateway for this network. And for gateways, routed traffic is not handled by the INPUT chain, but by the FORWARD chain, which makes the rule above ineffective.

But that's not all. The Docker daemon creates many iptables rules when it starts, to do its magic concerning container network connectivity. In particular, a DOCKER chain is created to handle rules concerning containers, and traffic from the FORWARD chain is forwarded to this new chain.
$ iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-ISOLATION all -- anywhere anywhere
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.18.0.4 tcp dpt:https
ACCEPT tcp -- anywhere 172.18.0.4 tcp dpt:http
Chain DOCKER-ISOLATION (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
If you check the official documentation (https://docs.docker.com/v1.5/articles/networking/), a first solution is given to limit Docker container access to one particular IP.
$ iptables -I DOCKER -i ext_if ! -s 8.8.8.8 -j DROP
Indeed, adding a rule at the top of the DOCKER chain is a good idea. It does not interfere with the rules automatically configured by Docker, and it is simple. But it has two major shortcomings: it can only allow a single IP address, and it drops return traffic for connections initiated by the containers themselves (for example, a container fetching packages from the Internet).
For the first shortcoming, we can use ipset: instead of allowing one IP in the rule above, we allow all IPs from a predefined ipset. As a bonus, the ipset can be updated without needing to redefine the iptables rule.
$ iptables -I DOCKER -i ext_if -m set ! --match-set my-ipset src -j DROP
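The set itself is managed with the ipset command; the set name my-ipset matches the rule above, and the addresses are illustrative (these commands require root):

```shell
# Create a set of allowed source addresses
$ ipset create my-ipset hash:ip

# Membership can change at any time without touching the iptables rule
$ ipset add my-ipset 203.0.113.10
$ ipset add my-ipset 203.0.113.42
$ ipset del my-ipset 203.0.113.42

# Inspect the current members
$ ipset list my-ipset
```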
The second shortcoming is a canonical problem for firewalls: if you are allowed to contact a server through a firewall, then the firewall should authorize the server to respond to your request. This can be done by accepting packets that are related to an established connection. For the Docker logic, it gives:
$ iptables -I DOCKER -i ext_if -m state --state ESTABLISHED,RELATED -j ACCEPT
The last observation focuses on one point: the order of the iptables rules is essential. The additional logic that ACCEPTs some connections (including the one concerning ESTABLISHED connections) must be placed at the top of the DOCKER chain, before the DROP rule which denies all remaining connections not matching the ipset.

As we use the -I option of iptables, which inserts rules at the top of the chain, the previous iptables rules must be inserted in reverse order:
# Drop rule for non-matching IPs
$ iptables -I DOCKER -i ext_if -m set ! --match-set my-ipset src -j DROP
# Then accept rule for established connections
$ iptables -I DOCKER -i ext_if -m state --state ESTABLISHED,RELATED -j ACCEPT
$ iptables -I DOCKER -i ext_if ... ACCEPT   # then the 3rd custom accept rule
$ iptables -I DOCKER -i ext_if ... ACCEPT   # then the 2nd custom accept rule
$ iptables -I DOCKER -i ext_if ... ACCEPT   # then the 1st custom accept rule
With all of this in mind, you can now check the examples which illustrate this configuration.