Docker already has official documentation with tutorials and examples on how to set up your own Docker Swarm nodes and cluster. This post is intended for those who have a fundamental grasp of Docker and are too lazy to read the documents. It includes minimal explanation and focuses primarily on configuration; it is written as a series of instructions rather than a full-fledged post.
This post outlines the setup of docker-machine to provision remote hosts (generic and Microsoft Azure). Docker Swarm setup with and without docker-machine is also discussed.
To use Docker Machine to provision hosts on Microsoft Azure, first make sure that you have the correct subscription ID for your Azure account.
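If you are unsure of your subscription ID, the (classic) Azure CLI used later in this post can print it. A sketch, assuming that CLI is installed and you are logged in:

```shell
# List the subscriptions tied to your account; the Id column holds the
# value to pass to --azure-subscription-id.
$ azure account list
```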
$ docker-machine create --driver azure \
    --azure-subscription-id xxxx-xxxx-xxxx-xxxx \
    --azure-image canonical:UbuntuServer:16.04.0-LTS:latest \
    --azure-location eastus \
    --azure-resource-group DockerSwarm \
    --azure-size Basic_A0 \
    --azure-open-port 80 \
    dockertemp1
To get the latest osImage for the VM, use Azure CLI and execute:
$ azure vm image list-skus
This will return :
info:    Executing command vm image list-skus
Location: eastus
Publisher: canonical
Offer: ubuntuserver
+ Getting virtual machine image skus (Publisher:"canonical" Offer:"ubuntuserver" Location:"eastus")
data:    Publisher  Offer         sku                Location
data:    ---------  ------------  -----------------  --------
data:    canonical  ubuntuserver  12.04.2-LTS        eastus
data:    canonical  ubuntuserver  12.04.3-LTS        eastus
data:    canonical  ubuntuserver  12.04.4-LTS        eastus
data:    canonical  ubuntuserver  12.04.5-DAILY-LTS  eastus
data:    canonical  ubuntuserver  12.04.5-LTS        eastus
data:    canonical  ubuntuserver  12.10              eastus
data:    canonical  ubuntuserver  14.04-beta         eastus
With Azure, the network security group needs to be configured to allow inbound traffic on port 2376. The Azure driver should handle this for you.
If you already have a host that is running Docker and you want to incorporate it into your docker-machine routine, this section is for you.
We need to establish a communication channel first. This part is a bit tricky since we are using SSH as the protocol for communication. You will need to make sure that the machines can talk to each other using SSH.
In some cases, you may have to disable password-based login on the machines. Try this if you get i/o timeout or similar errors regarding communication issues.
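One way to do this (a sketch; the config path and service name vary by distro) is to install your key on the host and then turn off password authentication in sshd_config:

```shell
# On the deployment machine: copy your public key to the host.
$ ssh-copy-id dockeruser@<ip address>

# On the host: disable password logins in /etc/ssh/sshd_config ...
PasswordAuthentication no

# ... then reload the SSH daemon (service name may differ by distro).
$ sudo service ssh reload
```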
The machine that is doing the deploying will be referred to as the deployment machine and the machine that is going to be deployed is the host.
$ docker-machine create \
    --driver generic \
    --generic-ip-address=<ip address> \
    --generic-ssh-key ~/.ssh/id_rsa \
    --generic-ssh-user dockeruser \
    --generic-ssh-port 22 \
    <machine name>
- <ip address> is the public IP address of the host.
- <machine name> is the name that you give to your Docker machine.
- the SSH key used is the private key at the default location (~/.ssh/id_rsa); the matching public key must already be in the host's authorized_keys.
docker-machine communicates with the host via port 2376 (the default). Ensure that this port is open and not blocked by your firewall.
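Before provisioning, you can probe reachability from the deployment machine with bash's built-in /dev/tcp. A sketch (127.0.0.1 stands in for your host's IP here only so the snippet is self-contained):

```shell
# Probe whether the Docker daemon port on a host is reachable.
# Substitute your host's IP for HOST.
HOST=127.0.0.1
PORT=2376
if timeout 3 bash -c "exec 3<>/dev/tcp/${HOST}/${PORT}" 2>/dev/null; then
  STATUS=open
else
  STATUS=closed
fi
echo "port ${PORT} on ${HOST}: ${STATUS}"
```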
Below are two methods to create a swarm on your own hosts without using Docker Machine.
To generate a cluster token on the Swarm master:
$ docker run --rm swarm create
Take note of the last line of the output, which is a token (cluster ID), and copy it to a secure location.
Then install Swarm on the other nodes (hosts):
$ docker run -d \
    --restart=always \
    swarm join \
    --addr=<ip address>:2376 \
    token://<token>
- ip address is the IP address of THIS node.
- token is the token that was generated from the previous step on the swarm master.
Do the same for every host.
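To avoid typing the join command on every host, you can generate it per node and run each line on the matching host (e.g. over ssh). The sketch below only prints the commands; the node IPs and token are placeholders:

```shell
# Print the join command for each node; run each line on the matching host
# (for example via ssh) to actually join it. IPs and TOKEN are placeholders.
TOKEN="<token>"
NODES="10.0.0.2 10.0.0.3"
CMDS=$(for ip in $NODES; do
  echo "docker run -d --restart=always swarm join --addr=${ip}:2376 token://${TOKEN}"
done)
echo "$CMDS"
```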
To manage the swarm from the Swarm master:
$ docker run -d swarm manage token://<token>
To see if the nodes are connected in the swarm:
$ docker run --rm swarm list token://<token>
If using the token method is not desired, you can run your own discovery backend. Consul can be used as a key/value store that acts as our discovery backend service. Ideally this runs on an independent node, so that downtime in your cluster does not also bring down the discovery service.
Set up Consul:
$ docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
- This will start a Consul container that exposes port 8500 for the service.
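You can verify that Consul is up by hitting its HTTP API; a sketch, substituting your Consul node's IP:

```shell
# The catalog endpoint returns JSON listing the known Consul nodes.
$ curl http://<consul ip>:8500/v1/catalog/nodes
```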
Now we can use Consul to keep track of all the nodes in the swarm instead of the token.
Now we need to add all nodes into the swarm. On each node/host, run:
$ docker run -d \
    --restart=always \
    --name swarm-agent \
    swarm:1.0.0 join \
    --advertise <node ip>:2376 \
    consul://<consul ip>:8500
- node ip is the IP address of the host/node.
- consul ip is the IP address of the node that is running Consul.
Once all nodes are added to the swarm, we need a way to control and manage the nodes. Swarm master does exactly this. Run swarm master on an existing node or an independent node (recommended).
To configure the Swarm master:
$ docker run -d \
    --restart=always \
    --name swarm-agent-master \
    --tlsverify \
    --tlscacert=/etc/docker/ca.pem \
    --tlscert=/etc/docker/server.pem \
    --tlskey=/etc/docker/server-key.pem \
    --strategy spread \
    -p 3376:3376 \
    -v /etc/docker:/etc/docker \
    swarm:1.0.0 manage \
    -H tcp://0.0.0.0:3376 \
    consul://<consul ip>:8500
- the Swarm master is exposed on port 3376
- --tlsverify enables TLS so that communication is encrypted
- --tlscacert, --tlscert, --tlskey: ensure that these TLS certificates exist in /etc/docker on the node
- -H tcp://0.0.0.0:3376 binds the Swarm master to all interfaces, so it is reachable both remotely and from the node it runs on
- <consul ip> is the IP address of the node that is running Consul.
To attach to the Swarm master:
$ docker -H tcp://<ip address>:3376 info
You can set the DOCKER_HOST environment variable if you don’t want to type -H tcp://<ip address>:<port> every time.
$ export DOCKER_HOST=tcp://<ip address>:3376
This section outlines the creation of a swarm using Docker Machine. You should have a discovery backend configured before starting this section. See Hosted Discovery Backend on how to configure a Consul discovery backend.
To configure the Swarm Master:
$ docker-machine create \
    --driver generic \
    --generic-ip-address=<ip address> \
    --generic-ssh-key ~/.ssh/id_rsa \
    --generic-ssh-user dockeruser \
    --generic-ssh-port 22 \
    --swarm --swarm-master \
    --swarm-discovery=<consul> \
    --engine-opt="cluster-store=<consul>" \
    --engine-opt="cluster-advertise=<ip address>:2376" \
    <machine name>
- <consul> can be "consul://$(docker-machine ip <consul name>):<consul port>"
- <consul name> is the name that you have given your Consul docker machine
- <consul port> is the Consul service port: 8500
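Putting those pieces together, the discovery URL can be captured in a variable before the create call. A sketch, assuming your Consul machine is named dockerconsul as in the docker-machine ls output later in this post:

```shell
# Build the discovery URL from the Consul machine's IP; "dockerconsul" is
# the machine name used elsewhere in this post.
$ CONSUL="consul://$(docker-machine ip dockerconsul):8500"
$ echo $CONSUL
```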
To configure a Swarm agent, simply remove the --swarm-master flag:
$ docker-machine create \
    --driver generic \
    --generic-ip-address=<ip address> \
    --generic-ssh-key ~/.ssh/id_rsa \
    --generic-ssh-user dockeruser \
    --generic-ssh-port 22 \
    --swarm \
    --swarm-discovery=<consul> \
    --engine-opt="cluster-store=<consul>" \
    --engine-opt="cluster-advertise=<ip address>:2376" \
    <machine name>
When you run docker-machine ls, you should now see values in the SWARM column:

NAME          ACTIVE   DRIVER    STATE     URL                         SWARM              DOCKER    ERRORS
dockera       -        generic   Running   tcp://220.127.116.11:2376   dockera (master)   v1.11.1
dockerb       *        generic   Running   tcp://18.104.22.168:2376    dockera            v1.11.1
dockerconsul  -        generic   Running   tcp://22.214.171.124:2376                      v1.11.1
Since we have the nodes set up in a swarm, we need to configure a Docker multi-host network. This is relatively easy if you set the swarm up using Docker Machine.
Set your docker machine to point to swarm master:
$ eval $(docker-machine env --swarm dockera)
Check that you are in the swarm environment by executing docker info; you should see all the connected nodes.
$ docker network create \
    --driver overlay \
    --subnet 10.0.9.0/24 \
    <network name>
- always define your subnet; you do not want any conflicts with the existing networks that the nodes are connected to.
Check the network with docker network ls; your newly created network should be listed.
Switch to each swarm agent and run the same command to ensure that all agents are connected to the network.
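To confirm that containers on different nodes can actually reach each other over the overlay, a common smoke test (a sketch; the images and container names are arbitrary choices) is:

```shell
# On one node: start a container attached to the overlay network.
$ docker run -d --name web --net <network name> nginx

# On another node: ping it by container name over the overlay.
$ docker run --rm --net <network name> alpine ping -c 3 web
```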
Following the above steps yields a simple Docker Swarm cluster. There are easier ways to set up clusters, such as using Docker Cloud or Amazon EC2 Container Service, but that is beside the point of this post. Setting up a Docker cluster this way allows us to learn by delving into the specifics and getting our hands dirty with the internal workings of Docker!