Docker: Docker Swarm with Docker Machine Quick Setup Guide

Harry Lee

August 23, 2016

Docker already has official documentation with tutorials and examples on setting up your own Docker Swarm nodes and clusters. This post is intended for those who have a fundamental grasp of Docker and would rather not read the full documents. It includes minimal explanation and focuses primarily on configuration; it is written as a series of instructions rather than a full-fledged post.

Introduction

This post outlines setting up docker-machine to provision remote hosts (generic and Microsoft Azure). Docker Swarm setup with and without docker-machine is also discussed.

Docker Machine (Azure)

Docker Machine (Generic)

Docker Swarm (Generic)

Docker Machine Swarm (Generic)

Docker Machine (Azure)

To use Docker Machine to provision hosts on Microsoft Azure, first make sure that you have the correct subscription ID for your Azure account.

Execute:

$ docker-machine create --driver azure \
  --azure-subscription-id xxxx-xxxx-xxxx-xxxx \
  --azure-image canonical:UbuntuServer:16.04.0-LTS:latest \
  --azure-location eastus \
  --azure-resource-group DockerSwarm \
  --azure-size Basic_A0 \
  --azure-open-port 80 \
  dockertemp1

To get the latest OS image SKU for the VM, use the Azure CLI and execute:

$ azure vm image list-skus

This will return:

info:    Executing command vm image list-skus
Location: eastus
Publisher: canonical
Offer: ubuntuserver
+ Getting virtual machine image skus (Publisher:"canonical" Offer:"ubuntuserver" Location:"eastus")
data:    Publisher  Offer         sku                Location
data:    ---------  ------------  -----------------  --------
data:    canonical  ubuntuserver  12.04.2-LTS        eastus
data:    canonical  ubuntuserver  12.04.3-LTS        eastus
data:    canonical  ubuntuserver  12.04.4-LTS        eastus
data:    canonical  ubuntuserver  12.04.5-DAILY-LTS  eastus
data:    canonical  ubuntuserver  12.04.5-LTS        eastus
data:    canonical  ubuntuserver  12.10              eastus
data:    canonical  ubuntuserver  14.04-beta         eastus

With Azure, the network security group needs to be configured to allow inbound traffic on port 2376, the Docker daemon's TLS port. The Azure driver should handle this for you.
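Once the create command finishes, a quick way to confirm the machine is usable (dockertemp1 is the name used in the example above):

```shell
# List machines provisioned by docker-machine; STATE should be "Running".
docker-machine ls

# Point the local docker client at the new Azure host...
eval $(docker-machine env dockertemp1)

# ...and confirm the remote daemon answers over TLS.
docker info
```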

Docker Machine (Generic)

If you already have a host that is running Docker and you want to incorporate it into your docker-machine routine, this section is for you.

We first need to establish a communication channel. This part is a bit tricky since docker-machine uses SSH as the protocol for communication; you will need to make sure that the machines can talk to each other over SSH.

In some cases, you may have to disable password-based SSH login on the host machines. Try this when you get i/o timeout or similar errors regarding communication issues.
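As a sketch, disabling password logins on an Ubuntu host with a stock OpenSSH install might look like this (the config path and service name are assumptions for your distribution; verify that key-based login works first, or you can lock yourself out):

```shell
# Back up the SSH daemon config before editing it.
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak

# Turn off password authentication (also uncomments the directive if needed).
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config

# Reload the SSH daemon; the service may be named "sshd" on some distributions.
sudo systemctl restart ssh
```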

The machine that is doing the deploying will be referred to as the deployment machine and the machine that is going to be deployed is the host.

$ docker-machine create \
  --driver generic \
  --generic-ip-address=<ip address> \
  --generic-ssh-key ~/.ssh/id_rsa \
  --generic-ssh-user dockeruser \
  --generic-ssh-port 22 \
  <machine name>

The docker-machine communicates with the host via port 2376 (default). Ensure that this port is open and not blocked by your firewall.
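If provisioning fails, a few checks can narrow down whether SSH or the daemon port is the problem (nc is assumed to be installed on the deployment machine):

```shell
# The SSH channel works if this exits silently.
docker-machine ssh <machine name> true

# The Docker daemon port is reachable if nc reports the connection succeeded.
nc -zv <ip address> 2376

# The machine shows up with STATE "Running" once everything is wired up.
docker-machine ls
```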

Docker Swarm (Generic)

Below are the two methods to create a swarm (generic driver) without using Docker Machine:

Token Method

Hosted Discovery Backend Method (Consul)

Token

To install Swarm on the swarm master:

$ docker run --rm swarm create

Take note of the last line, which is a token (cluster ID). Copy it to a secure location. It will look something like 81a20406f6258ab0cad7ceb5768daeac.

Then install Swarm on the other nodes (hosts):

$ docker run -d \
  --restart=always swarm join \
  --addr=<ip address>:2376 \
  token://<token>

Do the same for every host.

To manage the swarm from the Swarm master:

$ docker run -d swarm manage token://<token>

To see if the nodes are connected in the swarm:

$ docker run --rm swarm list token://<token>

Hosted Discovery Backend

Install Consul

If using the token method is not desired, you can run your own discovery key store. Consul can serve as the key/value store that acts as our discovery backend. Normally this should run on an independent node, so that downtime in your cluster does not bring down the discovery service too.

Set up Consul:

$ docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap

Now we can use Consul to keep track of all the nodes in the swarm instead of the token.
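A quick sanity check that Consul is up and accepting requests (the KV prefix shown is the default that standalone Swarm uses; it may differ in your setup):

```shell
# Returns the address of the current Consul leader if the server is healthy.
curl http://<consul ip>:8500/v1/status/leader

# After nodes join, their registrations appear in the KV store.
curl "http://<consul ip>:8500/v1/kv/docker/swarm/nodes?recurse"
```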

Join Swarm

Now we need to add all nodes into the swarm. On each node/host, run:

$ docker run -d \
  --restart=always \
  --name swarm-agent swarm:1.0.0 join \
  --advertise <node ip>:2376 \
  consul://<consul ip>:8500
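On each node you can verify that the agent came up and stays up:

```shell
# The swarm-agent container should be listed and not restarting.
docker ps --filter name=swarm-agent

# The agent's logs should show it registering the node with Consul.
docker logs swarm-agent
```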

Swarm Master

Once all nodes are added to the swarm, we need a way to control and manage them. The Swarm master does exactly this. Run the Swarm master on an existing node or an independent node (recommended).

To configure the Swarm master:

$ docker run -d \
  --restart=always \
  --name swarm-agent-master \
  --tlsverify \
  --tlscacert=/etc/docker/ca.pem \
  --tlscert=/etc/docker/server.pem \
  --tlskey=/etc/docker/server-key.pem \
  --strategy spread \
  -p 3376:3376 \
  -v /etc/docker:/etc/docker swarm:1.0.0 manage \
  -H tcp://0.0.0.0:3376 \
  consul://<consul ip>:8500

To attach to the Swarm master:

$ docker -H tcp://<ip address>:3376 info

You can set the DOCKER_HOST environment variable if you don’t want to type -H tcp://<ip address>:<port> every time.

$ export DOCKER_HOST=tcp://<ip address>:3376
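With DOCKER_HOST exported, ordinary docker commands go through the Swarm master and are scheduled across the cluster (nginx here is just an example image):

```shell
export DOCKER_HOST=tcp://<ip address>:3376

# Lists every node in the swarm along with its resources.
docker info

# The spread strategy places the container on the least-loaded node.
docker run -d --name web nginx

# In the output, container names are prefixed with the node they run on.
docker ps
```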

Docker Machine Swarm (Generic)

This section outlines the creation of a swarm using Docker Machine. You should have a discovery backend configured before starting this section. See Hosted Discovery Backend on how to configure a Consul discovery backend.

To configure the Swarm Master:

$ docker-machine create \
  --driver generic \
  --generic-ip-address=<ip address> \
  --generic-ssh-key ~/.ssh/id_rsa \
  --generic-ssh-user dockeruser \
  --generic-ssh-port 22 \
  --swarm --swarm-master \
  --swarm-discovery=consul://<consul ip>:8500 \
  --engine-opt="cluster-store=consul://<consul ip>:8500" \
  --engine-opt="cluster-advertise=<ip address>:2376" \
  <machine name>

To configure the Swarm Agent (simply remove --swarm-master):

$ docker-machine create \
  --driver generic \
  --generic-ip-address=<ip address> \
  --generic-ssh-key ~/.ssh/id_rsa \
  --generic-ssh-user dockeruser \
  --generic-ssh-port 22 \
  --swarm \
  --swarm-discovery=consul://<consul ip>:8500 \
  --engine-opt="cluster-store=consul://<consul ip>:8500" \
  --engine-opt="cluster-advertise=<ip address>:2376" \
  <machine name>

When you run docker-machine ls, you should now see some values in the swarm column:

NAME           ACTIVE   DRIVER       STATE     URL                         SWARM              DOCKER    ERRORS
dockera        -        generic      Running   tcp://40.76.49.229:2376     dockera (master)   v1.11.1
dockerb        *        generic      Running   tcp://40.76.35.249:2376     dockera            v1.11.1
dockerconsul   -        generic      Running   tcp://104.41.139.22:2376                       v1.11.1

Network

Since we have the nodes set up in a swarm, we need to configure a Docker multi-host network. This is relatively easy if the swarm was set up using Docker Machine.

  1. Point your Docker client at the swarm master:

    $ eval $(docker-machine env --swarm dockera)
    
  2. Check that you are in the swarm environment by executing docker info; you should see all the connected nodes.

  3. Create your overlay network:

    $ docker network create \
    --driver overlay \
    --subnet 10.0.9.0/24 <network name>
    
    • Always define your subnet; you do not want any conflicts with the existing networks that the node is connected to.
  4. Check the network: docker network ls. Your newly created network should be listed.

  5. Switch to each swarm agent and run the same command to ensure that all agents are connected to the network.
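As a rough end-to-end test of the overlay (assuming the network was named mynet; busybox is just a convenient test image), start a container on two nodes and ping one from the other by name:

```shell
# Two long-running containers attached to the overlay network.
docker run -d --name pinger-a --net mynet busybox sleep 3600
docker run -d --name pinger-b --net mynet busybox sleep 3600

# With the spread strategy these likely land on different nodes;
# they should still resolve and reach each other by container name.
docker exec pinger-a ping -c 3 pinger-b
```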

Conclusion

Following the steps above yields a simple Docker Swarm cluster. There are easier ways to set up clusters, such as using Docker Cloud or Amazon EC2 Container Service, but that is beside the point of this post. Setting up a Docker cluster this way allows us to learn by delving into the specifics and getting our hands dirty with the internal workings of Docker!