Basic Docker SWARM Cluster with Consul, Vagrant, Docker Toolbox and VirtualBox

The steps below show how to run a Docker Swarm cluster locally using Vagrant.

This will create and set up four Vagrant machines on a private network (10.0.7.0/24). Each machine runs a Consul server alongside its Docker daemon:

Swarm Manager: 10.0.7.11
Swarm node 1: 10.0.7.12
Swarm node 2: 10.0.7.13
Swarm node 3: 10.0.7.14

The steps were tested using the following versions:
docker toolbox 1.11.1
docker 1.11.1
vagrant 1.7.2
docker-machine 0.7.0
docker-compose 1.7.0
Kitematic 0.10.2
Boot2Docker ISO 1.11.1
VirtualBox 4.3.26


Batch File to Bootstrap

mkdir c:\sd
cd c:\sd

git clone https://github.com/deviantony/vagrant-swarm-cluster.git

cd vagrant-swarm-cluster

Batch file .\startup-swarm.bat:

vagrant up --provider virtualbox
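Right after `vagrant up` returns, the remote Docker daemons are usually reachable, but a short retry loop is more robust than firing commands immediately. Here is an illustrative bash sketch (the batch file itself is Windows, so this is a side note, not part of the script); `wait_for_daemon` is a hypothetical helper, and the probe command is passed in as a string so the demo at the bottom runs without a live cluster:

```shell
#!/usr/bin/env bash
# Retry a probe command until it succeeds or we run out of attempts.
# Against a real cluster you would call, e.g.:
#   wait_for_daemon "docker -H 10.0.7.11:2375 version" 30
wait_for_daemon() {
  local probe="$1" tries="${2:-10}"
  local i
  for ((i = 1; i <= tries; i++)); do
    if eval "$probe" >/dev/null 2>&1; then
      echo "daemon is up (attempt $i)"
      return 0
    fi
    sleep 1
  done
  echo "daemon did not respond after $tries attempts" >&2
  return 1
}

# Demo with a stub probe that always succeeds:
wait_for_daemon "true" 3
```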

docker -H 10.0.7.11:2375 run -d --restart always --name consul1 --net host consul agent -server -bind 10.0.7.11 -client 10.0.7.11 -retry-join 10.0.7.11 -retry-join 10.0.7.12 -retry-join 10.0.7.13 -retry-join 10.0.7.14 -bootstrap-expect 3

docker -H 10.0.7.12:2375 run -d --restart always --name consul2 --net host consul agent -server -bind 10.0.7.12 -client 10.0.7.12 -retry-join 10.0.7.11 -retry-join 10.0.7.12 -retry-join 10.0.7.13 -retry-join 10.0.7.14 -bootstrap-expect 3

docker -H 10.0.7.13:2375 run -d --restart always --name consul3 --net host consul agent -server -bind 10.0.7.13 -client 10.0.7.13 -retry-join 10.0.7.11 -retry-join 10.0.7.12 -retry-join 10.0.7.13 -retry-join 10.0.7.14 -bootstrap-expect 3

docker -H 10.0.7.14:2375 run -d --restart always --name consul4 --net host consul agent -server -bind 10.0.7.14 -client 10.0.7.14 -retry-join 10.0.7.11 -retry-join 10.0.7.12 -retry-join 10.0.7.13 -retry-join 10.0.7.14 -bootstrap-expect 3
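The four Consul server invocations differ only in the target host IP and the container name, so they can be generated in a loop. A minimal bash sketch (dry run: the commands are collected and printed rather than executed, using the same IPs and flags as above):

```shell
#!/usr/bin/env bash
# Generate the four "consul agent -server" docker commands: each host
# binds to its own IP and retry-joins all four peers.
ips=(10.0.7.11 10.0.7.12 10.0.7.13 10.0.7.14)

# Build the shared -retry-join flags once.
joins=""
for ip in "${ips[@]}"; do
  joins+=" -retry-join $ip"
done

cmds=()
n=1
for ip in "${ips[@]}"; do
  cmds+=("docker -H $ip:2375 run -d --restart always --name consul$n --net host consul agent -server -bind $ip -client $ip$joins -bootstrap-expect 3")
  n=$((n + 1))
done

# Dry run: print instead of executing.
printf '%s\n' "${cmds[@]}"
```

Swapping the final `printf` for `eval` (or removing the collection step) would execute the commands against a live cluster.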

echo "Starting Swarm manager..."
docker -H 10.0.7.11:2375 run -d --restart always -p 4000:4000 --name swarm_manager swarm:1.2.0 manage -H :4000 --replication --advertise 10.0.7.11:2375 consul://10.0.7.11:8500

echo "Starting Node 1 on SWARM Cluster..."
docker -H 10.0.7.12:2375 run -d --restart always --name swarm_node1 swarm:1.2.0 join --heartbeat 20s --ttl 30s --advertise 10.0.7.12:2375 consul://10.0.7.12:8500

echo "Starting Node 2 on SWARM Cluster..."
docker -H 10.0.7.13:2375 run -d --restart always --name swarm_node2 swarm:1.2.0 join --heartbeat 20s --ttl 30s --advertise 10.0.7.13:2375 consul://10.0.7.13:8500

echo "Starting Node 3 on SWARM Cluster..."
docker -H 10.0.7.14:2375 run -d --restart always --name swarm_node3 swarm:1.2.0 join --heartbeat 20s --ttl 30s --advertise 10.0.7.14:2375 consul://10.0.7.14:8500

echo Waiting for the nodes to join the cluster...
REM The ping below is a crude ~30-second sleep (timeout /t 30 would also work)
PING 1.1.1.1 -n 1 -w 30000 >NUL

echo Checking SWARM Cluster Status...
docker -H 10.0.7.11:4000 info
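The `docker info` output from the manager can also be checked programmatically: the `Nodes:` line gives the registered count and each node reports a `Status:` line. A hedged bash sketch, run here against a canned sample (the exact status values are illustrative); on a live cluster you would pipe `docker -H 10.0.7.11:4000 info` into the same `sed`/`grep`:

```shell
#!/usr/bin/env bash
# Count registered vs healthy nodes from swarm "docker info" output.
# The sample below is a trimmed, hypothetical excerpt of that output.
sample='Nodes: 3
 swarm-node1: 10.0.7.12:2375
  └ Status: Healthy
 swarm-node2: 10.0.7.13:2375
  └ Status: Healthy
 swarm-node3: 10.0.7.14:2375
  └ Status: Pending'

# "Nodes: N" gives the registered count; count Healthy statuses separately.
registered=$(printf '%s\n' "$sample" | sed -n 's/^Nodes: //p')
healthy=$(printf '%s\n' "$sample" | grep -c 'Status: Healthy')
echo "registered=$registered healthy=$healthy"
```

Comparing `healthy` against the expected node count makes a better wait condition than a fixed 30-second sleep.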

Let's test it again, from scratch:

vagrant destroy --force

startup-swarm.bat > .\output.log

type .\output.log
- 2 minutes 9 seconds to build

C:\SD\vagrant-swarm-cluster>vagrant up --provider virtualbox
Bringing machine 'swarm_manager' up with 'virtualbox' provider...
Bringing machine 'swarm_node1' up with 'virtualbox' provider...
Bringing machine 'swarm_node2' up with 'virtualbox' provider...
Bringing machine 'swarm_node3' up with 'virtualbox' provider...
==> swarm_manager: Importing base box 'deviantony/ubuntu-14.04-docker'...

Progress: 10%
Progress: 20%
Progress: 40%
Progress: 60%
Progress: 70%
Progress: 80%
Progress: 90%
==> swarm_manager: Matching MAC address for NAT networking...
==> swarm_manager: Checking if box 'deviantony/ubuntu-14.04-docker' is up to date...
==> swarm_manager: Setting the name of the VM: vagrant-swarm-cluster_swarm_manager_1478554253787_94795
==> swarm_manager: Clearing any previously set network interfaces...
==> swarm_manager: Preparing network interfaces based on configuration...
    swarm_manager: Adapter 1: nat
    swarm_manager: Adapter 2: hostonly
==> swarm_manager: Forwarding ports...
    swarm_manager: 22 => 2222 (adapter 1)
==> swarm_manager: Booting VM...
==> swarm_manager: Waiting for machine to boot. This may take a few minutes...
    swarm_manager: SSH address: 127.0.0.1:2222
    swarm_manager: SSH username: vagrant
    swarm_manager: SSH auth method: private key
    swarm_manager: Warning: Connection timeout. Retrying...
    swarm_manager:
    swarm_manager: Vagrant insecure key detected. Vagrant will automatically replace
    swarm_manager: this with a newly generated keypair for better security.
    swarm_manager:
    swarm_manager: Inserting generated public key within guest...
    swarm_manager: Removing insecure key from the guest if its present...
    swarm_manager: Key inserted! Disconnecting and reconnecting using new SSH key...
==> swarm_manager: Machine booted and ready!
==> swarm_manager: Checking for guest additions in VM...
    swarm_manager: The guest additions on this VM do not match the installed version of
    swarm_manager: VirtualBox! In most cases this is fine, but in rare cases it can
    swarm_manager: prevent things such as shared folders from working properly. If you see
    swarm_manager: shared folder errors, please make sure the guest additions within the
    swarm_manager: virtual machine match the version of VirtualBox you have installed on
    swarm_manager: your host and reload your VM.
    swarm_manager:
    swarm_manager: Guest Additions Version: 5.0.20 r106931
    swarm_manager: VirtualBox Version: 4.3
==> swarm_manager: Setting hostname...
==> swarm_manager: Configuring and enabling network interfaces...
==> swarm_manager: Running provisioner: shell...
    swarm_manager: Running: inline script
==> swarm_manager: stdin: is not a tty
==> swarm_manager: docker stop/waiting
==> swarm_manager: DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=consul://10.0.7.11:8500 --cluster-advertise=eth1:2375"
==> swarm_manager: docker start/running, process 1695
==> swarm_node1: Importing base box 'deviantony/ubuntu-14.04-docker'...

Progress: 10%
Progress: 20%
Progress: 40%
Progress: 60%
Progress: 70%
Progress: 80%
Progress: 90%
==> swarm_node1: Matching MAC address for NAT networking...
==> swarm_node1: Checking if box 'deviantony/ubuntu-14.04-docker' is up to date...
==> swarm_node1: Setting the name of the VM: vagrant-swarm-cluster_swarm_node1_1478554309235_88618
==> swarm_node1: Fixed port collision for 22 => 2222. Now on port 2200.
==> swarm_node1: Clearing any previously set network interfaces...
==> swarm_node1: Preparing network interfaces based on configuration...
    swarm_node1: Adapter 1: nat
    swarm_node1: Adapter 2: hostonly
==> swarm_node1: Forwarding ports...
    swarm_node1: 22 => 2200 (adapter 1)
==> swarm_node1: Booting VM...
==> swarm_node1: Waiting for machine to boot. This may take a few minutes...
    swarm_node1: SSH address: 127.0.0.1:2200
    swarm_node1: SSH username: vagrant
    swarm_node1: SSH auth method: private key
    swarm_node1: Warning: Connection timeout. Retrying...
    swarm_node1:
    swarm_node1: Vagrant insecure key detected. Vagrant will automatically replace
    swarm_node1: this with a newly generated keypair for better security.
    swarm_node1:
    swarm_node1: Inserting generated public key within guest...
    swarm_node1: Removing insecure key from the guest if its present...
    swarm_node1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> swarm_node1: Machine booted and ready!
==> swarm_node1: Checking for guest additions in VM...
    swarm_node1: The guest additions on this VM do not match the installed version of
    swarm_node1: VirtualBox! In most cases this is fine, but in rare cases it can
    swarm_node1: prevent things such as shared folders from working properly. If you see
    swarm_node1: shared folder errors, please make sure the guest additions within the
    swarm_node1: virtual machine match the version of VirtualBox you have installed on
    swarm_node1: your host and reload your VM.
    swarm_node1:
    swarm_node1: Guest Additions Version: 5.0.20 r106931
    swarm_node1: VirtualBox Version: 4.3
==> swarm_node1: Setting hostname...
==> swarm_node1: Configuring and enabling network interfaces...
==> swarm_node1: Running provisioner: shell...
    swarm_node1: Running: inline script
==> swarm_node1: stdin: is not a tty
==> swarm_node1: docker stop/waiting
==> swarm_node1: DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=consul://10.0.7.12:8500 --cluster-advertise=eth1:2375"
==> swarm_node1: docker start/running, process 1630
==> swarm_node2: Importing base box 'deviantony/ubuntu-14.04-docker'...

Progress: 10%
Progress: 20%
Progress: 40%
Progress: 50%
Progress: 60%
Progress: 70%
Progress: 80%
Progress: 90%
==> swarm_node2: Matching MAC address for NAT networking...
==> swarm_node2: Checking if box 'deviantony/ubuntu-14.04-docker' is up to date...
==> swarm_node2: Setting the name of the VM: vagrant-swarm-cluster_swarm_node2_1478554363466_83768
==> swarm_node2: Fixed port collision for 22 => 2222. Now on port 2201.
==> swarm_node2: Clearing any previously set network interfaces...
==> swarm_node2: Preparing network interfaces based on configuration...
    swarm_node2: Adapter 1: nat
    swarm_node2: Adapter 2: hostonly
==> swarm_node2: Forwarding ports...
    swarm_node2: 22 => 2201 (adapter 1)
==> swarm_node2: Booting VM...
==> swarm_node2: Waiting for machine to boot. This may take a few minutes...
    swarm_node2: SSH address: 127.0.0.1:2201
    swarm_node2: SSH username: vagrant
    swarm_node2: SSH auth method: private key
    swarm_node2: Warning: Connection timeout. Retrying...
    swarm_node2:
    swarm_node2: Vagrant insecure key detected. Vagrant will automatically replace
    swarm_node2: this with a newly generated keypair for better security.
    swarm_node2:
    swarm_node2: Inserting generated public key within guest...
    swarm_node2: Removing insecure key from the guest if its present...
    swarm_node2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> swarm_node2: Machine booted and ready!
==> swarm_node2: Checking for guest additions in VM...
    swarm_node2: The guest additions on this VM do not match the installed version of
    swarm_node2: VirtualBox! In most cases this is fine, but in rare cases it can
    swarm_node2: prevent things such as shared folders from working properly. If you see
    swarm_node2: shared folder errors, please make sure the guest additions within the
    swarm_node2: virtual machine match the version of VirtualBox you have installed on
    swarm_node2: your host and reload your VM.
    swarm_node2:
    swarm_node2: Guest Additions Version: 5.0.20 r106931
    swarm_node2: VirtualBox Version: 4.3
==> swarm_node2: Setting hostname...
==> swarm_node2: Configuring and enabling network interfaces...
==> swarm_node2: Running provisioner: shell...
    swarm_node2: Running: inline script
==> swarm_node2: stdin: is not a tty
==> swarm_node2: docker stop/waiting
==> swarm_node2: DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=consul://10.0.7.13:8500 --cluster-advertise=eth1:2375"
==> swarm_node2: docker start/running, process 1630
==> swarm_node3: Importing base box 'deviantony/ubuntu-14.04-docker'...

Progress: 10%
Progress: 20%
Progress: 30%
Progress: 40%
Progress: 60%
Progress: 70%
Progress: 80%
Progress: 90%
==> swarm_node3: Matching MAC address for NAT networking...
==> swarm_node3: Checking if box 'deviantony/ubuntu-14.04-docker' is up to date...
==> swarm_node3: Setting the name of the VM: vagrant-swarm-cluster_swarm_node3_1478554417898_50470
==> swarm_node3: Fixed port collision for 22 => 2222. Now on port 2202.
==> swarm_node3: Clearing any previously set network interfaces...
==> swarm_node3: Preparing network interfaces based on configuration...
    swarm_node3: Adapter 1: nat
    swarm_node3: Adapter 2: hostonly
==> swarm_node3: Forwarding ports...
    swarm_node3: 22 => 2202 (adapter 1)
==> swarm_node3: Booting VM...
==> swarm_node3: Waiting for machine to boot. This may take a few minutes...
    swarm_node3: SSH address: 127.0.0.1:2202
    swarm_node3: SSH username: vagrant
    swarm_node3: SSH auth method: private key
    swarm_node3: Warning: Connection timeout. Retrying...
    swarm_node3:
    swarm_node3: Vagrant insecure key detected. Vagrant will automatically replace
    swarm_node3: this with a newly generated keypair for better security.
    swarm_node3:
    swarm_node3: Inserting generated public key within guest...
    swarm_node3: Removing insecure key from the guest if its present...
    swarm_node3: Key inserted! Disconnecting and reconnecting using new SSH key...
==> swarm_node3: Machine booted and ready!
==> swarm_node3: Checking for guest additions in VM...
    swarm_node3: The guest additions on this VM do not match the installed version of
    swarm_node3: VirtualBox! In most cases this is fine, but in rare cases it can
    swarm_node3: prevent things such as shared folders from working properly. If you see
    swarm_node3: shared folder errors, please make sure the guest additions within the
    swarm_node3: virtual machine match the version of VirtualBox you have installed on
    swarm_node3: your host and reload your VM.
    swarm_node3:
    swarm_node3: Guest Additions Version: 5.0.20 r106931
    swarm_node3: VirtualBox Version: 4.3
==> swarm_node3: Setting hostname...
==> swarm_node3: Configuring and enabling network interfaces...
==> swarm_node3: Running provisioner: shell...
    swarm_node3: Running: inline script
==> swarm_node3: stdin: is not a tty
==> swarm_node3: docker stop/waiting
==> swarm_node3: DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=consul://10.0.7.14:8500 --cluster-advertise=eth1:2375"
==> swarm_node3: docker start/running, process 1631

C:\SD\vagrant-swarm-cluster>set starttime=21:34:15.58

C:\SD\vagrant-swarm-cluster>docker -H 10.0.7.11:2375 run -d --restart always --name consul1 --net host consul agent -server -bind 10.0.7.11 -client 10.0.7.11 -retry-join 10.0.7.11 -retry-join 10.0.7.12 -retry-join 10.0.7.13 -retry-join 10.0.7.14 -bootstrap-expect 3
e364331ae768963843dcd2c9621f49664372e2cb9abe583653ad4eec4b8df205

C:\SD\vagrant-swarm-cluster>docker -H 10.0.7.12:2375 run -d --restart always --name consul2 --net host consul agent -server -bind 10.0.7.12 -client 10.0.7.12 -retry-join 10.0.7.11 -retry-join 10.0.7.12 -retry-join 10.0.7.13 -retry-join 10.0.7.14  -bootstrap-expect 3
5e1b696229d2c94d307177ba9ba75d1ce347c9c4f5975063331dabe45ecaab41

C:\SD\vagrant-swarm-cluster>docker -H 10.0.7.13:2375 run -d --restart always --name consul3 --net host consul agent -server -bind 10.0.7.13 -client 10.0.7.13 -retry-join 10.0.7.11 -retry-join 10.0.7.12 -retry-join 10.0.7.13 -retry-join 10.0.7.14  -bootstrap-expect 3
11bf0879baa6fb545c32345af8ac80272144f29e01859221200a44c12fe0442c

C:\SD\vagrant-swarm-cluster>docker -H 10.0.7.14:2375 run -d --restart always --name consul3 --net host consul agent -server -bind 10.0.7.14 -client 10.0.7.14 -retry-join 10.0.7.11 -retry-join 10.0.7.12 -retry-join 10.0.7.13 -retry-join 10.0.7.14  -bootstrap-expect 3
642a3414b4056e17f3ce322f781ab904fb415dc25a82462920b3afde0b68ac1c

C:\SD\vagrant-swarm-cluster>echo "Starting Swarm manager..."
"Starting Swarm manager..."

C:\SD\vagrant-swarm-cluster>docker -H 10.0.7.11:2375 run -d --restart always -p 4000:4000 --name swarm_manager swarm:1.2.0 manage -H :4000 --replication --advertise 10.0.7.11:2375 consul://10.0.7.11:8500
498118b42f42be74f9d0d69f68850f1a14ecaf2718b8763fdc113ac7bcbec643

C:\SD\vagrant-swarm-cluster>echo "Starting Node 1 on SWARM Cluster...
"Starting Node 1 on SWARM Cluster...

C:\SD\vagrant-swarm-cluster>docker -H 10.0.7.12:2375 run -d --restart always --name swarm_node1 swarm:1.2.0 join --heartbeat 20s --ttl 30s --advertise 10.0.7.12:2375 consul://10.0.7.12:8500
3e61b74128e1d37385468507a70f0b29225e9bedd49c29605d315de4652d5622

C:\SD\vagrant-swarm-cluster>echo "Starting Node 2 on SWARM Cluster...
"Starting Node 2 on SWARM Cluster...

C:\SD\vagrant-swarm-cluster>docker -H 10.0.7.13:2375 run -d --restart always --name swarm_node2 swarm:1.2.0 join --heartbeat 20s --ttl 30s --advertise 10.0.7.13:2375 consul://10.0.7.13:8500
054f37aaec62ba70375866da1fea2a6e6df9a5d6c4cb8a756b601acddbff1853

C:\SD\vagrant-swarm-cluster>echo "Starting Node 3 on SWARM Cluster...
"Starting Node 3 on SWARM Cluster...

C:\SD\vagrant-swarm-cluster>docker -H 10.0.7.14:2375 run -d --restart always --name swarm_node2 swarm:1.2.0 join --heartbeat 20s --ttl 30s --advertise 10.0.7.14:2375 consul://10.0.7.14:8500

C:\SD\vagrant-swarm-cluster>echo Waiting for the nodes to join the cluster...
Waiting for the nodes to join the cluster...

C:\SD\vagrant-swarm-cluster>PING 1.1.1.1 -n 1 -w 30000  1>NUL

C:\SD\vagrant-swarm-cluster>set endtime=21:36:24.36

C:\SD\vagrant-swarm-cluster>echo Checking SWARM Cluster Status...
Checking SWARM Cluster Status...

C:\SD\vagrant-swarm-cluster>docker -H 10.0.7.11:4000 info
Containers: 4
 Running: 4
 Paused: 0
 Stopped: 0
Images: 4
Server Version: swarm/1.2.0
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 2
 swarm-node1: 10.0.7.12:2375
  └ Status: Healthy
  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 513.5 MiB
  └ Labels: executiondriver=, kernelversion=4.2.0-27-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-11-07T21:36:19Z
  └ ServerVersion: 1.11.1
 swarm-node2: 10.0.7.13:2375
  └ Status: Healthy
  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 513.5 MiB
  └ Labels: executiondriver=, kernelversion=4.2.0-27-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-11-07T21:35:42Z
  └ ServerVersion: 1.11.1
Plugins:
 Volume:
 Network:
Kernel Version: 4.2.0-27-generic
Operating System: linux
Architecture: amd64
CPUs: 2
Total Memory: 1.003 GiB
Name: 498118b42f42
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false

C:\SD\vagrant-swarm-cluster>echo Started Building SWARM Cluster: 21:34:15.58
Started Building SWARM Cluster: 21:34:15.58

C:\SD\vagrant-swarm-cluster>echo Ended Building SWARM Cluster: 21:36:24.36
Ended Building SWARM Cluster: 21:36:24.36
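The batch file records `starttime` and `endtime` but leaves the subtraction to the reader. Batch time arithmetic is awkward, so here is a bash sketch that converts the two `HH:MM:SS.cc` stamps from the run above into an elapsed duration (`to_cs` is a hypothetical helper; same-day timestamps assumed):

```shell
#!/usr/bin/env bash
# Convert an HH:MM:SS.cc timestamp to centiseconds since midnight.
to_cs() {
  local h=${1%%:*} rest=${1#*:}
  local m=${rest%%:*} s=${rest#*:}
  local whole=${s%.*} frac=${s#*.}
  # 10# forces base-10 so fields like "08" are not parsed as octal.
  echo $(( (10#$h * 3600 + 10#$m * 60 + 10#$whole) * 100 + 10#$frac ))
}

start=$(to_cs "21:34:15.58")
end=$(to_cs "21:36:24.36")
elapsed=$(( end - start ))   # centiseconds
echo "$(( elapsed / 6000 ))m $(( elapsed % 6000 / 100 )).$(( elapsed % 100 ))s"
# prints: 2m 8.78s
```

That matches the roughly 2 minutes 9 seconds observed for the build.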
