Docker Swarm, also known as Docker Engine in swarm mode, is a clustering and orchestration tool for Docker containers that is used to manage a group of Docker hosts.
Swarm mode was introduced in Docker Engine 1.12, and it lets you add or remove containers as computing demand changes.
There are two main components of Docker Swarm:
Manager Node: Handles cluster management tasks such as scheduling services, maintaining the cluster state, and serving the swarm mode API endpoints.
Worker Node: Executes the tasks (containers) dispatched by the manager node.
In this tutorial, we will go through the details of installing and configuring Docker Swarm mode on CentOS 7. We will use three CentOS 7 servers running the Docker engine: two servers will act as Worker nodes and the remaining one will be the Manager.
Pre-requisites
For this tutorial, we will need the following:
- A local machine with Docker installed. The machine can be running Windows, Linux, or macOS.
- Three servers with CentOS 7 fully installed. One server will be the Manager node while the other two servers will be the Worker nodes.
- We will use the following IP addresses: 192.168.0.101 for the Manager node, 192.168.0.102 for Worker node1, and 192.168.0.103 for Worker node2.
Log in to each of your CentOS 7 servers and run the command below to ensure the system is updated with the latest available packages:
yum update -y
Get Started
Before you begin, configure the /etc/hosts file on every node so that the nodes can communicate with each other using hostnames.
Add the following entries to the /etc/hosts file on each node:
192.168.0.101 dkmanager.example.com dkmanager
192.168.0.102 workernode1.example.com workernode1
192.168.0.103 workernode2.example.com workernode2
Save the file once you’re finished.
Now set the hostname of each node to match the hosts file.
Run the appropriate command on each node.
Manager node:
hostnamectl set-hostname dkmanager
Worker node1:
hostnamectl set-hostname workernode1
Worker node2:
hostnamectl set-hostname workernode2
Step 1: Installing Docker Engine
Now, install the Docker engine on each node. Set up the Docker repository and run the installation commands below, starting with the Manager node.
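The exact commands can vary; a typical way to install Docker CE from the official Docker repository on CentOS 7 is:
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl start docker
systemctl enable docker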
Do the same for the two Worker node servers.
Step 2: Configuring Firewall on Each Node
The next step is to open the required ports on the firewall so that the swarm cluster can communicate correctly.
Run the commands below on the Manager node:
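A typical firewalld setup opens the swarm ports (2377/tcp for cluster management, 7946/tcp and 7946/udp for node communication, 4789/udp for overlay network traffic) plus 80/tcp for the web service created later:
firewall-cmd --permanent --add-port=2377/tcp
firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --reload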
Restart the docker service:
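systemctl restart docker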
Open the firewall ports below on each worker node then restart the docker service:
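The worker nodes need the same rules except 2377/tcp, which is only required on manager nodes:
firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --reload
systemctl restart docker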
Step 3: Launch the Swarm Cluster
Initialize the swarm on your Manager node. To do this, run the command below.
docker swarm init --advertise-addr 192.168.0.101
The output confirms that the current node is now a manager and prints a docker swarm join command containing a join token.
Worker nodes use this token to join the swarm.
Verify the manager status using the command below:
docker info
The output should look like this:
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 17.12.0-ce
Storage Driver: devicemapper
 Pool Name: docker-253:0-618740-pool
 Pool Blocksize: 65.54kB
 Base Device Size: 10.74GB
 Backing Filesystem: xfs
 Udev Sync Supported: true
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Data Space Used: 11.8MB
 Data Space Total: 107.4GB
 Data Space Available: 3.817GB
 Metadata Space Used: 581.6kB
 Metadata Space Total: 2.147GB
 Metadata Space Available: 2.147GB
 Thin Pool Minimum Free Space: 10.74GB
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.140-RHEL7 (2017-05-03)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: viwovkb0bk0kxlk98r78apopo
 Is Manager: true
 ClusterID: ttauawqrc8mmd0feluhcr1b0d
 Managers: 1
 Nodes: 1
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address: 192.168.0.102
 Manager Addresses:
  192.168.0.102:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 89623f28b87a6004d4b785663257362d1658a729
runc version: b2567b37d7b75eb4cf325b77297b140ea686ce8f
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-693.11.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.102GiB
Name: centOS-7
ID: DN4N:BHHJ:6DJ7:SZPG:FJJC:XP6T:23R4:CESK:E5PO:SJ6B:BOST:HZQ5
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
You can see the entire list of nodes present in your cluster using the command below:
docker node ls
The output should be like this:
Step 4: Add the Worker Nodes to the Swarm
Join each Worker node to the swarm by running the join command from the swarm init output on that node:
docker swarm join --token SWMTKN-1-3793hvb71g0a6ubkgq8zgk9w99hlusajtmj5aqr3n2wrhzzf8z-1s38lymnir13hhso1qxt5pqru 192.168.0.101:2377
The output should be:
To check the status of the nodes, run the command below on the Manager node:
docker node ls
If the process is successful, you should get the output shown below:
In case you lose the worker join token, you can retrieve it again from the Manager node with the command below:
docker swarm join-token worker -q
By now, the docker swarm mode should be running successfully with two worker nodes.
Step 5: Set up the Service in Swarm mode
Now launch a service in swarm mode. In this example, we will launch an Apache (httpd) web service with three replica containers.
Run the command below from the Docker Manager node only:
docker service create -p 80:80 --name webservice --replicas 3 httpd
The output should look like this:
To check the status of your service, run the command below:
docker service ls
Output will be:
The output above shows that the containers have been deployed successfully across the cluster nodes. You can now open the web page from any node using the following addresses in a web browser:
http://192.168.0.101 http://192.168.0.102 http://192.168.0.103
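You can also test the service from the command line; thanks to the swarm routing mesh, any of the node addresses will answer, for example:
curl http://192.168.0.101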
Step 6: Testing Container Self-Healing
Docker Swarm mode includes useful features such as container self-healing. If a container fails, the manager ensures that a new container is started automatically so the desired number of replicas is maintained.
To test that this works, let's remove a container from workernode2 and find out whether a new container is launched to replace it.
Run the command below on workernode2 to list the container IDs:
docker ps
The output should be like this:
Now, run the command below to remove container 9b01b0a55cb7:
docker rm 9b01b0a55cb7 -f
Now check from the Manager node whether a new container has been deployed:
docker service ps webservice
You will see that one container has failed and, almost immediately, a replacement container has been started on workernode2:
Step 7: Scaling up and down containers for the service
In a Docker Swarm cluster, it's possible to scale the number of containers up and down. In this case, let's scale the service up to 5 containers:
[root@dkmanager ~]# docker service scale webservice=5
webservice scaled to 5
[root@dkmanager ~]#
Check the status of the service again using the command below:
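docker service ls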
Now, let’s try to scale down container to 2 for the service:
[root@dkmanager ~]# docker service scale webserver=2 webserver scaled to 2 [root@dkmanager ~]#
Check that the change has taken effect with the command below:
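docker service ls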
You now have a fully configured Docker Swarm cluster on CentOS 7.
Conclusion
There you have it. That's how easy it is to set up Docker Swarm using the swarm mode built into the Docker engine. Note that once the setup is done, you should protect the servers with an additional layer of security; a firewall with monitoring capabilities is a good start.