Kubernetes is an open-source project that helps manage a cluster of containers as a single system. It is also a critical tool for managing and running Docker containers across multiple hosts, keeping track of where those containers are placed.
For over a decade, Google has run its workloads on container technology. Kubernetes makes it easier for Google to share its expertise in containerized applications and to provide an open platform for running containers at scale.
In this article, we will walk you through the steps to build, deploy, and manage Docker containers in Kubernetes.
Requirements
To complete this process, you will need:
- An active Docker Hub account to store the image
- A Kubernetes cluster
- A web server or application; in our case, a "todo list" application backed by a MongoDB database
- Git installed on your machine
How Does Kubernetes Work?
A running Kubernetes instance is known as a cluster: a master server plus multiple minions. The master manages the minions, and the command-line tools connect to its API endpoint. The Docker hosts receive instructions from the master server to run containers.
The cluster in Kubernetes contains several units:
- Master: Connects all the components of the cluster so they work as one.
- Pods: Collections of units that run one or more containers.
- Minions: The Docker hosts that receive instructions from the master to manage and run containers.
- Nodes: Components linked to specific Pods; each can be either a physical or a virtual machine.
Step 1: Creating an Image Using Docker
The first step is to containerize the web application by building a Docker image for it. Go to your home directory, then use Git to clone the tutorial web application from its repository on GitHub.
$ cd ~
$ git clone https://github.com/janakiramm/todo-app.git
Build the image from the Dockerfile. Using the -t switch, include your username, the name of the image, and an optional tag:
$ docker build -t sammy/todo-app .
If the image is successfully created, you should see output like this:
Sending build context to Docker daemon 8.238MB
Step 1/7 : FROM node:slim
 ---> 286b1e0e7d3f
Step 2/7 : LABEL maintainer = "jani@janakiram.com"
 ---> Using cache
 ---> ab0e049cf6f8
Step 3/7 : RUN mkdir -p /usr/src/app
 ---> Using cache
 ---> 897176832f4d
Step 4/7 : WORKDIR /usr/src/app
 ---> Using cache
 ---> 3670f0147bed
Step 5/7 : COPY ./app/ ./
 ---> Using cache
 ---> e28c7c1be1a0
Step 6/7 : RUN npm install
 ---> Using cache
 ---> 7ce5b1d0aa65
Step 7/7 : CMD node app.js
 ---> Using cache
 ---> 2cef2238de24
Successfully built 2cef2238de24
Successfully tagged sammy/todo-app:latest
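This article does not show the Dockerfile itself, but the seven build steps in the output imply one along these lines (a reconstruction, not the tutorial's actual file):

```dockerfile
# Hypothetical Dockerfile, inferred from the build output above
FROM node:slim
LABEL maintainer = "jani@janakiram.com"
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY ./app/ ./
RUN npm install
CMD node app.js
```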
Make sure the image has been created by typing the command below:
$ docker images
Next, store the image in Docker Hub's public registry by logging in to your Docker Hub account:
$ docker login
Enter your credentials, then tag the image with your Docker Hub username:
$ docker tag sammy/todo-app your_docker_hub_username/todo-app
Push the image to Docker Hub:
$ docker push your_docker_hub_username/todo-app
Once you have the Docker image in the registry, you can proceed to set up the Kubernetes application.
Step 2: Deploying a MongoDB Pod in Kubernetes
The application uses a MongoDB database to store the to-do lists created in the web application. To deploy MongoDB in Kubernetes, you need to run it as a Pod.
Start by creating a YAML file named db-pod.yaml:
$ nano db-pod.yaml
Add the following Pod definition. It exposes MongoDB's standard port, 27017, and sets the name and app labels that are used to configure the Pod:
db-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: db
  labels:
    name: mongo
    app: todoapp
spec:
  containers:
  - image: mongo
    name: mongo
    ports:
    - name: mongo
      containerPort: 27017
    volumeMounts:
      - name: mongo-storage
        mountPath: /data/db
  volumes:
    - name: mongo-storage
      hostPath:
        path: /data/db
Next, run the command below to create your Pod:
$ kubectl create -f db-pod.yaml
Make sure you get the output below:
pod "db" created
Verify the Pod is running with the command below:
$ kubectl get pods
The output below shows that the Pod is running successfully:
NAME      READY     STATUS    RESTARTS   AGE
db        1/1       Running   0          2m
To make the Pod reachable by other Pods within the cluster, create a file named db-service.yaml containing the Service definition for MongoDB.
db-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: db
  labels:
    name: mongo
    app: todoapp
spec:
  selector:
    name: mongo
  type: ClusterIP
  ports:
    - name: db
      port: 27017
      targetPort: 27017
Save your file and use kubectl to submit it to the cluster:
$ kubectl create -f db-service.yaml
You should have something like this as your output:
service "db" created
To find out which port the Service exposes, run the command below:
$ kubectl get services
The output should look like this:
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
db           ClusterIP   10.109.114.243   <none>        27017/TCP   14s
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP     47m
In this sample, you will notice that the Service you created is exposed on port 27017, the standard port used by MongoDB. With these results, the web application can reach the database through the Service; this is how Pods discover and connect to each other.
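Note that Step 3 below deletes a Pod named web, and the final cleanup references a web-service.yaml, although this article never shows those manifests. Sketches of what they presumably look like, with the image, labels, and port 3000 assumed from the Replica Set definition in Step 3:

```yaml
# web-pod.yaml -- hypothetical; this article does not show the original
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    name: web
    app: todoapp
spec:
  containers:
  - image: sammy/todo-app
    name: web
    ports:
    - containerPort: 3000
---
# web-service.yaml -- hypothetical
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    name: web
    app: todoapp
spec:
  selector:
    name: web
  type: NodePort
  ports:
    - name: http
      port: 3000
      targetPort: 3000
```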
Step 3: Scaling the Web Application
Scaling the web application with a Replica Set ensures that a defined number of Pods is always running. When Pods are packaged as a Replica Set, Kubernetes keeps at least the specified number of them alive.
We need to delete the existing web Pod and recreate it through a Replica Set.
To delete the current Pod, run the command below:
$ kubectl delete pod web
The output should be as follows:
pod "web" deleted
Proceed to create the new Replica Set.
Start by creating the file web-rs.yaml, then add the following code:
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: web
  labels:
    name: web
    app: todoapp
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - name: web
        image: sammy/todo-app
        ports:
        - containerPort: 3000
Now save and exit.
Go ahead and create your Replica Set as shown below:
$ kubectl create -f web-rs.yaml
The output should look like this:
replicaset "web" created
Check to verify the exact number of Pods created:
$ kubectl get pods

NAME        READY     STATUS    RESTARTS   AGE
db          1/1       Running   0          18m
web-n5l5h   1/1       Running   0          25s
web-wh6nf   1/1       Running   0          25s
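If you script around kubectl, you can count how many Pods are in the Running state by filtering its output. A minimal sketch, run here against the sample output above rather than a live cluster:

```shell
# Sample `kubectl get pods` output; on a real cluster you would pipe
# the live command instead:
#   kubectl get pods | awk 'NR > 1 && $3 == "Running" { n++ } END { print n }'
pods='NAME        READY     STATUS    RESTARTS   AGE
db          1/1       Running   0          18m
web-n5l5h   1/1       Running   0          25s
web-wh6nf   1/1       Running   0          25s'

# Skip the header row, then count lines whose STATUS column is Running
running=$(printf '%s\n' "$pods" | awk 'NR > 1 && $3 == "Running" { n++ } END { print n }')
echo "Running pods: $running"    # prints "Running pods: 3"
```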
Test to see if the Replica Set is functioning. To do so, delete one of your Pods and observe what happens:
$ kubectl delete pod web-wh6nf
The output should contain the text below:
pod "web-wh6nf" deleted
Check the Pods once more to confirm they are functioning:
$ kubectl get pods
Output
NAME        READY     STATUS              RESTARTS   AGE
db          1/1       Running             0          19m
web-n5l5h   1/1       Running             0          1m
web-wh6nf   1/1       Terminating         0          1m
web-ws59m   0/1       ContainerCreating   0          2s
When one Pod is removed, Kubernetes creates another Pod to replace it, keeping the desired count intact.
To scale your Replica Set to more web Pods, for instance to 7, run the command below:
$ kubectl scale rs/web --replicas=7
Output
replicaset "web" scaled
Check the number of Pods:
$ kubectl get pods
The output should be as follows:
NAME        READY     STATUS              RESTARTS   AGE
db          1/1       Running             0          22m
web-4nh4g   1/1       Running             0          21s
web-7vbb5   1/1       Running             0          21s
web-8zd55   1/1       Running             0          21s
web-f8hvq   0/1       ContainerCreating   0          21s
web-ffrt6   1/1       Running             0          21s
web-k6zv7   0/1       ContainerCreating   0          21s
web-n5l5h   1/1       Running             0          3m
web-qmdxn   1/1       Running             0          21s
web-vc45m   1/1       Running             0          21s
web-ws59m   1/1       Running             0          2m
Kubernetes has now scaled up the web application's Pods. When the Service receives a request, it is directed to one of the Pods in your Replica Set.
When the load reduces, you can reset the service to the initial state of two Pods using the command below:
$ kubectl scale rs/web --replicas=2
Output
replicaset "web" scaled
Kubernetes now terminates all of the Pods apart from two. To watch them wind down, run the command below:
$ kubectl get pods
You should have the output below:
NAME        READY     STATUS        RESTARTS   AGE
db          1/1       Running       0          24m
web-4nh4g   1/1       Terminating   0          2m
web-7vbb5   1/1       Terminating   0          2m
web-8zd55   1/1       Terminating   0          2m
web-f8hvq   1/1       Terminating   0          2m
web-ffrt6   1/1       Terminating   0          2m
web-k6zv7   1/1       Terminating   0          2m
web-n5l5h   1/1       Running       0          5m
web-qmdxn   1/1       Terminating   0          2m
web-vc45m   1/1       Terminating   0          2m
web-ws59m   1/1       Running       0          4m
Verify that the Replica Set still replaces lost Pods by deleting one of the two remaining ones:
$ kubectl delete pod web-ws59m
The output should be as follows:
pod "web-ws59m" deleted

$ kubectl get pods
Output
NAME        READY     STATUS              RESTARTS   AGE
db          1/1       Running             0          25m
web-n5l5h   1/1       Running             0          7m
web-ws59m   1/1       Terminating         0          5m
web-z6r2g   0/1       ContainerCreating   0          5s
Whenever the Pod count changes, Kubernetes adjusts it to match the number of replicas specified in the YAML file.
To delete all the resources created in this guide, run the command below:
$ kubectl delete -f db-pod.yaml -f db-service.yaml -f web-rs.yaml -f web-service.yaml
The output should be as follows:
pod "db" deleted
service "db" deleted
replicaset "web" deleted
service "web" deleted
Conclusion
If you followed this tutorial step by step, you should now be able to deploy and scale a Docker application with Kubernetes. We hope this guide was helpful.
If you want to read more about Kubernetes, read Hostadvice’s guide on Kubernetes Hosting.