Deployment with Clusters
Deploying an application in containers across a number of machines, with replicas to scale the application as well as recover from failures, is the principal concept of cluster deployment. Such a deployment requires an automation infrastructure to create, manage, and maintain the clusters. The prominent cluster deployment tools we will be looking at are:
- Docker Swarm
- Kubernetes
1. Docker Swarm
A swarm is a group of machines running Docker, tied together to form a cluster managed by a swarm manager. Once a cluster is created, all the machines tied together are referred to as nodes or workers. Only the swarm manager machine has authorization to execute swarm management commands or add more workers. The two roles are:
- Swarm Manager
- Worker or Node
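For instance, swarm-level commands such as listing the nodes only succeed on the manager; a minimal sketch (assuming a swarm has already been initialized, as shown later in this section):
# On the manager: list all nodes in the swarm and their roles
docker node ls
# The same command on a worker is rejected with an error stating that the node is not a swarm manager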
Example of a Docker Swarm Deployment
Let's look at a demonstration of a cluster deployment using Docker Swarm.
-- Aim:
To deploy the static website in a cluster with multiple instances using Docker Swarm.
Cluster/Nodes: a group of virtual machines
Application: the DevOps website served as static web pages
Tool used: Docker Swarm
-- Components:
* Swarm Manager - the local machine <machine you are using> or one of the virtual machines is assumed to be the manager
* DevOps website as a container -
- Dockerfile - using a standalone nginx container and serving the static webpages from source
- docker-compose - using a combination of multiple containers: nginx <to host the website> + git-sync <to sync with the repository> + common container storage
- Virtual machines - either remote cloud machines or a local setup of VMs using VirtualBox
- Setup concern - due to cloud machine availability, let's resort to a local setup for this example and take a peek at an instance of cloud deployment as well.
-- Detour: More on Docker and DockerCompose
While we are in this context, let's take a detour to briefly look at Dockerfile vs docker-compose.
- Dockerfile:
When a self-contained or monolithic container image is to be created, one which provides its functionality entirely by itself, a Dockerfile is used. The Nginx-based Dockerfile shown in the Setup section below is one such example.
- docker-compose:
When multiple containers are required that together provide the functionality as a team, docker-compose is used. We won't be using the docker-compose file below for this example; it is shown for demonstration purposes only. It uses two containers that share the same volume: one to stay in sync with the source code repository and update the resources, and one to host the website.
# A Docker compose to create the application with two containers
# * 1. Nginx container
# * 2. git-sync container
# Reference from: https://hub.docker.com/r/openweb/git-sync/
# Run with `docker-compose up -d` once the dockerCompose file is created
version: "2"
services:
  nginx:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - website_sources:/usr/share/nginx/html:z
    depends_on:
      - git-sync
    restart: always
  git-sync:
    image: openweb/git-sync:0.0.1
    environment:
      GIT_SYNC_REPO: "https://github.com/gcallah/DevOps"
      GIT_SYNC_DEST: "/git"
      GIT_SYNC_BRANCH: "master"
      GIT_SYNC_REV: "FETCH_HEAD"
      GIT_SYNC_WAIT: "10"
    volumes:
      - website_sources:/git:z
    restart: always
volumes:
  website_sources:
    driver: local
-- Installation:
- Ensure docker is installed
- Have a virtual machine setup available using VirtualBox or VMware Workstation, or set up cloud machines
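As a quick sanity check of the installation (version numbers will differ on your machine, and VBoxManage applies only if you chose VirtualBox), the following commands should each print a version:
docker --version
docker-machine version
VBoxManage --version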
-- Setup:
TestBed Type 1: Static source code: use the Dockerfile and docker-compose file below to establish the setup.
The Dockerfile is built from the Nginx base image and copies the website source code from the current directory:
# DockerFile using Nginx base image and website source code copy from current directory
# * Build image with the docker file and run >commands below<
# References:
# * https://www.katacoda.com/courses/docker/create-nginx-static-web-server
# Base Nginx docker image
FROM nginx:alpine
# Maintainer
MAINTAINER Srinivas Piskala Ganesh Babu "spg349@nyu.edu"
# Copy the DevOps repository contents present in the current directory to the default nginx web hosting directory
COPY DevOps /usr/share/nginx/html
# Usage:
# * To update the website source files,
# ** Clone the gcallah/DevOps repo at the same directory where dockerfile exists >git clone<
# *** build a new image with this dockerfile - "docker build -f DockerFile -t devopsweb ."
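Before using the image in a swarm, it can be built and smoke-tested locally. A minimal sketch, assuming the DevOps sources have been cloned next to the Dockerfile; the image tag devopsweb comes from the usage note above, while the container name devopsweb_test is arbitrary:
# Clone the website sources next to the Dockerfile
git clone https://github.com/gcallah/DevOps
# Build the image from the Dockerfile above
docker build -f DockerFile -t devopsweb .
# Run a single container and map local port 8080 to nginx's port 80
docker run -d -p 8080:80 --name devopsweb_test devopsweb
# The site should now be reachable at http://localhost:8080
curl http://localhost:8080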
TestBed Type 2: Dynamic source code fetch: use the Dockerfile and docker-compose file below to establish the setup.
The Dockerfile is built from the Ubuntu base image and uses apt to install nginx (to host the webpage) and git (to fetch the latest website source from the repository):
# Docker file with Ubuntu Base image and performing apt install of,
# * Nginx - To host the website
# * Git - To fetch the website resource from the git repository
# References:
# * https://docs.docker.com/get-started/part2/#apppy
# * https://www.digitalocean.com/community/tutorials/docker-explained-using-dockerfiles-to-automate-building-of-images
# * https://gist.github.com/ivanacostarubio/7044770
# Latest ubuntu base image
FROM ubuntu:latest
# Maintainer
MAINTAINER Srinivas Piskala Ganesh Babu "spg349@nyu.edu"
# Apt update and install - nginx and git
RUN apt-get update
RUN apt-get install -y nginx
RUN apt-get install -y git-core
# Fetching the latest source code from the github repo of devOps
RUN git clone https://github.com/gcallah/DevOps
# Clean up of existing files in the default folder
RUN rm /var/www/html/*
# Uploading the webpages and resource to the default nginx config pointer folder
RUN cp -a DevOps/. /var/www/html/
# Expose ports
EXPOSE 80
# Nginx daemon run
CMD ["nginx", "-g", "daemon off;"]
# Usage:
# * Use the docker build command to build an image out of this docker file or pull from rep
# ** command: "docker build -f >name of the docker file< -t >tag of the image< ."
# *** repo to docker pull from: srinivas11789/devopswebsite
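For the swarm nodes to be able to pull the image, it has to be pushed to a registry. A sketch assuming a Docker Hub account <your-dockerhub-id>; the compose file later in this section instead pulls the prebuilt srinivas11789/devopswebsite image mentioned above:
# Build and tag the image under your registry namespace
docker build -f DockerFile -t <your-dockerhub-id>/devopswebsite:v1 .
# Log in to Docker Hub and push the image so every node can pull it
docker login
docker push <your-dockerhub-id>/devopswebsite:v1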
Common file: The docker-compose file below is common to both testbeds.
Be sure to change the image name to the corresponding docker image you created.
# Docker Compose file to build as a compose or stack or swarm
# References:
# * https://docs.docker.com/get-started/part3/#docker-composeyml
# * https://docs.docker.com/get-started/part5/#persist-the-data
# version
version: "3"
# services declaration
services:
  # Name of the service and image details -
  #   image    - name of the image to pull from
  #   ports    - ports to expose or map
  #   deploy   - configuration to deploy
  #   replicas - number of containers to be replicated
  devops:
    image: srinivas11789/devopswebsite:v1
    ports:
      - "80:80"
    deploy:
      mode: replicated
      replicas: 3
      labels: [APP=devops_website]
      restart_policy:
        condition: on-failure
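Note that the deploy section (replicas, restart_policy) is only honored when the file is deployed to a swarm with docker stack deploy; a plain docker-compose up ignores it. The file itself can still be validated locally, for example:
# Parse the compose file and print the resolved configuration (fails on syntax errors)
docker-compose config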
Steps for usage:
* Use the docker-machine command line with the VirtualBox driver to create the virtual machines for the cluster. For this demo, let's create three virtual machines: one manager and two workers. Use the following commands:
docker-machine create --driver virtualbox myvm1
docker-machine create --driver virtualbox myvm2
docker-machine create --driver virtualbox myvm3
Command line output sample:
MacBook-Pro:devOps deployment$ docker-machine create --driver virtualbox myvm1
Running pre-create checks...
Creating machine...
(myvm1) Copying /Users/darkknight/.docker/machine/cache/boot2docker.iso to /Users/darkknight/.docker/machine/machines/myvm1/boot2docker.iso...
(myvm1) Creating VirtualBox VM...
(myvm1) Creating SSH key...
(myvm1) Starting the VM...
(myvm1) Check network to re-create if needed...
(myvm1) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
* List the virtual machines to verify they were created.
docker-machine ls
Command line output sample:
MacBook-Pro:devOps deployment$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
myvm1 - virtualbox Running tcp://192.168.99.100:2376 v18.03.1-ce
myvm2 - virtualbox Running tcp://192.168.99.101:2376 v18.03.1-ce
myvm3 - virtualbox Running tcp://192.168.99.102:2376 v18.03.1-ce
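The IP address needed for the next step can also be printed directly; for example:
docker-machine ip myvm1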
* The virtual machines can be accessed over ssh to execute commands and set up the swarm. Consider the "myvm1" machine as the manager. Initializing the swarm from "myvm1" makes that node the manager:
docker-machine ssh myvm1 "docker swarm init --advertise-addr <myvm1 ip>"
Command line output sample:
MacBook-Pro:devOps deployment$ docker-machine ssh myvm1 "docker swarm init --advertise-addr 192.168.99.100"
Swarm initialized: current node (lhys3d8lpqc1onnikx5t9jaep) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-0u40ts3aguqoa4jmt7rmrxh1gpkujs2fahhdmi55uz0eum3bxi-cwaxy2whk4pq2sxpg0g7prao0 192.168.99.100:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
Next, the worker nodes have to be configured so that they recognize the swarm configuration and know the manager. Make sure to use the token and IP of the manager node generated when the swarm was initialized:
docker-machine ssh myvm2 "docker swarm join --token <token> <ip>:2377"
Command line output sample:
MacBook-Pro:devOps deployment$ docker-machine ssh myvm2 "docker swarm join --token SWMTKN-1-0u40ts3aguqoa4jmt7rmrxh1gpkujs2fahhdmi55uz0eum3bxi-cwaxy2whk4pq2sxpg0g7prao0 192.168.99.100:2377"
This node joined a swarm as a worker.
* Perform the same operation for myvm3.
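If the worker join command is misplaced, it can be printed again from the manager, and the cluster membership can be verified there as well:
# Re-print the join command (including the token) for workers
docker-machine ssh myvm1 "docker swarm join-token worker"
# List all nodes and their roles to confirm the swarm is fully formed
docker-machine ssh myvm1 "docker node ls"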
* Configure your shell to use the docker-machine environment of myvm1 (the manager), as per the instructions in the Docker Swarm reference:
MacBook-Pro:devOps deployment$ docker-machine env myvm1
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/darkknight/.docker/machine/machines/myvm1"
export DOCKER_MACHINE_NAME="myvm1"
# Run this command to configure your shell:
# eval $(docker-machine env myvm1)
MacBook-Pro:devOps deployment$ eval $(docker-machine env myvm1)
Command line output sample (shows the manager node is now active):
MacBook-Pro:devOps deployment$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
myvm1 * virtualbox Running tcp://192.168.99.100:2376 v18.03.1-ce
myvm2 - virtualbox Running tcp://192.168.99.101:2376 v18.03.1-ce
myvm3 - virtualbox Running tcp://192.168.99.102:2376 v18.03.1-ce
* Deploy the stack described by the compose file onto the swarm:
MacBook-Pro:devOps deployment$ docker stack deploy -c docker-compose.yml devopswebsite
Output:
Creating network devopswebsite_default
Creating service devopswebsite_devops
MacBook-Pro:devOps deployment$ docker stack ps devopswebsite
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
0cme6z0e9abz devopswebsite_devops.1 srinivas11789/devopswebsite:v1 myvm3 Running Preparing 12 seconds ago
sfrqlbtx8tes devopswebsite_devops.2 srinivas11789/devopswebsite:v1 myvm2 Running Running 12 seconds ago
mpq4i8s0zwm5 devopswebsite_devops.3 srinivas11789/devopswebsite:v1 myvm1 Running Running 12 seconds ago
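From the manager shell, the stack can also be inspected and rescaled; a sketch (the service name devopswebsite_devops is derived from the stack name plus the service key in the compose file):
# List the services in the stack with their replica counts
docker stack services devopswebsite
# Scale the service up to five replicas spread across the swarm
docker service scale devopswebsite_devops=5
# Watch the tasks being (re)scheduled over the nodes
docker service ps devopswebsite_devops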
* Test by connecting to the virtual machines' IP addresses with curl or a browser to view the DevOps website:
MacBook-Pro:devOps deployment$ curl http://192.168.99.100
MacBook-Pro:devOps deployment$ curl http://192.168.99.101
MacBook-Pro:devOps deployment$ curl http://192.168.99.102
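Once done, the setup can be torn down again; a sketch of the cleanup steps:
# Remove the stack from the swarm
docker stack rm devopswebsite
# Reset the shell back to the local docker daemon
eval $(docker-machine env -u)
# Have the workers leave the swarm, then the manager
docker-machine ssh myvm2 "docker swarm leave"
docker-machine ssh myvm3 "docker swarm leave"
docker-machine ssh myvm1 "docker swarm leave --force"
# Stop and delete the virtual machines
docker-machine stop myvm1 myvm2 myvm3
docker-machine rm myvm1 myvm2 myvm3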