Explaining Docker with three projects

Soumya Sen
9 min read · Aug 2, 2020

With this tutorial we will get to know about the following things:

1. Docker installation on an AWS Linux machine

2. Running a Docker container

3. Docker volumes

4. Hosting a website in a Docker container

5. Running multiple containers using Docker Compose

6. Scaling an application using Docker Swarm

Project 1: Deploy a website in Docker Apache Container, and demonstrate how we can dynamically change content in the container by making changes on the host machine.

Following are the steps to solve this problem:

o Install Docker

o Pull an Apache Docker image onto the system

o Run an Apache container

o Bind a local directory to the Docker container

o Deploy the custom-built website

o Make changes in the source code of the website and verify that the changes take effect in the container

Prerequisite — An Ubuntu instance in AWS

Installing Docker:

o First the apt package index needs to be updated.
$ sudo apt-get update
apt is the package manager used to download and install software on Ubuntu.

o Installing Docker requires a couple of commands to be executed. One way is to run all the commands one by one. Another way is to combine all the commands in a .sh file and run the script, which will install everything automatically. In this tutorial we are going to follow the second method. Following is the content of the .sh file:

sudo apt-get update
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"

sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io

o Give executable permission to the script with the following command
$ sudo chmod +x /path/to/file.sh

o Now execute the script with the following command and it will install Docker on the system
$ ./file.sh

o Once the installation is successful, run the following command to check the status of Docker. If it shows active (running), the Docker installation was successful and Docker is running on the system.
$ sudo systemctl status docker
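For reference, a healthy installation reports something like this (output abridged):
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running)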

Pull Apache Docker image:

Once Docker is installed, the next step is to pull the Apache Docker image. Following are the steps for it.

o $ sudo docker pull httpd:latest
This will pull the latest Apache httpd image to the system. If you need a specific version, you can specify its tag instead of latest, as shown below.
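For example, to pin a specific version (the 2.4 tag here is just an illustration):
$ sudo docker pull httpd:2.4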

o Run the following command to verify the image has been pulled. This command will show the list of available Docker images.
$ sudo docker images

o The next step is to create a container from the Apache image, but here we need some additional configuration. Our goal is to deploy a custom-built website in the container so that if we change the source code of the website on the host machine, the content changes in the container as well. In this way we can control the live site without even entering the container. There are two ways to do this. One is to create a Docker volume and attach it to the container. The other is a bind-mount, in which we attach a directory of the host machine to a specified path in the container, and whatever changes are made in that directory take effect in the container as well. In this tutorial we are going to use bind-mount; for comparison, a minimal sketch of the volume approach follows.
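A minimal sketch of the named-volume alternative, assuming illustrative names sitedata and apache-vol:
$ docker volume create sitedata
$ docker container run -d -p 8000:80 --mount type=volume,source=sitedata,target=/usr/local/apache2/htdocs --name apache-vol httpd:latest
With a named volume, Docker manages the storage location itself, whereas a bind-mount gives us direct control over a host directory, which is what this project needs.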

o First create a directory on the host machine and create an index.html file inside it (because index.html is the default page for Apache).
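For example (the HTML content is only a placeholder; the /home/ubuntu/data path matches the one used in the run command below):
$ mkdir -p /home/ubuntu/data
$ echo '<h1>Hello from Docker</h1>' > /home/ubuntu/data/index.html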

o Once done, we need to create a container attaching the local directory to Apache's default document root.
$ docker container run -d -p 8000:80 --mount type=bind,source=/home/ubuntu/data,target=/usr/local/apache2/htdocs --name apache httpd:latest

Here -p 8000:80 specifies that port 80 of the container will be exposed through port 8000 of the host machine.
--mount type=bind specifies that this is a bind-mount.
source=/path is the path of the host directory that will be attached to the container.
target is the directory inside the container to which the bound directory will be attached.
--name sets the name of the container.
And httpd:latest is the image name.

Once the command runs successfully you can verify it using the $ docker ps command, which will show all the running containers.

o Now if you enter the public IP of the AWS instance in the browser you will see the custom-built website. The URL will be public_ip_of_aws_instance:8000

o Now if you change the source code of the website on the host machine and refresh the URL mentioned in the previous step, you will observe that the changes are displayed.
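For example, overwrite the file on the host (again with placeholder content) and re-check the page, either in the browser or with curl from the host itself:
$ echo '<h1>Updated from the host</h1>' > /home/ubuntu/data/index.html
$ curl http://localhost:8000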

Project 2: Deploy Apache and Nginx containers using Docker Compose; Apache should be exposed on port 91 and Nginx on port 92.

Following are the steps to solve this problem:

o Install Docker

o Install Docker Compose

o Create Apache and Nginx containers using Docker Compose with the specified ports

Solution –

o Install Docker on the AWS instance. Refer to the first project for this.

o Once the Docker installation is complete, the next step is to install Docker Compose.
Docker Compose is a tool for defining and running Docker applications across multiple containers. These days applications are built in a microservice architecture, where an application is composed of small parts that are developed and run independently, and the final application is the combination of all these independent parts. Docker Compose is extremely helpful for implementing this kind of architecture, where each service runs independently in a separate container, and with a single docker-compose command we can bring up all the containers and make the final application available.

o $ sudo curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
This command downloads Docker Compose version 1.26.2; for a newer version, change the version number in the URL.

o Next step is to apply executable permission to the binary.
$ sudo chmod +x /usr/local/bin/docker-compose

o Now Docker Compose is ready for use. To verify the installation, we can check the version with the following command
$ docker-compose --version

o The next step is to create a docker-compose.yaml file, inside which all the configuration is scripted. Following is the docker-compose file for this problem statement, where we create two containers for Apache and Nginx and publish them on ports 91 and 92 respectively. Remember that indentation in this file is extremely important.

version: '3'
services:
  apache:
    image: httpd:latest
    ports:
      - "91:80"
  nginx:
    image: nginx:latest
    ports:
      - "92:80"

o Another advantage of using Docker Compose is that we don't need to pull the images separately; pulling the images and running containers from them is all taken care of by Docker Compose. The Docker Compose file should always be named docker-compose.yaml.

o The next step is to navigate to the directory containing the file and run the following command.
$ docker-compose up -d

o If the compose file is scripted properly, it will pull the images and start the containers.

o Now if you run the following command you will see that two containers are running
$ docker ps

o Now if you open the following URLs in a browser, you will see that the default pages of Apache and Nginx load successfully
public_ip_of_AWS_Instance:91 -> Apache
public_ip_of_AWS_Instance:92 -> Nginx
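When you are done, the same compose file can be used to stop and remove both containers with a single command:
$ docker-compose down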

Project 3: Initialise a Docker Swarm cluster, and deploy an Apache web server in the cluster with 3 replicas.

Following are the steps to solve this project –

o Launch three AWS instances. One will work as the swarm manager and the other two will serve as workers

o Install Docker on all the instances

o Initialise a Docker Swarm cluster on one of the instances. This instance will be the manager node.

o Join the other two instances to the cluster as workers

o Create a Docker network on the manager node

o Create Docker containers in the cluster through the manager

Solution -

o The first step is to launch three AWS instances and make sure the security group is configured so that they can communicate with each other.

o Next, install Docker on all three instances. Refer to the steps mentioned in the first project.
For the worker machines, instead of running all the installation commands you can simply run $ sudo apt-get install docker.io

o Now we need to initialise Docker Swarm on the manager node. It is important to remember that the machine on which we initiate the swarm becomes the manager node of the cluster. However, a single cluster can have more than one manager node for load balancing and fault tolerance; ideally a cluster should have at least three manager nodes.

o The following command will initiate Docker Swarm. The IP address needs to be the public IP address of your EC2 instance; you can get it from the EC2 dashboard.
$ sudo docker swarm init --advertise-addr public_ip

o If the above command runs successfully, you will get the following screen.

Upon successful Docker Swarm initialisation you will get a join command similar to the one marked in the picture. This command needs to be executed on the worker machines to join them to the cluster.
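The output looks similar to the following (the node ID, token, and IP here are placeholders):
Swarm initialized: current node (abc123) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-xxxxx 1.2.3.4:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.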

o Next, execute the command obtained in the above step on the worker machines. You will get the message "This node joined a swarm as a worker."

o This means our cluster is ready with one manager and two workers. To get the complete list of nodes, run the following command on the manager.
$ sudo docker node ls
You will get a list like the following
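The listing looks roughly like this (IDs shortened, hostnames illustrative; the asterisk marks the node you are logged into):
ID        HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
abc123 *  manager1   Ready    Active         Leader
def456    worker1    Ready    Active
ghi789    worker2    Ready    Active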

o Now we need to create a Docker network. A Docker network is needed so that the containers can communicate with each other. This is mandatory for deploying a multi-tier application where each microservice is deployed in a separate container and they need to communicate with each other to make the complete application operational. An overlay network allows communication between the Docker containers deployed by swarm and standalone Docker containers. Following is the command to create an overlay Docker network.
$ docker network create --driver overlay overlay_network

o To verify that the Docker network was created successfully, run the following command. It will list all the available networks, and in the list you can see the network you created (here I gave the name overlay_network) along with some default networks. Its scope will be swarm.
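$ docker network ls
The output will look roughly like this (IDs elided; the exact set of default networks can vary):
NETWORK ID   NAME              DRIVER    SCOPE
...          bridge            bridge    local
...          ingress           overlay   swarm
...          overlay_network   overlay   swarm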

o The next step is to pull the Apache image on the manager node.
$ sudo docker pull httpd:latest

o Now we can create the Apache service in the swarm cluster with 3 replicas (as mentioned in the problem statement). Following is the command.
$ docker service create --name demoapp --replicas 3 --network overlay_network -p 8000:80 httpd

o If the containers are successfully created, you will get a view like the following.
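On recent Docker versions the service create command itself prints a convergence summary roughly like this:
overall progress: 3 out of 3 tasks
1/3: running
2/3: running
3/3: running
verify: Service converged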

o Now, using the following command, we can see the list of nodes that are running the demoapp service.
$ docker service ps demoapp
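The output looks roughly like the following (IDs elided, hostnames illustrative):
ID    NAME        IMAGE          NODE       DESIRED STATE   CURRENT STATE
...   demoapp.1   httpd:latest   manager1   Running         Running 2 minutes ago
...   demoapp.2   httpd:latest   worker1    Running         Running 2 minutes ago
...   demoapp.3   httpd:latest   worker2    Running         Running 2 minutes ago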

Here you can see all three nodes are running the same service container. If any worker node goes down, the other worker nodes will take the load, but if the manager node goes down the whole cluster will be down. That is why it is always better to have a minimum of 3 managers, and to have more worker nodes than the number of replicas in the cluster.
