Scale Your Docker Containers with Docker Swarm
Extending the Hive
Swarm is a clustering and scheduling tool for Docker containers. Not only does it make the clustering of containers possible, it makes it quite easy. Once deployed, that swarm will behave as if it were a single, virtual system.
Why is this important? In a word, redundancy. Should you deploy a single container for a system and that container go down, the system would go down with it. By making use of Docker Swarm, if one container goes down, the other nodes will pick up the slack. That redundancy is crucial for business uptime.
Installing Docker
Before I get into showing you how to launch a Docker swarm, I'll first install Docker and deploy a single container. I'll be demonstrating by deploying the ever-popular Nginx container on Ubuntu Server 18.04.
To begin, install Docker on Ubuntu Server. I'll be setting up three Ubuntu Server machines: one to serve as the manager and two to serve as nodes. The following steps should take place on all three machines.
Because it's always wise to update and upgrade, you can take care of that first. Remember, however, should the kernel be upgraded in the process, you'll want to reboot the server, so make sure you run the update/upgrade at a time when a reboot is possible.
Log in to Ubuntu Server and issue:
sudo apt-get update
sudo apt-get upgrade -y
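On Ubuntu Server, a pending reboot is flagged by the file /var/run/reboot-required, so a quick way to check whether the upgrade calls for one is:
[ -f /var/run/reboot-required ] && echo "Reboot required"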
Once the upgrade completes, reboot (if necessary). Now you can move on to installing Docker, which is incredibly easy because Docker is found in the standard repositories. From the terminal window, issue the command
sudo apt-get install docker.io -y
which will pick up all the necessary dependencies. When installation finishes, you then need to add your user to the docker group; deploying containers with sudo can lead to security issues, so it's best to run the docker command as a standard user.
To add your user to the docker group, enter:
sudo usermod -aG docker $USER
With the user added, log out and log back in. You should now be able to run the docker command as that user, no sudo required.
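A quick way to confirm the group change took effect is to run a harmless docker command and make sure it completes without a permission error:
docker version
If the output includes both Client and Server sections, the Docker daemon accepted your connection without sudo.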
Once you've taken care of these first steps on the manager and both nodes, you're ready to deploy.
Deploying a Container
Before I get into deploying a service on the cluster, it's best to review how to deploy a container with the docker command. Because I'm focusing on Nginx, I'll deploy that as a single container. The first thing to do is pull down the latest official version of the Nginx image from Docker Hub:
docker pull nginx
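If you want to confirm the image is now stored locally, list your images:
docker images
You should see nginx in the list, along with its tag and size.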
From this image, you can then create as many containers as you like. To deploy Nginx as a container on your network (exposing it on both internal and external port 80), run
docker run --name NGINX -p 80:80 nginx
where NGINX is the name of the created container. If you're already using port 80 for another web server on your network, you could instead expose the container's internal port 80 on network port 8080:
docker run --name NGINX -p 8080:80 nginx
This command directs all network traffic arriving at the server's IP address on port 8080 to port 80 on the container. The problem with running the command as illustrated above is that you won't get your prompt back. The only way to return to the command line is to kill the container (Ctrl+C).
To deploy the container and get your command prompt back, you must deploy the container in detached mode, which means the container will run in the background,
docker run --name NGINX -p 8080:80 -d nginx
so you can continue working without having to open another terminal. This command returns your prompt to you, ready for more work.
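To verify that the detached container is up and serving pages, you can list the running containers and then request the exposed port (this assumes the 8080:80 mapping above and that curl is installed):
docker ps
curl http://localhost:8080
The curl command should return the HTML of the default Nginx welcome page.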
Now that you know how to deploy a container, you'll want to kill the Nginx deployment, so the port is available for the swarm. When you deploy a container, it will have an associated ID (a random string of characters). To discover the ID, issue the command:
docker ps -a
You will see the ID listed in the first column. If the ID in question is f9756e10a9d1, enter
docker stop f9756e10a9d1
docker rm f9756e10a9d1
to stop and delete that container. Note that you don't have to type the entire ID; the first four characters will work, as long as they uniquely identify the container.
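Because the container was launched with --name NGINX, you could also stop and remove it by name rather than by ID:
docker stop NGINX
docker rm NGINX
Either approach frees the port for the swarm.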
Now you are free to begin the process of creating your first Docker swarm.
Creating the Manager
The first postinstall step is to create the manager. The swarm manager is charged with receiving commands on behalf of the cluster and assigning containers to nodes. The manager uses the Raft consensus algorithm [1] to keep track of swarm states. Without this algorithm, nodes would fall out of sync, and failover wouldn't be possible. The manager is also where containers are deployed into the swarm, so once the nodes are connected, you will spend most of your Docker time working from the manager.
Before I continue, here are the hostnames and IP addresses of the machines I'll be using:
- dockermanager at 192.168.1.71
- dockernode1 at 192.168.1.72
- dockernode2 at 192.168.1.73
Make sure to alter the hostnames and IP addresses to suit your configuration. If you're testing this on an internal network with the same network scheme as above, you might want to start out by assigning the machines the same hostnames and IP addresses I use here, for the sake of simplicity.
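If you need to change a machine's hostname to match, one way to do so is with hostnamectl, run on each machine with the appropriate name substituted:
sudo hostnamectl set-hostname dockermanager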
On the dockermanager machine, the swarm is created with a single command. Remember, the manager is at IP address 192.168.1.71, so the command for initialization is:
docker swarm init --advertise-addr 192.168.1.71
When this command completes, you'll be presented with a join token (Figure 1). Make sure to copy that token into a file, because you will need it to join the nodes to the swarm. To ensure your swarm is active, enter:
docker info
You should see, among a long list of information (Figure 2), the line Swarm: active.
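If you didn't save the join token, there's no need to reinitialize the swarm; the manager can print the worker join command (token included) again at any time:
docker swarm join-token worker
You can also list the swarm's members from the manager. At this point, the output should show only the manager itself:
docker node ls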