Dockerizing Legacy Applications
In the past, we ran applications on physical machines. We cared about every system on our network, and we even spent time discussing a proper naming scheme for our servers (RFC 1178 [1]). Then virtual machines came along, and the number of servers we needed to manage increased dramatically. We would spin them up and shut them down as necessary. Then containers took this idea even further: It typically took several seconds or longer to start a virtual machine, but you could start and stop a container in almost no time.
In essence, a container is a well-isolated process, sharing the same kernel as all other processes on the same machine. Although several container technologies exist, the most popular is Docker. Docker's genius was to create a product that is so smooth and easy to use that suddenly everybody started using it. Docker managed to hide the underlying complexity of spinning up a container and to make common operations as simple as possible.
Containerizing Legacy Apps
Although most modern apps are created with containerization in mind, many legacy applications based on older architectures are still in use. If your legacy application is running fine in a legacy context, you might be wondering why you would go to the trouble of containerizing it.
The first advantage of containers is the uniformity of environments: Containerization ensures that the application runs consistently across multiple environments by packaging the app and its dependencies together. This means that the development environment on the developer's laptop is fundamentally the same as the testing and production environments. This uniformity can lead to significant savings with testing and troubleshooting future releases. Another benefit is that containers can be horizontally scaled; in other words, you can scale the application by increasing (and decreasing) the number of containers.
Adding a container orchestration tool like Kubernetes means you can optimize resource allocation and better use the machines you have – whether physical or virtual. The power of container orchestration makes it easy to scale the app with the load. Because containers start faster than virtual machines, you can scale much more efficiently, which is crucial for applications that have to deal with sudden load spikes. The fact that you can start and terminate containers quickly has several other consequences. You can deploy your applications much faster – and roll them back equally quickly if you experience problems.
Getting Started
To work with Docker, you need to set up a development environment. First, you'll need to install Docker itself. Installation steps vary, depending on your operating system [2]. Once Docker is installed, open a terminal and execute the following command to confirm Docker is correctly installed:
docker --version
Now that you have Docker installed, you'll also need Docker Compose, a tool for defining and running multi-container Docker applications [3]. If you have Docker Desktop installed, you won't need to install Docker Compose separately because the Compose plugin is already included.
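To confirm that the Compose plugin is available, print its version:
docker compose version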
For a simple example to illustrate the fundamentals of Docker, consider a Python application running Flask, a web framework that operates on a specific version of Python and relies on a few third-party packages. Listing 1 shows a snippet of a typical Python application using Flask.
Listing 1
Simple Flask App
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
To dockerize this application, you would write a Dockerfile – a script containing a sequence of instructions to build a Docker image. Each instruction in the Dockerfile generates a new layer in the resulting image, allowing for efficient caching and reusability. By constructing a Dockerfile, you essentially describe the environment your application needs to run optimally, irrespective of the host system.
Start by creating a file named Dockerfile (no file extension) in your project directory. The basic structure involves specifying a base image, setting environment variables, copying files, and defining the default command for the application. Listing 2 shows a simple Dockerfile for the application in Listing 1.
Listing 2
Dockerfile for Flask App
# Use an official Python runtime as a base image
FROM python:3.11-slim

# Set the working directory in the container
WORKDIR /app

# Copy the requirements.txt file into the container
COPY requirements.txt /app/

# Install the dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the current directory contents into the container
COPY . /app/

# Run app.py when the container launches
CMD ["python", "app.py"]
In this Dockerfile, I specify that I'm using Python 3.11, set the working directory in the container to /app, copy the required files, and install the necessary packages, as defined in a requirements.txt file. Finally, I specify that the application should start by running app.py.
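The requirements.txt file for this minimal app only needs to declare Flask. The snippet below is a sketch; the version pin is an assumption, so match whatever version your application was actually tested against:
# requirements.txt -- Flask is the only third-party dependency here
# (the version pin is illustrative, not prescriptive)
Flask==3.0.*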
To build this Docker image, you would navigate to the directory containing the Dockerfile and execute the following commands to build and run the app:
docker build -t my-legacy-app .
docker run -p 5000:5000 my-legacy-app
With these steps, you have containerized the Flask application using Docker. The application now runs isolated from the host system, making it more portable and easier to deploy in any environment that supports Docker.
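With the container running, a quick smoke test from the host confirms that traffic reaches the app through the mapped port and returns the Hello, World! greeting:
curl http://localhost:5000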
Networking in Docker
Networking is one of Docker's core features, enabling isolated containers to communicate amongst themselves and with external networks. The most straightforward networking scenario involves a single container that needs to be accessible from the host machine or the outside world. To support network connections, you'll need to expose ports.
When running a container, the -p flag maps a host port to a container port:
docker run -d -p 8080:80 --name web-server nginx
In this case, NGINX is running inside the container on port 80. The -p 8080:80 option maps port 8080 on the host to port 80 in the container. Now, accessing http://localhost:8080 on the host machine directs traffic to the NGINX server running in the container.
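You can confirm the mapping with docker ps: The PORTS column for the web-server container should read 0.0.0.0:8080->80/tcp:
docker ps --filter name=web-server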
For inter-container communication, Docker offers several options. The oldest approach is the --link flag, which makes one container reachable from another under an alias on the default bridge network (the default bridge does not resolve container names on its own). First, run a database container:
docker run -d --name my-database mongo
Now, if you want to link a web application to this database, you can reference the database container by its name:
docker run -d --link my-database:db my-web-app
In this setup, my-web-app can connect to the MongoDB server by using db as the hostname.
Although useful, the --link flag is considered legacy and is deprecated. A more flexible approach is to create custom bridge networks. A custom network facilitates automatic DNS resolution for container names, and it also allows for network isolation.
For example, you can create a custom network as follows:
docker network create my-network
Now, run containers in this custom network with:
docker run -d --network=my-network --network-alias=db --name my-database mongo
docker run -d --network=my-network my-web-app
Here, my-web-app can still reach my-database using its container name or the db alias, but now both containers are isolated in a custom network, offering more control and security.
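To see which containers are attached to the custom network, along with their IP addresses and aliases, inspect it:
docker network inspect my-network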
For applications requiring more complex networking setups, you can use Docker Compose and define multiple services, networks, and even volumes in a single docker-compose.yml file (Listing 3).
Listing 3
Sample docker-compose.yml File
services:
  web:
    image: nginx
    networks:
      - my-network
  database:
    image: mongo
    networks:
      - my-network

networks:
  my-network:
    driver: bridge
When you run docker compose up (or docker-compose up with the standalone binary), both services are instantiated and attached to the custom network, as defined.
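When you're finished, a single command stops both containers and removes the network that Compose created:
docker compose down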
As you can see, effective networking in Docker involves understanding and combining these elements: port mapping for external access, inter-container communication via custom bridge networks, and orchestration (managed here by Docker Compose).