Dockerizing Legacy Applications
Makeover
Volumes and Persistent Data
Managing persistent data within Docker involves understanding and leveraging volumes. Unlike a container, a volume exists independently and retains data even when a container is terminated. This characteristic is crucial for stateful applications, like databases, that require data to persist across container life cycles.
For simple use cases, you can create anonymous volumes at container runtime. When you run a container with an anonymous volume, Docker generates a random name for the volume. The following command starts a MongoDB container and attaches an anonymous volume to the /data/db directory, where MongoDB stores its data:
docker run -d --name my-mongodb -v /data/db mongo
Whereas anonymous volumes are suitable for quick tasks, named volumes provide more control and are easier to manage. If you use docker run and specify a named volume, Docker will auto-create it if needed. You can also create a named volume explicitly with:
docker volume create my-mongo-data
Now you can start the MongoDB container and explicitly attach this named volume:
docker run -d --name my-mongodb -v my-mongo-data:/data/db mongo
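As a quick sanity check, you can confirm that the volume exists and see where Docker keeps it on the host (the exact mountpoint varies by platform; on Linux it is typically under /var/lib/docker/volumes):

```shell
# List all volumes known to the Docker daemon
docker volume ls

# Show details of the named volume, including its mountpoint on the host
docker volume inspect my-mongo-data
```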
You can use named volumes to share data between containers. If you need to share data between the container and the host system, host volumes (also called bind mounts) are the right choice. This feature mounts a specific directory from the host into the container:
docker run -d --name my-mongodb -v /path/on/host:/data/db mongo
Here, /path/on/host corresponds to the host system directory you want to mount.
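To illustrate sharing a named volume between containers, the following sketch uses a throwaway volume and container names chosen purely as examples: one container writes a file into the volume, and a second container mounting the same volume reads it back.

```shell
# A throwaway writer puts a file into the shared volume
# (Docker auto-creates the volume "shared-data" if it doesn't exist)
docker run --rm -v shared-data:/shared ubuntu sh -c 'echo hello > /shared/ping.txt'

# A second container mounting the same volume (read-only) sees the file
docker run --rm -v shared-data:/shared:ro ubuntu cat /shared/ping.txt
```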
With Docker Compose, volume specification becomes streamlined and readable, especially when dealing with multi-container, stateful legacy applications. Listing 4 shows how you could define a service in docker-compose.yml with a named volume.
Listing 4
Sample Named Volume
services:
  database:
    image: mongo
    volumes:
      - my-mongo-data:/data/db
volumes:
  my-mongo-data:
When you run docker-compose up, it will instantiate the service with the specified volume.
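One way to convince yourself that the data really persists is the following sketch (service and volume names match Listing 4):

```shell
docker-compose up -d    # start the database service
docker-compose down     # stop and remove the containers...
docker-compose up -d    # ...but the named volume survives, so MongoDB
                        # comes back with its data intact
docker volume ls        # the my-mongo-data volume is still listed
```

Note that docker-compose down -v would also delete the named volumes, so omit -v when you want the data to survive.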
Data persistence isn't confined to just storing data; backups are equally vital. Use docker cp to copy files or directories between a container and the local filesystem. To back up data from a MongoDB container, enter:
docker cp my-mongodb:/data/db /path/on/host
Here, data from /data/db inside the my-mongodb container is copied to /path/on/host on the host system.
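Keep in mind that docker cp copies raw files, which is fine for a stopped database but can yield an inconsistent snapshot of a running one. A hedged alternative is to take a logical dump with mongodump inside the running container and then copy it out (paths here are examples):

```shell
# Create a consistent dump inside the running container
docker exec my-mongodb mongodump --out /tmp/backup

# Copy the dump to the host
docker cp my-mongodb:/tmp/backup /path/on/host/mongo-backup
```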
Dockerizing a Legacy Web Server
Containerizing a legacy web server involves several phases: assessment, dependency analysis, containerization, and testing. For this example, I'll focus on how to containerize an Apache HTTP Server. The process generally involves creating a Dockerfile, bundling configuration files, and possibly incorporating existing databases.
The first step is to create a new directory to hold your Dockerfile and configuration files. This directory acts as the build context for the Docker image:
mkdir dockerized-apache
cd dockerized-apache
Start by creating a Dockerfile that specifies the base image and installation steps. Imagine you're using an Ubuntu-based image for compatibility with your legacy application (Listing 5).
In Listing 5, the RUN instruction installs Apache, and the COPY instruction transfers your existing Apache configuration file (my-httpd.conf) into the image. The CMD instruction specifies that Apache should run in the foreground when the container starts.
Listing 5
A sample Dockerfile for Apache web server
# Use an official Ubuntu as a parent image
FROM ubuntu:latest

# Install Apache HTTP Server
RUN apt-get update && apt-get install -y apache2

# Copy local configuration files into the container
COPY ./my-httpd.conf /etc/apache2/apache2.conf

# Expose port 80 for the web server
EXPOSE 80

# Start Apache when the container runs
CMD ["apachectl", "-D", "FOREGROUND"]
Place your existing Apache configuration file in the same directory as the Dockerfile. This configuration should be a working setup for your legacy web server. Build the Docker image from within the dockerized-apache directory:
docker build -t dockerized-apache .
Run a container from this image, mapping port 80 inside the container to port 8080 on the host:
docker run -d -p 8080:80 --name my-apache-container dockerized-apache
The legacy Apache server should now be accessible via http://localhost:8080.
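A quick smoke test from the host, assuming curl is installed, is to request the headers and check for an HTTP response:

```shell
# Expect an HTTP response (e.g. 200 OK) from the containerized Apache
curl -I http://localhost:8080
```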
If your legacy web server interacts with a database, you'll likely need to dockerize that component as well or ensure the web server can reach the existing database. For instance, if you have a MySQL database, you can run a MySQL container and link it to your Apache container. A tool like Docker Compose can simplify the orchestration of multi-container setups.
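As a sketch of such a multi-container setup, a docker-compose.yml might look like the following. The image tags, credentials, and the DB_HOST environment variable are assumptions; your application's configuration mechanism will differ.

```yaml
services:
  web:
    build: .                 # the dockerized-apache directory
    ports:
      - "8080:80"
    environment:
      DB_HOST: db            # reachable by service name on the Compose network
    depends_on:
      - db
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: my-secret-pw
      MYSQL_DATABASE: legacy_app
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  mysql-data:
```

On the Compose-managed network, the web service can reach the database simply by the hostname db.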
For debugging, you can view the logs using the following command:
docker logs my-apache-container
This example containerized a legacy Apache HTTP Server, but you can use this general framework with other web servers and applications as well. The key is to identify all dependencies, configurations, and runtime parameters to ensure a seamless transition from a traditional setup to a containerized environment.
What About a Database?
Containers are by nature ephemeral, whereas data is inherently stateful. Therefore databases require a more nuanced approach. In the past, running databases in containers was usually not recommended, but nowadays you can do it perfectly well – you just need to make sure the data is treated properly.
Or, you can decide not to containerize your databases at all. In this scenario, your containers connect to a dedicated database, such as an RDS instance managed by Amazon Web Services (AWS), which makes sense if your containers are running on AWS. Amazon then takes care of provisioning, replication, backup, and so on. This safe and clean solution lets you concentrate on other tasks while AWS is doing the chores. One common scenario is to use a containerized database in local development (so it's easy to spin up/tear down), but then swap out for a managed database service in production. At the end of the day, your app is using the database's communication protocol, regardless of where and how the database is running.
Dockerizing an existing database like MySQL or Oracle Database is a nontrivial task that demands meticulous planning and execution. The procedure involves containerizing the database, managing persistent storage, transferring existing data, and ensuring security measures are in place. One area where containerizing a traditional SQL database is extremely useful is in development and testing (see the "Testing" box).
Testing
You can quickly spin up a container with your database, usually containing test data, and then immediately verify if your app works properly with the database. And you can do it all without asking the Ops team to provision a database host for you.
Good examples of this usage are services in GitLab CI/CD pipelines, such as PostgreSQL [4] or MySQL [5]. In Listing 6, I use a Docker image containing Docker and Docker Compose and the usual variables defining the database, its user, and password. I also define the so-called service, based on the image postgres:16 and aliased as postgres. In the test job, I install the PostgreSQL command-line client and, after exporting the password, execute a sample SQL query against the postgres service defined earlier. The postgres service is simply a Docker container with the chosen version of PostgreSQL, conveniently started in the same network as the main container so that you can connect to it directly from your pipeline.
Listing 6
PostgreSQL in a GitLab CI Pipeline
image: my-image-with-docker-and-docker-compose

variables:
  POSTGRES_DB: my-db
  POSTGRES_USER: ${USER_NAME}
  POSTGRES_PASSWORD: ${USER_PASSWORD}

services:
  - name: postgres:16
    alias: postgres

test:
  script:
    - apt-get update && apt-get install -y postgresql-client
    - PGPASSWORD=$POSTGRES_PASSWORD psql -h postgres -U $POSTGRES_USER -d $POSTGRES_DB -c 'SELECT 1;'
If you choose to dockerize a database, the first step is to choose a base image. For MySQL, you could use the official Docker image available on Docker Hub. You will also find official images for Oracle Database.
The following is a basic example of how to launch a MySQL container:
docker run --name my-existing-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:8.0
In this example, the environment variable MYSQL_ROOT_PASSWORD is set to your desired root password. The -d flag runs the container in detached mode, meaning it runs in the background.
This quick setup works for new databases, but keep in mind that for an existing database you must also bring in the existing data. You can mount a MySQL dump file into the container with a volume and then load it into the MySQL instance running inside.
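As a sketch (file names and passwords are examples), you can either stream a dump into the running container or let the image's init mechanism load it on first start:

```shell
# Option 1: stream an existing dump into the running MySQL instance
docker exec -i my-existing-mysql \
  mysql -uroot -pmy-secret-pw < /path/on/host/dump.sql

# Option 2: mount the dump into the init directory; the official MySQL
# image executes *.sql files found there on first startup with an
# empty data directory
docker run --name my-imported-mysql \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw \
  -v /path/on/host/dump.sql:/docker-entrypoint-initdb.d/dump.sql \
  -d mysql:8.0
```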