Securing containers with Anchore
Secure Containers
Any self-respecting DevOps engineer knows of the hidden dangers that can be found, without much hunting at all, inside container images. Dealing with security problems in the packages stored inside those images in a timely fashion is imperative if you want to prevent a successful attack on your containers that would, in turn, put your servers at risk.
The sheer volume of CVEs (Common Vulnerabilities and Exposures) found in today's popular images and listed on the CVE website [1] might surprise you.
According to a new report from the venerable Snyk [2], who knows about all things security-focused, the top 10 most popular container images each boast at least 30 vulnerabilities, with node leading the way. The report, which I would recommend reading carefully and in detail, reveals some frightening information. For example, you might be surprised to learn that different strategies are required for Alpine images than for the other operating systems (OSs) in use within your container images. Alpine is super-popular among containers as the base OS because of its tiny footprint.
Now that you're suitably alarmed, the good news is that you can automate how you are alerted to CVEs that apply to your container images. Although you have a choice of many tools in the open source space, in this article I'll look at the open source version of a well-respected, enterprise tool called Anchore [3] that describes itself as the "only end-to-end container security and compliance platform built on open source." To keep things lightweight and portable, I will run Anchore in two containers: one for the main engine and one for the database holding useful generated information.
Fear, Uncertainty, and Doubt
To get Anchore up and running, you first need to install Docker Compose. If I'm honest, I've not always been a massive fan, but Docker Compose certainly has its place and can speed up scenarios in which you need to bring up multiple containers.
On my laptop, I'm using a Debian derivative (courtesy of Ubuntu 18.04 and Linux Mint "Tara"). When you install Docker Compose, numerous packages are promised (Listing 1; note that the application relies heavily on Python).
Listing 1
Docker Compose Packages
$ apt install docker-compose
The following NEW packages will be installed...
docker-compose golang-docker-credential-helpers python-backports.ssl-match-hostname
python-cached-property python-certifi python-chardet python-docker python-dockerpty
python-dockerpycreds python-docopt python-enum34 python-funcsigs python-functools32
python-idna python-ipaddress python-jsonschema python-mock python-pbr
python-pkg-resources python-requests python-texttable python-urllib3
python-websocket python-yaml
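Once the packages are in place, a quick sanity check with the version flag confirms the binary is ready to use:

$ docker-compose --version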
Having installed Docker Compose and its dependencies, the next step is to create a local directory at which to point the Docker container. In terms of file paths, I'm going to go a little off-piste here from the documentation. I'm assuming, of course, that you're working in the root user's home directory. Note that you should create another subdirectory and work from that if you don't want the db/ directory created in the top level of the root home directory:
$ mkdir -p /root/config
Now that you have a home for the main configuration file and those files created when your Docker containers come up, you can download a configuration file [4] from GitHub and save it as config.yaml within that local config/ directory.
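If you prefer to grab the file from the command line, something like the following should do the trick. Note that the raw GitHub URL below is only my assumption of where the file currently lives; use whichever location reference [4] points to:

$ cd /root/config
# Placeholder URL; substitute the link from reference [4]
$ curl -o config.yaml https://raw.githubusercontent.com/anchore/anchore-engine/master/scripts/docker-compose/config.yaml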
I'm not going to fiddle with the default login details. (See the "Troubleshooting" box for some tips if you get an Unauthorized error after modifying your credentials.) The snippet below shows the two fields (password and email) I would edit in the config/config.yaml configuration file:
credentials:
  users:
    admin:
      password: 'franticfrenetic'
      email: 'chris@binnie.tld'
Troubleshooting
If you get stuck, you can try a couple of things. If you're having trouble with the pip-installed packages, check the dependencies I had to install first and note the package I had to downgrade, as well.
Otherwise, if you can't connect to the Anchore service, you might want to check that it is listening correctly and indeed using the credentials you expect it to use by entering the docker ps command to determine your anchore-engine container's hash and using it to change the hash beginning ab48a in the following command, which came directly from the Anchore GitHub page [5],

$ docker exec -t -i ab48a716916f curl -u admin:<foobar> http://localhost:8087/v1/status

where <foobar> is your password. Figure 1 shows an all good report, because the service apparently is responding correctly. Note that if you see Unauthorized, you have probably just messed up the environment variables, the settings in your configuration file, or both.
Of course, if all else fails, you can kill your containers, carefully remove your database directory (e.g., db/ here),

rm -rf /root/db

and start again with a clean config.yaml file.
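Pieced together, that clean restart amounts to something like the following. This is a sketch only, so double-check the paths before deleting anything:

$ docker-compose down
$ rm -rf /root/db       # remove the database directory the containers created
$ docker-compose up -d  # start afresh with the clean config.yaml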
Here, I've personalized the credentials with my own administrative login details.
Always Something
Before Docker Compose would behave properly, I had to get past an odd error I hadn't seen before; the error (abbreviated here) looked something like:
/usr/lib/python2.7/dist-packages/requests/__init__.py:80: RequestsDependencyWarning: urllib3 (1.24.1) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)
To get it working, I had to downgrade urllib3:
$ pip install --upgrade "urllib3==1.22"
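To make sure the downgrade actually took, you can ask pip which version is now installed:

$ pip show urllib3 | grep Version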
Next, I downloaded the main docker-compose.yaml file from Anchore's GitHub pages [6] to the top level of /root.
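As before, a command-line download might look something like this; again, the exact URL is an assumption on my part, so use the file that reference [6] points to:

$ cd /root
# Placeholder URL; substitute the link from reference [6]
$ curl -O https://raw.githubusercontent.com/anchore/anchore-engine/master/docker-compose.yaml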
Pick It, Pack It, Fire It Up
Now I'm just about set to fire my freshly installed Docker Compose up, and once it's running I'll get a chance to pick which images Anchore will analyze:
$ docker-compose pull
As you can see in Figure 2, the images anchore-engine:latest and postgres:9 are pulled down when Docker Compose runs the pull command. The command seems completely successful, so the next command I need to run will fire up Anchore:
$ docker-compose up -d
As you can see from the output in Figure 3, in true spinning-many-plates style, Docker Compose will also fire up the Postgres database at the same time.
Note that both anchore-db_1 and anchore-engine_1 are needed to start Anchore's service. If you want to bring the services down later, use the simple command:
$ docker-compose down
After starting Anchore, you'll need to wait a minute or two depending on the specification of your server or laptop before the Anchore service is fully accessible.
You can, of course, use standard Docker commands to check the status of Anchore as you wait for the service and database to finish coming up. With the docker ps command, I can see that my anchore-engine container has a hash that begins with 0f8a0, so running the command in Listing 2 as I wait for the service to finish starting reveals the internal logging within the container. (I had to abbreviate the output for clarity.)
Listing 2
Viewing the Log
$ docker logs 0f8a0
[INFO] Syncing group took 5.293130159378052 sec
[INFO] Processing group: alpine:3.4
"172.20.0.3" "POST /v1/queues/watcher_tasks/is_inqueue HTTP/1.1" 200 3 "-" "python-requests/2.20.0"
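If you'd rather not stare at the logs, a minimal sketch like the one below simply polls the status endpoint from the "Troubleshooting" box until the engine answers. The container hash, port, and password are the ones used in this article, so substitute your own values:

PASS=franticfrenetic   # the admin password set in config.yaml earlier
# Poll the engine's status endpoint inside the container until it responds
until docker exec 0f8a0 curl -sf -u "admin:${PASS}" http://localhost:8087/v1/status > /dev/null; do
  echo "Waiting for the Anchore engine to finish starting..."
  sleep 10
done
echo "Anchore engine is responding"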