Securing our Docker image
In a prior two-part article - Optimizing Dockerfile for Node.js (links: Part 1, Part 2), we learnt how to optimize our Dockerfile by:

- Reducing the number of running processes
- Ensuring proper handling of signals
- Making use of the build cache
- Using ENTRYPOINT
- Documenting our Dockerfile with EXPOSE and LABEL
- Reducing the Docker image size
- Linting our Dockerfile
In this article, we will pick up where we left off and focus on securing our Docker image.
As it turns out, whilst we were optimizing our Dockerfile
, we inadvertently improved the security of our image already. Specifically, when we reduced the number of processes and moved our base image from node
to node:alpine
, we reduced the number of programs that can be exploited by an attacker, thus reducing the potential attack surface.
But there is much more we can do. In this article, we will cover:

- Following the Principle of Least Privilege
- Signing and verifying Docker images
- Vulnerability scanning
- Using .dockerignore to ignore sensitive files

This article focuses on creating a secure Docker image. How to securely run our Docker image is also important, but is out of the scope of this article.
Let's begin where we left off - with the following Dockerfile:

```dockerfile
FROM node:lts as builder
LABEL org.opencontainers.image.vendor=demo-frontend
LABEL works.buddy.intermediate=true
WORKDIR /root/
COPY ["package.json", "package-lock.json", "./"]
RUN ["npm", "install"]
COPY ["webpack.config.js", "./"]
COPY ["src/", "./src/"]
RUN ["npm", "run", "build"]
RUN ["/bin/bash", "-c", "find . ! -name dist ! -name node_modules -maxdepth 1 -mindepth 1 -exec rm -rf {} \\;"]

FROM node:alpine
LABEL org.opencontainers.image.vendor=demo-frontend
LABEL org.opencontainers.image.title="Buddy Team"
WORKDIR /root/
COPY --from=builder /root/ ./
ENTRYPOINT ["node", "/root/node_modules/.bin/http-server", "./dist/"]
EXPOSE 8080
```
Run Containers using unprivileged (non-root) users
By default, a containerized application will run as the root
user (UID 0
) inside the container; this is the reason why we were able to copy files into /root/
and run our application from there. Another way to demonstrate this is by running the whoami
command from within the container; we can do this by overriding the entrypoint with the --entrypoint
flag when invoking docker run
.
```
$ docker run --rm --name demo-frontend --entrypoint=whoami demo-frontend:oci-annotations
root
```
All your Docker containers are started by the Docker daemon, which runs as the root
user on your host system. This is not what we are talking about here - we are talking about which user the application within the container is running as. Now, this may make you wonder: they're only root inside the container, so why does it matter?
To understand why running as root
inside a container can be insecure, we must first understand namespaces.
Understanding Namespaces
Docker provides isolation through the use of cgroups (a.k.a. control groups) and namespaces. Control groups slice up a portion of the system's resources (e.g. CPU, memory, PIDs), and namespaces map these host system resources to an equivalent ID/path within the container. I like to use the following analogy:
Imagine if your system is a cake. Control groups slice up the cake and distribute it to different people; namespaces try to convince you that your slice is the whole cake.
For example, Docker uses PID namespaces to 'trick' the first process within your container into thinking it is the init process, when, in fact, it is just a normal process on the host system. PID namespaces remap PIDs within the container to the 'real' host PIDs. Using PID namespaces prevents processes from different containers from communicating with each other, or with the host.
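You can see this for yourself with a quick experiment (the output below is abridged and illustrative): inside a fresh container, the very first process believes it is PID 1, even though on the host it is just another process with an ordinary, much larger PID.

```
# Illustrative: ps runs as the container's first process and reports itself as PID 1,
# whilst the host sees the very same process under a normal, non-1 PID.
$ docker run --rm node:alpine ps
PID   USER     TIME  COMMAND
    1 root      0:00 ps
```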
User Namespaces
However, with Docker, user namespaces are not enabled by default. This means that the root
user inside your container maps to the same root
user as your host machine.
You can enable user remapping by default by editing /etc/docker/daemon.json. For more details, read Enable userns-remap on the daemon in the Docker documentation.
Host Networking
User namespaces are just one example. Users of your image often opt to use host networking, which means a network namespace is not created for the container, further reducing the level of isolation.
When using host networking, Docker does not provide any isolation at all to the networking stack, which means a process within your container, running as root
, can change the firewall settings on your host, bind to privileged ports, and configure other network settings. Running as a non-privileged user will limit the amount of changes an attacker can make.
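To illustrate (this is not something you should do for your application containers), requesting host networking is as simple as passing --network host, after which the process inside the container sees the host's real network interfaces:

```
# Illustrative: no network namespace is created, so the interfaces listed here
# are the host's own - and a root process inside could reconfigure them.
$ docker run --rm --network host alpine ip addr
```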
Bind Mounts
Lastly, users often use bind mounts to synchronize files on the host with files within the container. However, doing so exposes part of your host's filesystem to processes within the container.
For example, a common requirement for some images is to bind mount /var/run/docker.sock
. This allows the process within your container to interact with the Docker daemon on the host. However, doing so allows the root
user, and any users within the docker
group, to break out of the container and gain root
access on the host. See the Docker Breakout video on YouTube for a demonstration of this.
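As a rough sketch of why this is so dangerous (the docker image used here is just an assumption - any image containing the Docker CLI would do), a process inside such a container can drive the host's Docker daemon directly:

```
# Illustrative sketch: the CLI inside this container talks to the host's daemon
# (which runs as root), so it can list, stop, or start new privileged containers -
# effectively giving it root-equivalent access to the host.
$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker docker ps
```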
Principle of Least Privilege
But regardless of whether namespaces are enabled or bind mounts are used, you should always follow the Principle of Least Privilege. If your program doesn't need root privileges to run, then why give it root privileges in the first place? If your application, running with root access, is somehow compromised, the attacker will have a much easier time discovering related services, reading sensitive information, and performing privilege escalation.
So, how do we go about running our application as an unprivileged user? There are two ways:

- Specify a USER instruction to set the default user. This is specified by the image author.
- Use the --user/-u flag. This is specified by the user of the image, and overrides the USER instruction.
Using the USER instruction
The USER instruction allows you to specify a different user to use. By default, all node images come with a node user.

```
$ docker run --rm --name demo-frontend --entrypoint="cat" demo-frontend:oci-annotations /etc/passwd | grep node
node:x:1000:1000:Linux User,,,:/home/node:/bin/sh
```

So let's use that instead of root.
```dockerfile
FROM node:lts as builder
LABEL org.opencontainers.image.vendor=demo-frontend
LABEL works.buddy.intermediate=true
USER node
WORKDIR /home/node/
COPY --chown=node:node ["package.json", "package-lock.json", "./"]
RUN ["npm", "install"]
COPY --chown=node:node ["webpack.config.js", "./"]
COPY --chown=node:node ["src/", "./src/"]
RUN ["npm", "run", "build"]
RUN ["/bin/bash", "-c", "find . ! -name dist ! -name node_modules -maxdepth 1 -mindepth 1 -exec rm -rf {} \\;"]

FROM node:alpine
LABEL org.opencontainers.image.vendor=demo-frontend
LABEL org.opencontainers.image.title="Buddy Team"
USER node
WORKDIR /home/node/
COPY --chown=node:node --from=builder /home/node/ ./
ENTRYPOINT ["node", "/home/node/node_modules/.bin/http-server", "./dist/"]
EXPOSE 8080
```
First, we specified the USER node
instruction. Every instruction after this will be run as the node
user. An exception is the COPY
instruction, which creates files and directories with a UID and GID of 0
(the root
user); that's why we used the --chown
flag to re-assign the owner of the files/directories after they've been copied. Lastly, we also changed the WORKDIR
to be the home directory for the node
user.
Build this image, and whenever it's run, it will default to using the node user.
```
$ docker build -t demo-frontend:user-node .
$ docker run --rm --name demo-frontend --entrypoint="whoami" demo-frontend:user-node
node
```
Using the --user/-u flag
The user of your Docker image can also specify the user to use with the --user
flag. If the user exists within the container (like the node
user), you can specify the user by name; otherwise, you must specify the numeric UID and GID of the user.
```
$ docker run --rm --user 4567:4567 --name demo-frontend --entrypoint="id" demo-frontend:user-node
uid=4567 gid=4567
```
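Since the node user is defined inside our image, we can also refer to it by name:

```
# The node user exists in the image's /etc/passwd, so a name works as well as a UID.
$ docker run --rm --user node --name demo-frontend --entrypoint="whoami" demo-frontend:user-node
node
```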
If your application does require root access within the container, but you still want to isolate it from affecting the host, you can set up user namespaces that remap the
root
user to a less-privileged user on the Docker host.
Limit who is in the docker group
In a similar vein, any user within the docker
group is able to send API requests to the Docker daemon through /var/run/docker.sock
, which runs as root
. Therefore, any user within the docker group can, by jumping through a few hoops, run any command as root. You should therefore limit the number of users in the docker group on the Docker host.
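A quick way to audit this on the host (the group members and GID shown are illustrative):

```
# List the members of the docker group...
$ getent group docker
docker:x:999:alice,bob
# ...and remove anyone who shouldn't have root-equivalent access to the daemon.
$ sudo gpasswd -d bob docker
```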
Signing and Verifying Docker Images
Running your containers as a non-privileged user prevents privilege escalation attacks. However, there's a different kind of attack that can undermine all our hard work - a Man-in-the-Middle (MitM) attack.
If you look inside the package-lock.json
and yarn.lock
files, you'll see that there's an integrity
field for every package, which specifies the Subresource Integrity
of the package tarball.
```
jest-matcher-one-of@^0.1.2:
  version "0.1.2"
  resolved "https://registry.yarnpkg.com/jest-matcher-one-of/-/jest-matcher-one-of-0.1.2.tgz#4d9a428c489c55275f69e058c91fb33f61c327b7"
  integrity sha512-vJIXaex/pGMLPC/Td44S2CZMU4efRAFhjgG6u9Zz2ZogeJVtLStmoEkaczcgojmHCYCPIZuw10Tq3uo7VdN4Ww==
```
The subresource integrity is a mechanism by which the npm
client can verify that the package has been downloaded from the registry without manipulation. For a tarball, the subresource integrity is the SHA512 digest of the file. After npm
downloads the tarball, it generates the SHA512 digest of the file, and if it matches the expected value, npm trusts that the package has not been modified.
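You can reproduce this check by hand to get a feel for it - the following is a rough sketch of what the npm client does internally, using the resolved URL from the lock file entry above:

```
# Download the tarball, then compute its SHA512 digest and base64-encode it;
# the result should match the integrity field (sha512-...) in the lock file.
$ curl -sO https://registry.yarnpkg.com/jest-matcher-one-of/-/jest-matcher-one-of-0.1.2.tgz
$ echo "sha512-$(openssl dgst -sha512 -binary jest-matcher-one-of-0.1.2.tgz | openssl base64 -A)"
```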
In contrast, Docker, by default, does not verify the integrity of the images it pulls - it implicitly trusts them. This introduces a security risk whereby a malicious party can perform a Man-in-the-Middle (MitM) attack and feed our client images that contain malicious code.
Introducing Docker Content Trust (DCT)
Therefore, just as npm
uses subresource integrity to verify packages, Docker provides a mechanism known as Docker Content Trust (DCT). DCT is a mechanism for digitally signing and verifying images pushed to and pulled from Docker registries; it allows us to verify that the Docker images we download came from the intended publisher (authenticity) and that no malicious party has modified them in any way (integrity).
To enable DCT, simply set the DOCKER_CONTENT_TRUST
environment variable to 1
.
```
export DOCKER_CONTENT_TRUST=1
```
You must run export DOCKER_CONTENT_TRUST=1 in every terminal session. To enable DCT by default, add the export line to your .profile, .bashrc, or similar files.
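For example, assuming a Bash shell:

```
# Persist the setting for all future shells, then reload the file for the current one.
$ echo 'export DOCKER_CONTENT_TRUST=1' >> ~/.bashrc
$ source ~/.bashrc
```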
When pulling an image, DCT prevents clients from downloading an image unless it contains a verified signature. Now, when we try to download an unsigned image (e.g. abiosoft/caddy
), docker
will error.
```
$ DOCKER_CONTENT_TRUST=1 docker pull abiosoft/caddy
Using default tag: latest
Error: remote trust data does not exist for docker.io/abiosoft/caddy: notary.docker.io does not have trust data for docker.io/abiosoft/caddy
```
On the other hand, when we try to pull a signed image (e.g. redis:5
), it will succeed.
```
$ DOCKER_CONTENT_TRUST=1 docker pull redis:5
Pull (1 of 1): redis:5@sha256:9755880356c4ced4ff7745bafe620f0b63dd17747caedba72504ef7bac882089
sha256:9755880356c4ced4ff7745bafe620f0b63dd17747caedba72504ef7bac882089: Pulling from library/redis
1ab2bdfe9778: Pull complete
966bc436cc8b: Pull complete
c1b01f4f76d9: Pull complete
8a9a85c968a2: Pull complete
8e4f9890211f: Pull complete
93e8c2071125: Pull complete
Digest: sha256:9755880356c4ced4ff7745bafe620f0b63dd17747caedba72504ef7bac882089
Status: Downloaded newer image for redis@sha256:9755880356c4ced4ff7745bafe620f0b63dd17747caedba72504ef7bac882089
Tagging redis@sha256:9755880356c4ced4ff7745bafe620f0b63dd17747caedba72504ef7bac882089 as redis:5
docker.io/library/redis:5
```
Mechanism
Under the hood, DCT integrates with The Update Framework (TUF) to ensure the authenticity of the images you download. TUF is able to uphold this guarantee even if a signing key is compromised. To integrate with TUF, DCT uses a tool called Docker Notary, which implements the TUF specification.
Therefore, as a prerequisite to using DCT, the Docker registry that you're pushing to must have a Notary server attached. Currently, Docker Hub and Docker Trusted Registry (DTR, a private registry available to Docker Enterprise users) have Notary servers attached (Docker Hub's is at notary.docker.io), whilst content trust support is still on the roadmap for Amazon Elastic Container Registry (ECR). So we will be using Docker Hub for this article.
The workflow starts with the image author signing the image and pushing it to the repository. To do this, they must follow these steps:

1. Generate a delegation key pair - a pair of public and private keys
2. Add the private key to the local Docker trust repository (typically at ~/.docker/trust/)
3. When using Docker Hub as the registry, tag the image using the format <username>/<image-name>:<tag> (e.g. d4nyll/demo-frontend:dct)
4. Add the public key to the Notary server attached to the Docker registry
5. Sign the image using the private key
6. Push the image to the repository
Now, when a developer wants to use the signed image:

- Docker Engine obtains the public key from the Notary server
- Docker Engine uses this public key to verify that the image has not been tampered with
So let's sign and distribute our demo-frontend:oci-annotations
image by following the same steps.
Signing our Image
Signing Docker images with DCT involves the use of the docker trust
command. Locally, we can generate a delegation key pair (1) and move it into the trust repository (2) using a single command - docker trust key generate <name>
, where <name>
would make up the name of the key pair files.
```
$ docker trust key generate demo
Generating key for demo...
Enter passphrase for new demo key with ID 12d7966:
Repeat passphrase for new demo key with ID 12d7966:
Successfully generated and loaded private key. Corresponding public key available: ~/docker-demo-frontend/demo.pub
```
The private key would automatically be moved to ~/.docker/trust/private/
, and the public key would be saved in the current directory. You need to enter a passphrase for the private key, which acts as a second form of authentication - you must have possession of the private key, as well as knowledge of the passphrase.
```
$ ls ~/.docker/trust/private/
12d79661f14a06444e2b7e6b265ba4a7684106c3ca71eac0410fbd02b60a0439.key
```
Next, we need to sign in to Docker Hub.
```
$  docker login -u <username> -p <password>
```
Note the space before the docker command - this prevents your username and password from being logged in your shell's history (provided your shell is configured to ignore commands that start with a space). Also, if you do not specify a particular registry, docker login will default to Docker Hub.
Next, we need to tag our image using the format <username>/<image-name>:<tag>
(3)
```
$ docker images demo-frontend:oci-annotations
REPOSITORY          TAG                 IMAGE ID            SIZE
demo-frontend       oci-annotations     5aa923714af1        103MB
$ docker tag 5aa923714af1 d4nyll/demo-frontend:dct
```
Adding the public key to the Notary server (4), signing the image (5), and pushing the image (6) can all be done using the docker push
command with DCT enabled.
```
$ DOCKER_CONTENT_TRUST=1 docker push d4nyll/demo-frontend:dct
The push refers to repository [docker.io/d4nyll/demo-frontend]
b5f2176dd36f: Layer already exists
f55c5975798b: Layer already exists
f43135499101: Layer already exists
232f3b596574: Layer already exists
f1b5933fe4b5: Layer already exists
dct: digest: sha256:9b7ba375b469f14deac5fafdfca382c791cb212feb6a293e2f25d10398831ec3 size: 1369
Signing and pushing trust metadata
Enter passphrase for repository key with ID 12b89d8:
Successfully signed docker.io/d4nyll/demo-frontend:dct
```
As a test, let's also push the same image without signing it, using a different tag name.
```
$ docker tag 5aa923714af1 d4nyll/demo-frontend:untrusted
$ docker push --disable-content-trust d4nyll/demo-frontend:untrusted
The push refers to repository [docker.io/d4nyll/demo-frontend]
b5f2176dd36f: Layer already exists
f55c5975798b: Layer already exists
f43135499101: Layer already exists
232f3b596574: Layer already exists
f1b5933fe4b5: Layer already exists
untrusted: digest: sha256:9b7ba375b469f14deac5fafdfca382c791cb212feb6a293e2f25d10398831ec3 size: 1369
```
Pulling Our Image
Now, when we try to pull the unsigned image, docker
will issue an error.
```
$ DOCKER_CONTENT_TRUST=1 docker pull d4nyll/demo-frontend:untrusted
No valid trust data for untrusted
```
But pulling the signed image will succeed.
```
$ DOCKER_CONTENT_TRUST=1 docker pull d4nyll/demo-frontend:dct
Pull (1 of 1): d4nyll/demo-frontend:dct@sha256:9b7ba375b469f14deac5fafdfca382c791cb212feb6a293e2f25d10398831ec3
sha256:9b7ba375b469f14deac5fafdfca382c791cb212feb6a293e2f25d10398831ec3: Pulling from d4nyll/demo-frontend
e7c96db7181b: Pull complete
fd66aa3596b7: Pull complete
519bc7b8873f: Pull complete
a29cbe9067fa: Pull complete
819e5d5df42d: Pull complete
Digest: sha256:9b7ba375b469f14deac5fafdfca382c791cb212feb6a293e2f25d10398831ec3
Status: Downloaded newer image for d4nyll/demo-frontend@sha256:9b7ba375b469f14deac5fafdfca382c791cb212feb6a293e2f25d10398831ec3
Tagging d4nyll/demo-frontend@sha256:9b7ba375b469f14deac5fafdfca382c791cb212feb6a293e2f25d10398831ec3 as d4nyll/demo-frontend:dct
docker.io/d4nyll/demo-frontend:dct
```
With signed images, the consumers of our images can be reassured that the image they download has not been tampered with. However, the image may still contain security vulnerabilities that are unaddressed by the image developers. In the next section, we will outline the process image developers can follow to ensure the images they publish do not have any obvious vulnerabilities.
Vulnerability Scanning
Docker provides the Docker Security Scanning service, which will automatically scan images in your repository to identify known vulnerabilities. However, this service exists as a paid add-on to Docker Trusted Registry (DTR), a Docker Enterprise-only feature. Because most of our readers won't be paying for Docker Enterprise, we won't cover Docker Security Scanning here, but simply refer you to the documentation.
Fortunately, there are a lot of open source tools out there. In this section of the article, we will cover two of them - Docker Bench for Security and the Anchore Engine.
Docker Bench for Security
Docker Bench for Security is a script that runs a large array of tests against the CIS Docker Community Edition Benchmark v1.1.0 - a set of guidelines that should serve as a baseline for securing our Docker installation and images. Note that Docker Bench for Security does not check any vulnerability database for the latest vulnerabilities - it only serves as a basic benchmark.
The Docker Bench for Security script is available as its own Docker image docker/docker-bench-security
, which you can run using the following command:
```
$ docker run -it --net host --pid host --userns host --cap-add audit_control \
    -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
    -v /etc:/etc \
    -v /usr/bin/docker-containerd:/usr/bin/docker-containerd \
    -v /usr/bin/docker-runc:/usr/bin/docker-runc \
    -v /usr/lib/systemd:/usr/lib/systemd \
    -v /var/lib:/var/lib \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --label docker_bench_security \
    docker/docker-bench-security
```
Because Docker Bench for Security checks not only your images, but also your Docker installation, running docker/docker-bench-security requires a lot of privileges that you should not normally grant to containers. For instance, the --net host --pid host --userns host flags mean the container uses the host's network, PID, and user namespaces, removing the isolation that these namespaces provide. You should not do this for the containers that run your applications.
The tests and results are categorized into these numbered groups:

- 1.x - Host Configuration - checking that a recent version of Docker is installed and all the relevant files exist
- 2.x - Docker Daemon Configuration
- 3.x - Docker Daemon Configuration Files - checking that the configuration files (e.g. /etc/default/docker) have the correct permissions
- 4.x - Container Images and Build Files - whether our Dockerfile follows security best practices
- 5.x - Container Runtime - whether containers are being run with the proper isolation using namespaces and cgroups
- 6.x - Docker Security Operations - a checklist of manual operations you should do regularly
For securing our Docker image, we are most interested in section 4 of the log output.
```
...
[WARN] 4.6  - Add HEALTHCHECK instruction to the container image
[WARN]      * No Healthcheck found: [d4nyll/demo-frontend:dct d4nyll/demo-frontend:untrusted]
[INFO] 4.9  - Use COPY instead of ADD in Dockerfile
[INFO]      * ADD in image history: [d4nyll/demo-frontend:dct d4nyll/demo-frontend:untrusted]
[INFO]      * ADD in image history: [d4nyll/demo-frontend:dct d4nyll/demo-frontend:untrusted]
...
```
In 4.6
, it recommends that we add a HEALTHCHECK
instruction to the Dockerfile that allows Docker to periodically probe the container to ensure it is not just running, but healthy and functional. In 4.9
, it recommends that we use the COPY instruction over the ADD instruction, as ADD can be used to copy files from remote URLs, which can be insecure. If your image does require files from remote sources, you should download them manually and verify their authenticity and integrity before using the COPY instruction to copy them into the image.
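For illustration, a HEALTHCHECK for our http-server-based image might look something like the sketch below (the probe command, interval, and timeout values are assumptions rather than part of our actual Dockerfile):

```dockerfile
# Hypothetical sketch: probe http-server every 30s and mark the container as
# unhealthy if the page stops responding. wget is provided by BusyBox in alpine.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD wget -q --spider http://localhost:8080/ || exit 1
```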
However, we won't be making changes to our Dockerfile
here, because HEALTHCHECK
isn't strictly security-related, and we don't have an ADD
instruction in our Dockerfile - that instruction comes from the base image (node:alpine
). But for your own images, run Docker Bench for Security to ensure you follow the baseline best practices.
Anchore Engine
The next tool we will cover is the Anchore Engine. The Anchore Engine is an open-source application that is used to inspect, analyse and certify container images. Whilst Docker Bench for Security only provides basic checks, the Anchore Engine actually consults up-to-date security vulnerability databases and tests your image for the latest vulnerabilities.
Installing Anchore Engine
The Anchore Engine is composed of many services, all of which are provided as a single Docker image - anchore/anchore-engine
. So let's download it using docker pull
.
```
$ docker pull docker.io/anchore/anchore-engine:latest
```
The Anchore Engine developers provide a Docker Compose file at /docker-compose.yaml inside the container. We can use this Docker Compose file to deploy a quick installation of all the constituent services that make up the Anchore Engine. To extract this Docker Compose file, we can:
- Create a container from the anchore/anchore-engine image using docker create (instead of docker run, which will create and run the container)
- Copy the /docker-compose.yaml file from within the container to our host machine
- Destroy the container as it's no longer needed
```
docker create --name <arbitrary-name> docker.io/anchore/anchore-engine:latest
docker cp <arbitrary-name>:/docker-compose.yaml ~/aevolume/docker-compose.yaml
docker rm <arbitrary-name>
```
Now, to deploy Anchore Engine locally, run docker-compose up -d
```
$ docker-compose up -d
Pulling engine-catalog (anchore/anchore-engine:v0.4.2)...
v0.4.2: Pulling from anchore/anchore-engine
5dfdb3f0bcc0: Pull complete
99f178453a43: Pull complete
407869f9917c: Pull complete
9276f4f2efa1: Pull complete
e2d442bae8a6: Pull complete
68e5bf4a6762: Pull complete
5dca5ab24b88: Pull complete
c0b52354123e: Pull complete
Digest: sha256:17b1ec4fd81193b2d5e371aeb5fc00775725f17af8338b4a1d4e1731dd69df6f
Status: Downloaded newer image for anchore/anchore-engine:v0.4.2
Creating aevolume_anchore-db_1 ... done
Creating aevolume_engine-catalog_1 ... done
Creating aevolume_engine-simpleq_1 ... done
Creating aevolume_engine-policy-engine_1 ... done
Creating aevolume_engine-api_1 ... done
Creating aevolume_engine-analyzer_1 ... done
```
This installation runs an API server, which exposes endpoints we can hit to interact with the Anchore Engine. However, instead of sending HTTP requests, we can install and use the Anchore CLI to interface with the API on our behalf.
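For the curious, a raw request might look something like the sketch below - the port and path are assumptions based on the default Docker Compose setup, and the Anchore CLI simply makes equivalent calls on our behalf:

```
# Hypothetical example: query the engine's API directly with basic authentication.
# 8228 is assumed to be the API service's port in the quickstart docker-compose.yaml.
$ curl -u admin:foobar http://localhost:8228/v1/images
```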
Installing Anchore CLI
The Anchore CLI is provided as the anchorecli
Python package. Make sure you have the pip
package manager installed, and then run:
```
$ pip install anchorecli
```
The anchore-cli command is now available. We can run anchore-cli image add <image> to add our image to the Anchore Engine for it to be analysed. By default, the API server we ran also sets up basic authentication, and so we need to pass in our username and password using the --u and --p flags respectively (use the username admin and password foobar).
```
$ anchore-cli --u admin --p foobar image add docker.io/d4nyll/demo-frontend:dct
Image Digest: sha256:9b7ba375b469f14deac5fafdfca382c791cb212feb6a293e2f25d10398831ec3
Parent Digest: sha256:9b7ba375b469f14deac5fafdfca382c791cb212feb6a293e2f25d10398831ec3
Analysis Status: not_analyzed
Image Type: docker
Analyzed At: None
Image ID: 5aa923714af111bae67025f2f98c6166d7f0c7c7d989a61212a09d8453f72180
Dockerfile Mode: None
Distro: None
Distro Version: None
Size: None
Architecture: None
Layer Count: None

Full Tag: docker.io/d4nyll/demo-frontend:dct
Tag Detected At: 2019-08-29T18:30:15Z
```
You can confirm that the image has been added successfully by listing out all the images Anchore Engine knows about.
```
$ anchore-cli --u admin --p foobar image list
Full Tag                              Image Digest         Analysis Status
docker.io/d4nyll/demo-frontend:dct    sha256:9b7b..1ec3    analyzing
```
The image has a status of analyzing
, but it can take some time for the engine to update its vulnerability data from external databases and to run the analysis. In the meantime, you can run the image wait <image>:<tag> sub-command, which waits until the image has been analysed.
```
$ anchore-cli --u admin --p foobar image wait docker.io/d4nyll/demo-frontend:dct
```
Once analysed, its state will change from analyzing
to analyzed
.
```
Image Digest: sha256:9b7ba375b469f14deac5fafdfca382c791cb212feb6a293e2f25d10398831ec3
Parent Digest: sha256:9b7ba375b469f14deac5fafdfca382c791cb212feb6a293e2f25d10398831ec3
Analysis Status: analyzed
Image Type: docker
Analyzed At: 2019-08-29T18:32:06Z
Image ID: 5aa923714af111bae67025f2f98c6166d7f0c7c7d989a61212a09d8453f72180
Dockerfile Mode: Guessed
Distro: alpine
Distro Version: 3.9.4
Size: 121374720
Architecture: amd64
Layer Count: 5

Full Tag: docker.io/d4nyll/demo-frontend:dct
Tag Detected At: 2019-08-29T18:30:15Z
```
You can now retrieve the results of the vulnerability scan by running the anchore-cli image vuln <image> <type> subcommand. The security vulnerabilities are grouped into three types:

- os - vulnerabilities of the operating system and system packages
- non-os - vulnerabilities in non-system packages, such as those from JavaScript npm packages, Ruby Gems, Java Archive files, Python pip packages, etc.
- all - a combination of both os and non-os
We should use the all
type to list out all vulnerabilities.
```
$ anchore-cli --u admin --p foobar image vuln docker.io/d4nyll/demo-frontend:dct all
```
Running the command does not return anything, which means the Anchore Engine did not find any security vulnerabilities for the image at the moment - which is reassuring!
However, our application is a very simple one. The more complicated an application and its image are, the more likely they are to contain a vulnerability. For your own images, make sure you use a vulnerability scanner like Anchore to scan them before you publish them.
.dockerignore
Lastly, just like we have .gitignore
to prevent us from accidentally committing sensitive information (e.g. passwords/keys/tokens) to Git, there's a .dockerignore
file for Docker. You can add a .dockerignore
file at the root of the build context directory, and the listed files will not be included as part of the build context, and thus won't accidentally be included in the image via ADD or COPY instructions.
The syntax for .dockerignore
is very similar to .gitignore
. You specify files/directories to ignore using a newline-separated list, and the file glob patterns are matched using Go's filepath.Match
rules. For more information about .dockerignore
, refer to the documentation.
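For example, a .dockerignore for a project like ours might look something like this (the entries are illustrative - adjust them to whatever your build context actually contains):

```
# .dockerignore - keep secrets, local artefacts, and VCS metadata out of the build context
.git
.env
*.pem
npm-debug.log
node_modules
dist
```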
Next Steps
In this article, we've shown you how to use a non-root user to run your application inside the Docker container. We have also shown you how to use Docker Content Trust, Docker Bench for Security, the Anchore Engine, and .dockerignore to secure your image, for both the image developers and the end users.
However, security is an ever-changing field - what are considered the best tools today may become obsolete next year. This is why it's important to subscribe to community updates and be vigilant about new vulnerabilities.
We couldn't cover all the interesting security tools in this short article, and so we leave you with a list of tools that help secure Docker images, for you to explore in your own time: