Docker Best Practices

Key Takeaways

  1. Docker is used for fast, consistent application delivery. It also provides a cost-effective alternative to VMs, letting you use more of your server capacity to achieve your goals.
  2. Implementing Docker development best practices helps build secure containers and deliver reliable container-based applications.
  3. The first step is to select the right base image: it should come from a trusted source and be kept small.
  4. Docker images are immutable. To keep your images up-to-date and secure, rebuild your images often with updated dependencies.
  5. Protect your applications from external attacks by running processes within the container as a designated non-root user.

Although Docker has become almost synonymous with containers and is used by many software development companies, several other container tools and platforms have evolved to facilitate the development and operation of containers. Securing container-based apps built with those technologies follows security principles similar to Docker's. To help you create more secure containers, we have assembled some of the most important Docker best practices into one blog post, making it the most thorough practical advice available. Shall we begin?

1. Docker Development Best Practices

In software development, adhering to best practices while working with Docker can improve the efficiency and reliability of your software projects. The best practices below help you optimize images, improve security in the Docker container runtime and host OS, and ensure smooth deployment processes and maintainable Docker environments.

1.1 How to Keep Your Images Small

Small images are faster to pull over the network and load into memory when starting containers or services. There are a few rules of thumb to keep the image size small:

  • Begin with a suitable base image: if you require a JDK, for example, consider basing your work on an existing image like eclipse-temurin instead of creating a new image from the ground up.
  • Implement multistage builds: for example, you can build your Java program with the maven image, then switch to the tomcat image and copy the Java artifacts to the right location, all within the same Dockerfile (see the sketch below). This way the final image contains only the built artifacts and the runtime environment, rather than all of the build-time libraries and dependencies.
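
A minimal sketch of that Maven-to-Tomcat flow; the image tags, the artifact name myapp.war, and the build paths are illustrative assumptions, not taken from the original:

FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /build
COPY . .
# Build the WAR file; only this artifact is carried into the next stage
RUN mvn -q package

FROM tomcat:10.1-jre17
# Copy the built artifact from the build stage into Tomcat's webapps directory
COPY --from=build /build/target/myapp.war /usr/local/tomcat/webapps/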
  • If you are using a version of Docker without multistage builds, you can still keep your image smaller by using fewer layers, which means fewer RUN lines in your Dockerfile. Use your shell's built-in capabilities to merge related commands into one RUN line. Consider these two fragments: the first creates two image layers, while the second creates just one.
RUN apt-get -y update
RUN apt-get install -y python

or

RUN apt-get -y update && apt-get install -y python
  • If you have multiple images with a lot in common, consider creating your base image with the shared components, and basing your unique images on that. Docker only needs to load the common layers once, and they are cached. This means that your derivative images use memory on the Docker host more efficiently and load faster.
  • To keep your production image lean but allow for debugging, consider using the production image as the base image for the debug image. Additional testing or debugging tools can be added on top of the production image.
  • When building images for deployment to different environments, always tag them with useful tags that codify version information, intended destination (prod or test, for instance), stability, or other useful information, as in the example below. Don't rely on the automatically created latest tag.
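
For example, a tagging scheme along these lines (the image name and version numbers are illustrative) makes it obvious what a given image contains and where it is meant to run:

docker build -t myapp:1.4.2 .
# Add environment- and stability-specific tags pointing at the same image
docker tag myapp:1.4.2 myapp:1.4.2-prod
docker tag myapp:1.4.2 myapp:1.4.2-stable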

1.2 Where and How to Persist Application Data

  • Keep application data out of the container’s writable layer and away from storage drivers. Compared to utilizing volumes or bind mounts, storing data in the container’s writable layer makes it larger and less efficient from an I/O standpoint.
  • Instead, use volumes to store data.
  • When working in a development environment, bind mounts can be useful for mounting directories such as source code or freshly built binaries into containers. For production, mount a volume in the same location where you used a bind mount during development, as sketched below.
  • In production, use secrets to store sensitive application data that services consume, and configs for non-sensitive data such as configuration files. To take advantage of these service-only features, transform standalone containers into single-replica services.
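
A minimal sketch of the development-versus-production mount pattern (the image name and paths are placeholders):

# Development: bind-mount the local source tree into the container
docker run -v "$(pwd)":/app my-image

# Production: mount a named volume at the same location instead
docker run -v app_data:/app my-image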

1.3 Use CI/CD for Testing and Deployment

Use Docker Hub or another continuous integration/continuous delivery pipeline to automatically build, tag, and test Docker images whenever you make a pull request or check in changes to source control.

Make the process even more secure by requiring the teams responsible for development, quality, and security to sign images before they are sent into production; each team can then test and approve an image before releasing it.
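
A minimal sketch of such pipeline steps (the registry URL, test script, and environment variable are placeholders; the exact syntax depends on your CI provider):

# Build an image tagged with the commit being tested
docker build -t registry.example.com/myapp:${GIT_COMMIT} .

# Run the test suite inside the freshly built image
docker run --rm registry.example.com/myapp:${GIT_COMMIT} ./run-tests.sh

# Push only after the tests pass
docker push registry.example.com/myapp:${GIT_COMMIT}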

2. Docker Best Practices for Securing Docker Images

Let’s have a look at the best practices for Docker image security.

2.1 Use Minimal Base Images

When creating a secure image, selecting an appropriate base image is the first step. Choose a small, reputable image and make sure it is well constructed.

Over 8.3 million repositories are available on Docker Hub. Among them are Docker Official Images, a curated collection of open-source and drop-in solution repositories published by Docker. Docker Hub also hosts images from Verified Publishers: organizations that work with Docker to produce and maintain high-quality images, with Docker vouching for the legitimacy of their repository content. Keep an eye out for the Verified Publisher and Official Image badges when you choose your base image.

Pick a simple base image that fits your requirements when creating your image with a Dockerfile. A smaller base image not only makes your image smaller and faster to download, but also reduces the number of vulnerabilities introduced by dependencies, making your image more portable.

As an additional precaution, consider maintaining two separate base images: one for development and unit testing, and another for production and beyond. Build tools such as compilers, build systems, and debuggers are rarely necessary once the image moves past development, and a minimal image with few dependencies reduces the attack surface, as the comparison below illustrates.
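
As a rough illustration (the image tags are examples, not a recommendation for your particular stack), a JDK-based image used for building can be swapped for a much smaller runtime-only image in production:

# Development/build: full JDK plus build tooling
FROM eclipse-temurin:17-jdk

# Production: a slimmer JRE-only image with a smaller attack surface
FROM eclipse-temurin:17-jre-alpine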

2.2 Use Fixed Tags for Immutability

Versions of Docker images are often managed using tags. For example, the most recent version of an image may be identified by the latest tag. But because tags are mutable, many different images can carry the same latest tag over time, which can make automated builds behave inconsistently and confusingly.

To make sure tags can’t be changed or altered by later image edits, you can choose from three primary approaches:

  • If an image has many tags, the build process should choose the most specific one, such as a tag that pins the version and operating system, because more specific tags are less likely to change underneath you.
  • Keep a local copy of the images, perhaps in a private registry, and reference the tags from that copy.
  • Docker's Content Trust mechanism allows images to be cryptographically signed with a private key, ensuring that both the image and its tags remain unaltered (see the example below).
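
Two concrete ways to get immutability in practice: enabling Docker Content Trust so only signed images are pulled, and pinning an image by digest instead of by tag. The digest below is a placeholder, not a real value:

# Shell: refuse unsigned images for this session
export DOCKER_CONTENT_TRUST=1
docker pull nginx:1.25

# Dockerfile: pin an exact image by digest (placeholder shown)
FROM nginx@sha256:<digest-of-the-image-you-verified>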

2.3 Use of Non-Root User Accounts

Recent research has shown that the majority of images, 58% to be exact, run the container entrypoint as the root user (UID 0), which goes against Dockerfile recommended practices. Very few use cases actually require the container to run as root, so be sure to include the USER instruction to change the default effective UID to a non-root user.

In addition, running as root on OpenShift requires extra SecurityContextConstraints, and other execution environments may automatically prohibit containers running as root.

To run without root privileges, you might need to add a few lines to your Dockerfile, such as:

  • Verify that the user listed in the USER instruction is indeed present within the container.
  • Make sure that the process has the necessary permissions to read and write to the specified locations in the file system.
# Base image
FROM alpine:3.12

# Create a non-root user 'app' and give it ownership of the data directory
RUN adduser -D app && mkdir -p /myapp-data && chown -R app /myapp-data

# ... copy application files

# Switch to the 'app' user
USER app

# Set the default command to run the application
ENTRYPOINT ["/myapp"]

It is possible to encounter containers that start as root and then switch to a regular user with the gosu or su-exec utilities, as in the sketch below.
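
A minimal entrypoint sketch of that step-down pattern, assuming gosu is installed in the image and an app user already exists:

#!/bin/sh
set -e
# Fix volume ownership while still running as root
chown -R app /myapp-data
# Drop privileges and replace this shell with the real command
exec gosu app "$@"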

Another reason containers might use sudo is to execute certain commands as root.

Although these two options are preferable to running as root outright, they might not be compatible with restricted environments like OpenShift.

3. Best Practices for Local Docker

Let’s discuss Local Docker best practices in detail.

3.1 Cache Dependencies in Named Volumes

Install code dependencies when the container starts up, rather than baking them into the image. Using Docker's named volumes to cache them significantly speeds things up compared to installing every gem, pip, and yarn library from scratch each time the services are restarted (hello NOKOGIRI). The compose configuration might then evolve into:

services:
  rails_app:
    image: custom_app_rails
    build:
      context: .
      dockerfile: ./.docker-config/rails/Dockerfile
    command: ./bin/rails server -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
      - gems_data:/usr/local/bundle
      - yarn_data:/app/node_modules
 
  node_app:
    image: custom_app_node
    command: ./bin/webpack-dev-server
    volumes:
      - .:/app
      - yarn_data:/app/node_modules
 
volumes:
  gems_data:
  yarn_data:

To significantly reduce startup time, put the built dependencies in named volumes. The exact locations to mount the volumes will differ for each stack, but the general idea remains the same.

3.2 Don’t Put Code or App-Level Dependencies Into the Image

When you start the services with docker-compose, the application code is mounted into the container and synchronized between the container and the local file system. The main Dockerfile, where the app runs, should only contain the software needed to execute the app.

You should only include system-level dependencies in your Dockerfile, such as ImageMagick, and not application-level dependencies such as Rubygems and NPM packages. When application-level dependencies are baked into the image, rebuilding it every time a new one is added becomes tedious and error-prone. Instead, incorporate the installation of such requirements into a startup routine, such as an entrypoint script.

3.3 Start Entrypoint Scripts with set -e and End with exec "$@"

Entrypoint scripts that install dependencies and handle additional setup are crucial to the configuration shown here. Each of these scripts must begin and end with the following elements:

  • Immediately after #!/bin/bash (or a similar shebang) at the start of the script, insert set -e. The script will then terminate automatically if any line returns an error.
  • Put exec "$@" at the end of the file. Without it, the command directive from your Dockerfile or compose file will never be executed (see the sketch below).
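
Putting the pieces together, a minimal entrypoint sketch for the Rails/yarn setup above (the dependency commands are illustrative for that stack):

#!/bin/bash
set -e                           # abort immediately if any command fails

# Install missing dependencies into the cached named volumes
bundle check || bundle install
yarn install --check-files

exec "$@"                        # hand off to the command from the compose file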

4. Best Practices for Working with Dockerfiles

Here are detailed best practices for working with Dockerfiles.

4.1 Use Multi-Stage Dockerfiles

Now imagine that you have some project contents (such as development and testing tools and libraries) that are important for the build process but aren’t necessary for the application to execute in the final image.

Including these artifacts in the final image increases its size and attack surface even though they are unnecessary for the program to run.

The question then becomes how to separate the build phase from the runtime phase; specifically, how can the build dependencies be removed from the final image while remaining available during image construction? Multi-stage builds are the answer: they let you use several temporary images during the build process, with only the last stage becoming the final artifact:

Example:

# Stage 1: Build the React app (pin a version rather than using 'latest')
FROM node:20 AS react_builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: Create the production image; only the build output is copied in
FROM nginx:stable
COPY --from=react_builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

4.2 Use the Least Privileged User

Now, which OS user will be used to launch the program inside this container when we build and run it?

If no user is specified in the Dockerfile, Docker runs the container as root by default, even though running containers with root privileges is rarely necessary. This creates a security risk: a process that breaks out of the container may end up with root access to the Docker host.

An attacker can therefore more easily take control of the host and its processes, not just the container, especially if the application inside the container is vulnerable to attack.

The easiest way to avoid this is to create a dedicated user and group in the Docker image and run the application as that user.

You can then launch the program under that username with the USER directive.

Tips: Some images come packaged with a generic non-root user, which saves you from creating a new one. For instance, the Node.js image already includes a generic user named node that you can use to run the application inside the container, as the sketch below shows.
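
A minimal Dockerfile sketch using the node user that ships with the official Node.js image (the paths and the server.js entry file are placeholders):

FROM node:20-alpine
WORKDIR /app
# Copy files owned by the built-in non-root 'node' user
COPY --chown=node:node . .
RUN npm ci --omit=dev
# Run the application as the non-root user
USER node
CMD ["node", "server.js"]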

4.3 Organize Your Docker Commands

  • Combine commands into a single RUN instruction whenever possible; for instance, you can run many instructions using the && operator.
  • To minimize the amount of file system modifications, arrange the instructions in a logical sequence. For instance, group operations that modify the same files or directories together.
  • If your Dockerfile still needs a large number of commands, reevaluate your current approach; there is often a simpler structure.
  • One way to achieve this is to reduce the number of COPY commands, as in the Apache web server sketch below, which consolidates several COPY instructions into one.
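
A minimal sketch of that consolidation (the file names and htdocs layout are illustrative):

# Instead of one COPY per file:
#   COPY index.html /usr/local/apache2/htdocs/
#   COPY style.css  /usr/local/apache2/htdocs/
#   COPY app.js     /usr/local/apache2/htdocs/
# copy the whole directory in a single instruction:
COPY ./htdocs/ /usr/local/apache2/htdocs/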

5. Best Practices for Securing the Host OS

Below are a few best practices for securing the host OS with Docker.

5.1 OS Vulnerabilities and Updates

After selecting an operating system, it is critical to establish consistent procedures and tools for validating the versions of packages and components within the base OS. Bear in mind that even a container-specific operating system has components that may be vulnerable and require fixing. Regularly scan and check for component updates using tools offered by the operating system vendor or other reputable organizations.

To be safe, always upgrade components when vendors recommend it, even if the OS package contains no known security flaws; you can also reinstall an updated operating system if that is more convenient. Just as containers should be immutable, the host running containerized apps should maintain immutability as well: persistent data should live outside the host OS itself. Following this best practice prevents drift and drastically lowers the attack surface. Finally, container runtime engines like Docker frequently ship updates with new features and bug fixes, and applying the latest patches helps reduce vulnerabilities.

5.2 Audit Considerations for Docker Runtime Environments

Let’s examine the following:

  • Container daemon activities
  • These files and directories:
    • /var/lib/docker
    • /etc/docker
    • docker.service
    • docker.socket
    • /etc/default/docker
    • /etc/docker/daemon.json
    • /usr/bin/docker-containerd
    • /usr/bin/docker-runc
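
On Linux hosts that use auditd, a minimal sketch of audit rules covering the items above might look like this (adjust the paths to your distribution and reload the rules afterwards, e.g. with augenrules --load):

# /etc/audit/rules.d/docker.rules
-w /var/lib/docker -p rwxa -k docker
-w /etc/docker -p rwxa -k docker
-w /etc/docker/daemon.json -p rwxa -k docker
-w /etc/default/docker -p rwxa -k docker
-w /usr/bin/docker-containerd -p rwxa -k docker
-w /usr/bin/docker-runc -p rwxa -k docker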

5.3 Host File System

Ensure that containers operate with the minimum required file system permissions. Containers should not be able to mount sensitive host directories, particularly folders containing OS configuration data. Since the Docker service runs as root, an attacker who compromises it could otherwise gain control of the host system.

6. Best Practices for Securing Docker Container Runtime

Let’s follow these practices for the security of your Docker container runtime.

6.1 Do Not Start Containers in Privileged Mode

Unless strictly necessary, avoid privileged mode (--privileged) due to the security risks it poses. A container running in privileged mode is granted all Linux capabilities and is freed from the limitations normally imposed by cgroups, which gives it access to many features of the host system.

Privileged mode is rarely necessary for containerized programs; it is mainly used by applications that require full host access or the ability to control other Docker containers. Where only a single extra privilege is needed, granting that capability alone is the safer route, as sketched below.
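
A short sketch of that alternative (the capability and image name are illustrative):

# Avoid: nearly unrestricted access to the host
docker run --privileged my-image

# Prefer: drop everything, then add back only what is needed
docker run --cap-drop=ALL --cap-add=NET_ADMIN my-image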

6.2 Vulnerabilities in Running Containers

You may add files to your container image using the COPY and ADD instructions in your Dockerfile. The key distinction is ADD's suite of extra features, which include automatically extracting compressed files and downloading files from URLs, among other things.

These supplementary features of the ADD command can open security holes. For instance, a Docker container might be infiltrated with malware if you use ADD to download a file from an insecure URL. Using COPY in your Dockerfiles is therefore the safer option, as illustrated below.
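
A short illustration of the difference (the URL and paths are placeholders):

# Riskier: ADD fetches remote content at build time
# ADD https://example.com/archive.tar.gz /tmp/

# Safer: COPY only brings in local, version-controlled files
COPY ./app /app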

6.3 Use Read-Only Filesystem Mode

Run containers with their root filesystem in read-only mode so that writes are restricted to explicitly designated locations, which you can then easily monitor. Read-only filesystems are one way to make containers more secure; since containers should be immutable, avoid writing data inside them and designate a specific volume for writes instead, as shown below.
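
A minimal sketch of running a container this way (the image and volume names are placeholders):

# Root filesystem is read-only; writes go only to the tmpfs and the named volume
docker run --read-only --tmpfs /tmp -v app_data:/data my-image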

7. Conclusion

With its large user base and many useful features, Docker is a strong fit for the cloud-native ecosystem and will likely remain a dominant player in the industry. It offers significant benefits for programmers, and many companies aspire to adopt DevOps principles. Many developers and organizations continue to rely on Docker for developing and releasing software, so familiarity with the Dockerfile creation process is essential. Hopefully, this post has given you enough knowledge to create a Dockerfile that follows best practices.

Mohit Savaliya

Mohit Savaliya looks after operations at TatvaSoft and leverages his technical background to understand Microservices architecture. He showcases his technical expertise through bylines, collaborating with development teams to bring out the best trending topics in Cloud and DevOps.
