How To Run Docker in Docker Container [3 Easy Methods]


In this blog, I will walk you through the steps required to run Docker in Docker using three different methods.

Run Docker in a Docker Container

There are three ways to achieve Docker in Docker:

  1. Run Docker by mounting docker.sock (DooD method)
  2. Docker-in-Docker (dind) method
  3. Using the Nestybox Sysbox Docker runtime

Let’s have a look at each option in detail. Make sure you have Docker installed on your host before trying this setup.

Method 1: Docker in Docker Using [/var/run/docker.sock]


What is /var/run/docker.sock?

/var/run/docker.sock is the default Unix socket the Docker daemon listens on. Unix sockets enable communication between processes on the same host.

Because the daemon listens on docker.sock, any process on the host with access to this socket can manage containers through the Docker API. This also means you can mount the host’s Docker socket into a container and control the host daemon from inside it.

For example, if you run the following command, it will return the version of the docker engine.

curl --unix-socket /var/run/docker.sock http://localhost/version

Now that you have a basic understanding of what docker.sock is, let’s see how to run Docker in Docker using it.

To run Docker inside Docker with this method, all you have to do is start a container with the host’s docker.sock mounted as a volume.

For example,

docker run -v /var/run/docker.sock:/var/run/docker.sock \
           -ti docker

Just a word of caution: if a container gets access to docker.sock, it effectively has full control over your Docker daemon (and therefore the host). When using this in real projects, understand the security risks first.

Now, from within the container, you should be able to execute docker commands for building and pushing images to the registry.

Here, the actual Docker operations happen on the host running your base container, not inside it. In other words, even though you execute docker commands from within the container, the Docker client connects to the host’s Docker engine through docker.sock.
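If you prefer a declarative setup, the same socket mount can be expressed in a Compose file. This is a minimal sketch, assuming Docker Compose is installed; the `docker-cli` service name is an arbitrary choice:

```yaml
# docker-compose.yml -- sketch of the DooD setup (service name is illustrative)
services:
  docker-cli:
    image: docker            # official image that ships the docker CLI
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # talk to the host daemon
    stdin_open: true         # equivalent of -i
    tty: true                # equivalent of -t
```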

To test this setup, use the official docker image from Docker Hub. It ships with the docker CLI binary.

Follow the steps given below to test the setup.

Step 1: Start a Docker container in interactive mode, mounting docker.sock as a volume. We will use the official docker image.

docker run -v /var/run/docker.sock:/var/run/docker.sock -ti docker

Step 2: Once you are inside the container, execute the following docker command.

docker pull ubuntu

Step 3: When you list the Docker images, you should see the Ubuntu image along with the other Docker images present on your host VM.

docker images

Step 4: Now create a test directory with a Dockerfile inside it.

mkdir test && cd test
vi Dockerfile

Copy the following Dockerfile contents to test the image build from within the container.

FROM ubuntu:18.04

LABEL maintainer="Bibin Wilson <[email protected]>"

RUN apt-get update && \
    apt-get -qy full-upgrade && \
    apt-get install -qy curl && \
    curl -sSL https://get.docker.com/ | sh

Build an image from the Dockerfile:

docker build -t test-image .

docker.sock permission error

While using docker.sock, you may get a permission denied error. In that case, you need to change the docker.sock permissions as follows.

sudo chmod 666 /var/run/docker.sock

Also, you might have to run the container with the --privileged flag to give it privileged access.

The docker.sock permissions get reset when the server restarts. To avoid this, add the chmod command to your system startup scripts.

For example, you can add the command to /etc/rc.local so that it runs automatically every time your server starts up.
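A minimal /etc/rc.local for this purpose could look like the fragment below. This is a sketch: it assumes your distribution still executes rc.local at boot (on systemd-based systems you may need to enable the rc-local service first), and the file must be executable:

```shell
#!/bin/bash
# /etc/rc.local -- executed once at boot
# Relax docker.sock permissions so non-root users can reach the daemon.
# NOTE: mode 666 is a deliberate security trade-off.
chmod 666 /var/run/docker.sock
exit 0
```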

Also, keep in mind that 666 permissions open a security hole: any local user can control the Docker daemon. Consult your security team before implementing this in production-level projects.

Method 2: Docker in Docker Using DinD


This method actually creates a child container inside a Docker container. Use this method only if you really want to have the containers and images inside the container. Otherwise, I would suggest you use the first approach.

For this, you just need to use the official docker image with dind tag. The dind image is baked with the required utilities for Docker to run inside a docker container.

Follow the steps to test the setup.

Note: This requires your container to be run in privileged mode.

Step 1: Create a container named dind-test with docker:dind image

docker run --privileged -d --name dind-test docker:dind

Step 2: Log in to the container using exec.

docker exec -it dind-test /bin/sh

Now, perform steps 2 to 4 from the previous method and validate docker command-line instructions and image build.
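The same dind container can also be described in a Compose file. A minimal sketch (the service name is illustrative; setting DOCKER_TLS_CERTDIR empty disables TLS inside dind and is suitable for local experiments only):

```yaml
# docker-compose.yml -- sketch of a dind setup
services:
  dind-test:
    image: docker:dind
    privileged: true           # dind requires privileged mode
    environment:
      DOCKER_TLS_CERTDIR: ""   # disable TLS for local testing only
```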

Method 3: Docker in Docker Using Sysbox Runtime


Methods 1 and 2 have drawbacks in terms of security: one exposes the host’s Docker daemon, and the other runs the container in privileged mode. Nestybox tries to solve both problems with its Sysbox container runtime.

A container created with the Nestybox Sysbox runtime provides a virtual environment capable of running systemd, Docker, and even Kubernetes without privileged access to the underlying host system.

A full explanation of Sysbox is beyond the scope of this post. Please refer to this page to learn more about it.

To get a glimpse, let us now try out an example.

Step 1: Install the sysbox runtime environment. Refer to this page to get the latest official instructions on installing sysbox runtime.

Step 2: Once you have the sysbox runtime available, all you have to do is start the docker container with a sysbox runtime flag as shown below. Here we are using the official docker dind image.

docker run --runtime=sysbox-runc --name sysbox-dind -d docker:dind

Step 3: Now open an exec session into the sysbox-dind container.

docker exec -it sysbox-dind /bin/sh

Now, you can try building images with the Dockerfile as shown in the previous methods.

Key Considerations

  1. Use Docker in Docker only if it is a requirement. Do the POCs and enough testing before migrating any workflow to the Docker-in-Docker method.
  2. While using containers in privileged mode, make sure you get the necessary approvals from enterprise security teams on what you are planning to do.
  3. When using Docker in Docker with kubernetes pods there are certain challenges. Refer to this blog to know more about it.
  4. If you plan to use Nestybox (Sysbox), make sure it is tested and approved by enterprise architects/security teams.

Docker in Docker Use Cases

Here are a few use cases for running Docker inside a Docker container.

  1. One potential use case for docker in docker is for the CI/CD pipeline, where you need to build and push docker images to a container registry after a successful code build.
  2. Modern CI/CD systems support Docker-based agents or runners where you can run all the build steps inside a container and build container images inside a container agent.
  3. Building Docker images with a VM is pretty straightforward. However, when you plan to use Jenkins Docker-based dynamic agents for your CI/CD pipelines, docker in docker comes as a must-have functionality.
  4. Sandboxed environments.
  5. For experimental purposes on your local development workstation.
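As an illustration of the CI/CD use case, here is a sketch of a GitLab CI job that builds and pushes an image using a dind service container. It assumes a GitLab runner configured to allow privileged mode; the $CI_REGISTRY_IMAGE and $CI_COMMIT_SHORT_SHA variables are provided by GitLab itself:

```yaml
# .gitlab-ci.yml -- sketch only; requires a runner that permits privileged mode
build-image:
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""   # disable TLS between the job and the dind service
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```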

FAQs

Here are some frequently asked Docker-in-Docker questions.

Is running Docker in Docker secure?

Running Docker in Docker using the docker.sock or dind method is less secure, because the container gains complete privileges over the Docker daemon.

How to run docker in docker in Jenkins?

You can use the Jenkins dynamic docker agent setup and mount the docker.sock to the agent container to execute docker commands from within the agent container.

Is there any performance impact in running Docker in Docker?

The method you choose has no significant effect on container performance; performance depends mainly on the underlying hardware.

Conclusion

In this blog, we looked at three different methods to run Docker in Docker. When using these methods in production environments, always consult your enterprise security team for compliance.

If you are using Kubernetes, you could try building Docker images with Kaniko, which does not require privileged access to the host.
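For reference, Kaniko runs as an ordinary Kubernetes pod. A minimal sketch is shown below; the Git context URL, the destination registry, and the `regcred` registry-credentials secret are all placeholders you would replace with your own values:

```yaml
# kaniko-pod.yaml -- sketch; repo, destination, and secret name are placeholders
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --context=git://github.com/example/repo.git    # placeholder repo
        - --dockerfile=Dockerfile
        - --destination=registry.example.com/app:latest  # placeholder registry
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker
  volumes:
    - name: docker-config
      secret:
        secretName: regcred            # docker-registry type secret
        items:
          - key: .dockerconfigjson
            path: config.json
```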

15 comments
  1. Thanks for the article. for the 3rd method, if both are run in my local and there is already a kubernetes cluster (using kind) in my local, can the worker dind deploy to the cluster (using kubectl apply -f yaml)?

  2. Hi Devopscube, We are hosting the agents on Azure Red Hat OpenShift. Getting the below issue from the Azure DevOps pipeline while calling the container inside an agent: “docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?” I tried every technique you suggested. But it doesn’t seem to be working for me. I am unable to call the container inside an agent (container). I believe OpenShift uses CRI-O.

    Why don’t you use the default openshift Build capability for building docker images

    1. Hi Srinivas,

      Do the nodes have docker runtime? All these options would work only if you have docker as the underlying container runtime.

  3. Great article but not 100% following it if it would suit my senario.

    At the moment I am learning docker and portainer and looking for a way to replicate my old hosting config but using the latest tech like containers.

    I currently have 3 KVM Virtual Images run by Cloudmin each running ubuntu server 16.04 and all have their own individual IP address

    DELL R630 Host Server x.x.x.102
    |
    |_ Kvm 1
    | |_ IP x.x.x.103
    | |_ Apache2
    | |_ Mysql
    | |_ Php website
    |
    |_ Kvm 2
    | |_ IP x.x.x.104
    | |_ Nginx
    | |_ Mysql
    | |_ ASPNETCORE website
    |
    |_ Kvm 3
    | |_ IP x.x.x.105
    | |_ Apache2
    | |_ Mysql
    | |_ Php website

    All runs fine but I am going to be purchasing a new DELL server and looking at installing portainer and convert all my current websites into docker and running each image inside portainer, so my questions are:

    1 – Should I install docker/portainer on the bare metal server and do what you say, which is create a docker container inside a docker container which basically replicates having 3 VMs on the host each with docker installed and then connect to each instance with portainer edge agent

    2 – Should I just install VMs and install docker on each VM and connect to then with portainer

    3 – Is there a way that I can host all 3 hosts at bare metal level? I have tried adding 3 networks in docker and then create 3 nginx servers all needing to use port 80 but each container is on 1 of the 3 created networks and if fails

    1. Hi James, This article talks about running docker inside docker for CI/CD and testing purposes.

      For your use case, you need to try Docker swarm or Kubernetes to orchestrate containers.

  4. using the method 2 in windows, how can i connect my host folders to the container?
    i have try with docker run --privileged --name my_container -d my_image -v //c/my/host/:/app1 or -v C://my/host/:app1 but never works, is there another way? that way i can edit my python scripts in my host windows, using the UI, and automatically they will go to my docker container that is Linux.

  5. minor issue with the pull.. should probably be fixed docker-image –> docker

    docker run -v /var/run/docker.sock:/var/run/docker.sock \
               -ti docker-image
    
    1. Hi Milan,

      With method 1, i.e., running Docker by mounting docker.sock, you just create a sibling container. It doesn’t really have a performance impact.

      For Nestybox method, you can directly ask the project maintainers. They will be able to give you a good explanation.

      For the DIND method also, I never heard of any significant performance impact.
