Docker inside a Docker Container 💻

Rahul Sil
Nov 30, 2021 · 9 min read


Containerization is the packaging together of software code with all the necessary libraries and other dependencies so that it can run in isolation.

YouTube video explaining containers.

The software is isolated inside containers so that it can be moved and executed consistently in any environment and on any infrastructure, independent of that environment or infrastructure’s operating system.

The container acts like a bubble or a computing environment surrounding the application and keeping it independent of its surroundings. It's basically a fully functional and portable computing environment.

The idea of process isolation has been around for years, but when Docker introduced Docker Engine in 2013, it set a standard for container use with tools that were easy for developers to use, as well as a universal approach for packaging, which then accelerated the adoption of container technology.

Today developers can choose from a selection of containerization platforms and tools that support the Open Container Initiative (OCI) standards pioneered by Docker.

Why Containerized Applications? 🤔

Software applications typically depend on other libraries, configuration files, or services that are provided by the runtime environment. The traditional runtime environment for a software application is a physical host or a virtual machine, and application dependencies are installed as a part of the host.

For example, consider a Python application that requires access to a common shared library that implements the TLS protocol. Traditionally, a system administrator installs the required package that provides the shared library before installing the Python application.

The major drawback to traditionally deployed software applications is that the application’s dependencies are entangled with the runtime environment. An application may break when any updates or patches are applied to the base operating system (OS).

For example, an OS update to the TLS shared library removes TLS 1.0 as a supported protocol. This breaks the deployed Python application because it is written to use the TLS 1.0 protocol for network requests. This forces the system administrator to roll back the OS update to keep the application running, preventing other applications from using the benefits of the updated package.

Therefore, a company developing traditional software applications may require a full set of tests to guarantee that an OS update does not affect applications running on the host.

To overcome these dependency issues and to make the software application portable, we isolate it inside a container.
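
For example, here is a minimal sketch of a Dockerfile for a Python application like the one above; the python:3.9-slim tag, app.py, and the requests dependency are illustrative assumptions, not from a real project:

FROM python:3.9-slim
# The image carries its own TLS/OpenSSL libraries, so patches to the host OS
# no longer break the application.
COPY app.py /app/app.py
RUN pip install requests
CMD ["python", "/app/app.py"]

Building and running it packages the interpreter, the libraries, and the code into one portable unit:

$ sudo docker build -t my-python-app .
$ sudo docker run my-python-app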

Container versus Operating System differences.

What is Docker?

Docker logo.

Docker is an open-source platform for building, deploying, and managing containerized applications.

The Docker technology uses the Linux Kernel and features of the kernel, like cgroups and namespaces, to segregate processes so that they run independently. This independence is the intention of containers - the ability to run multiple processes and apps separately from one another to make better use of your infrastructure while retaining the security you would have with separate systems.
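
This process isolation is easy to see in practice. Listing processes inside a freshly started container shows only the container’s own process (alpine is used here only because it is a tiny image whose busybox ps is enough for the demo):

$ sudo docker run --rm alpine ps

The output shows a single process with PID 1 (the ps command itself), because the container runs in its own PID namespace and cannot see the host’s processes.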

So far we have seen that Docker is a containerization tool that helps launch containers. So what’s the need to launch a Docker Engine inside a Docker container? 🤔

✏️ The use-cases where this comes in handy are —

🔹When a Jenkins server runs inside a container and needs to launch different nodes for running the CI/CD pipeline, Jenkins has to start those containers from inside its own container. Here, a Docker Engine inside a Docker container comes in handy (see the sketch after this list).

🔹For experimental purposes on your local development workstation.

🔹Sandboxed environments.
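
For the Jenkins use case, a common pattern is to start the Jenkins container with the host’s Docker socket mounted, as in the sketch below; jenkins/jenkins:lts and the port mappings are the usual defaults, and the stock image does not ship a docker CLI, so in practice you would extend the image with one:

$ sudo docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts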

Before proceeding further, let’s get Docker installed on your system.

Head over to the official Docker installation documentation and install Docker as per your Linux distribution.

For this article, I will be using Fedora 35 💻 as my base system for installing Docker Engine and going ahead with the rest of the practical.

📌 We will be using the Docker Community Edition.

✏️ For setting up Docker Engine in Fedora run the following commands -

$ sudo dnf -y install dnf-plugins-core

$ sudo dnf config-manager \
--add-repo \
https://download.docker.com/linux/fedora/docker-ce.repo
$ sudo dnf install docker-ce -y
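
📌 After the packages are installed, the Docker daemon is not started automatically, so start it (and enable it at boot) with systemd:

$ sudo systemctl enable --now docker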

As now everything is set up, let’s dive deep into the main practical. 🚀

There are 3 methods of doing this. Let’s go through them one by one. 🧑‍💻

1. Using the official docker:dind image from Docker Hub

This container image ships with the Docker binaries already installed, so we can use it directly and get a Docker Engine running inside the Docker container.

This method creates child containers inside a container.

Child containers getting launched inside a container.

Steps to achieve this -

  • Pull the docker:dind image from Docker Hub.
$ sudo docker pull docker:dind
Pulling the docker:dind image from Docker hub.

Now let me show how this works by running a Docker container and then running another container from inside it, i.e., launching a child container.

Docker inside Docker using docker:dind method.

The command that I used to launch the Docker container —

$ sudo docker run -d --privileged --name dockerindocker docker:dind

📌 Note: This requires your container to be run in privileged mode.

Root docker container.

Here, by Root container, I mean the parent container from which the child containers will get launched.

ID of the parent container — 6fd2f677c02d

From inside that container, we pull the centos image and then launch a child container from it.
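
If you want to follow along, the commands look roughly like this (the / # prompt is the shell inside the parent container; the image and options are just what I used, and the IDs will differ on your machine):

$ sudo docker exec -it dockerindocker sh
/ # docker pull centos
/ # docker run -it centos /bin/bash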

Child container.

The ID of the child container — dfb2cfc11494

Now, if you return to your Docker host system and run the “sudo docker ps” command, you will see that only one container is running, and that is the parent container. The child container is running inside the parent container.

This proves that the inner container does not get launched as another container on the Docker host itself.

So, this was one of the ways of using Docker inside Docker.

2. Using the /var/run/docker.sock file

👉 What is /var/run/docker.sock ?

🔸/var/run/docker.sock is the default Unix socket that the Docker daemon listens on.

🔸Sockets are meant for communication between processes on the same host. The Docker daemon listens on docker.sock by default. If you are on the same host where the Docker daemon is running, you can use /var/run/docker.sock to manage containers.
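
You can even talk to the daemon through this socket directly, for example with curl (assuming a curl build with --unix-socket support is installed on the host); the /version endpoint of the Docker Engine API returns the daemon’s version details as JSON:

$ curl --unix-socket /var/run/docker.sock http://localhost/version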

In this method, we will be using the docker:latest image to get the Docker Engine binaries in the image, and we will mount the /var/run/docker.sock file of the Docker host into the container at /var/run/docker.sock.

$ sudo docker run -it \
-v /var/run/docker.sock:/var/run/docker.sock \
docker:latest

🚨 Note: If your container gets access to docker.sock, it effectively has full control over your Docker daemon. So before using this in real projects, understand the security risks.

Docker in Docker using the docker.sock file.

As we can see, by running the above command a container gets launched with the name “practical_archimedes”. Now, as we enter the container and run the “docker ps” command, we can see that it lists that same container.

That is because the docker command inside the container is talking to the same Docker daemon running on the Docker host system.

Sibling containers.

When we launch a container from inside the container, the new container gets launched as a sibling of the previous container on the Docker host system.

This can be confirmed by comparing the output of the “docker ps” command inside the container and outside the container on the Docker host.

docker ps output from inside the container.
docker ps output from the docker host.

As we can see, both outputs are the same, so we can conclude that the two containers we launched are sibling containers.
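
To try this yourself, launch something from inside the docker:latest container and then check the host; the name from-inside and the alpine image are only for illustration:

/ # docker run -d --name from-inside alpine sleep 300

Then, back on the Docker host:

$ sudo docker ps

Both the docker:latest container and from-inside show up next to each other in the host’s list, confirming that they are siblings managed by the same daemon.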

In this method, we are basically driving the Docker daemon from inside a container, thanks to the access to the docker.sock file, rather than actually starting a nested container inside the container.

3. Using a base centos image

This method is similar to the first one, except that here we use a base centos image, set everything up ourselves, and see what troubles we may run into along the way, which is a good way to sharpen our troubleshooting skills.

Step 1: Start a Docker container from the centos image.

$ sudo docker run -it --name dockerindocker centos:latest

Step 2: After the container has been launched, set up the yum repository inside the container and then install docker-ce.

# yum install -y yum-utils

# yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
# yum install docker-ce -y

It will take some time, and Docker Community Edition will be installed in the container.

Step 3: Now, to start the Docker Engine, use the following command -

# dockerd &

As we run this command, a lot of logs will scroll by and, at the end, an error will appear and the Docker daemon will fail to start.

The error will be something like — “….. Permission denied (you must be root)” at the end.

To overcome this error, we have to use the “--privileged” flag while running the container.

👉 Since we would otherwise have to launch a new container and install Docker all over again, we can commit the container that we have just created and create an image from it.

$ sudo docker stop dockerindocker
$ sudo docker commit dockerindocker dockerindocker:v1
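
As an alternative to committing, the same image could also be described in a Dockerfile; this is only a sketch of the steps above, not the exact image from my screenshots:

FROM centos:latest
RUN yum install -y yum-utils && \
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo && \
    yum install -y docker-ce

$ sudo docker build -t dockerindocker:v1 .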

Step 4: Now let’s start the container using the “--privileged” flag.

$ sudo docker run -it --privileged dockerindocker:v1

After running the above command, a container gets launched in privileged mode. Now run the following command to test whether the Docker Engine starts or not.

# dockerd &

If at the end of this command’s output you get something like “API listen on /var/run/docker.sock”, it means that the Docker daemon has started successfully.

Note: If even after this you get an error that states something like “/var/lib/docker …..”, then you have to use the following command -

$ sudo docker run -it --privileged \
-v /var/lib/docker \
dockerindocker:v1

I did not face this issue on Fedora 35, but I did face it on an RHEL 8.4 system.

After this, things are the same as in Method 1. You can launch a container inside the container, which will be the child container.
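
For example, the same child-container test from Method 1 works here too, run from inside the privileged container once dockerd is up:

# docker pull centos
# docker run -it centos /bin/bash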

So, with that, we come to an end to this article. 💖

Researching on this topic has been an excellent learning journey. 💖💖

👉 If you want to know more about the good, the bad, and the ugly of Docker in Docker, you can check out the article linked below. It helped me a lot to understand things in a simple manner.

I hope you liked the article and got to know some interesting things that we can achieve using Docker.

I would definitely like to hear your reviews and feedback, which will help me improve my content in future technical blogs. 🗃️🙏

Follow me on Medium as I will come up with articles on various technologies like Cloud Computing, DevOps, Automation, and their integration.

You can also check my LinkedIn profile and connect with me.

That’s all for now. Thank You !! 😊✌
