Latest update: January 7, 2026
Overview
The purpose of this lab is to provide a concrete overview of the main Linux facilities that are used to run containers.
This lab session is mainly based on observations. There is no point in rushing through the different steps; quite the contrary. The lab will only be interesting and insightful if you take the time to understand each step (and the corresponding commands / code lines) in detail, using additional/external documentation such as Linux man pages when necessary.
This lab is made of 3 parts:
- Part 1 (main part) provides an overview of the main container concepts using only shell commands.
- Part 2 (optional) provides more details about the implementation of container management tools.
- Part 3 (optional) provides more details about other topics such as container images.
Part 1: Linux containers in (less than) 100 lines of shell
Introduction
- This part is essentially based on a presentation given by Michael Kerrisk in January 2025.
- All the detailed information needed to understand the code and run it step by step is available in the slides of the talk.
- A video recording of the presentation is also available. Watching the video is insightful but not necessary to follow the tutorial (the slides and the additional guidelines below are self-contained).
- All the files needed for this lab/demo are available in the consh directory, within the “TLPI” code archive available from the web page of Michael Kerrisk here: https://man7.org/tlpi/code/download/tlpi-260105-dist.tar.gz. Note that:
  - the corresponding web page provides two versions of the archive: a “distribution” version and a “book” version. It is important to use the “distribution” version.
  - many excerpts of the code are also shown directly in the slides.
- The lab consists of exploring all the main steps of the talk, including the part about networking (slides 71-75, not discussed in the video). You can however skip the last part about running a container inside a container (slides 77-86), which is a little more complex to run in practice.
- This lab must be done on a Linux machine (physical or virtual) on which you have superuser privileges. There is no major constraint regarding the Linux distribution or kernel version (it should work on any reasonably recent Linux distribution), but it has only been fully tested on Ubuntu 24.04.
- We will mainly use basic/standard Linux utility programs in this lab. The only significant dependency is Busybox. Check whether this package (typically named busybox-static) is already installed on your system and install it otherwise.
The container that we will create will be used to execute a Busybox shell. Before starting the lab, it is important to spend a few minutes understanding what Busybox is and why it was chosen for this lab. The first slides of the talk should be helpful in this regard. You can also take a look at the Busybox web site and its Wikipedia page.
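As a quick sanity check before starting, the sketch below verifies that Busybox is installed and lists a few of the "applets" bundled into its single binary (the package name busybox-static applies to Debian/Ubuntu; other distributions may package it differently):

```shell
# Check whether the busybox binary is available; on Debian/Ubuntu the
# statically linked variant is packaged as "busybox-static":
if command -v busybox >/dev/null 2>&1; then
    # Busybox bundles many standard utilities ("applets") in one binary;
    # list the first few of them:
    busybox --list | head
else
    echo "busybox not found: install it first (e.g. 'sudo apt install busybox-static')"
fi
```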
Running the code
Note that this lab requires superuser privileges on the host Linux machine. (However, it is not necessary to execute all the commands with such privileges. As you will see, only a subset of the commands must be run with sudo.)
First, read the guidelines provided by M. Kerrisk (initial author) in the README file provided in the archive (inside the consh directory). Note also that there are some useful comments included in each of the script files.
Below, we provide some additional details to complement this README file.
Of course, to understand each step, it is also very important to read the slides of the talk in detail and try the recommended observations. Before executing each of the different scripts, we strongly advise you to read its code in depth, as well as all the corresponding slides.
We will store all the files (that will be needed for the execution of the container) in a subfolder of the consh directory, which we will create and name demo (as shown below). We will delete this subfolder at the end of the lab experiments.
Note: If you are doing this lab inside a virtual machine (VM), we recommend storing all the files inside the filesystem of the VM (rather than in a shared folder of the host exported to the VM). This will help avoid various kinds of issues.
Note: Running the second script (consh_setup.sh – see below) on some recent Linux distributions may return an error. Indeed, the creation of the namespaces may fail because, for security reasons, certain recent Linux distributions use default settings that prevent an unprivileged user from creating a user namespace (the error message may typically look like this: "unshare: write failed /proc/self/uid_map: Operation not permitted").
This is, for example, the case of Ubuntu 24.04, which uses the AppArmor security mechanism to enforce this policy (more details are available here).
In the case of Ubuntu, to disable this restriction (system wide), you can use the following command:
sudo sysctl kernel.apparmor_restrict_unprivileged_userns=0
It is also possible to make this change persistent by adding the following line in the /etc/sysctl.conf file (warning – this is not recommended):
kernel.apparmor_restrict_unprivileged_userns = 0
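To check the current state of this setting (and whether unprivileged user namespace creation actually works) before and after the change, you can use something like the sketch below. Note that the sysctl key only exists on distributions that ship this AppArmor feature:

```shell
# Show the current value (1 = unprivileged user namespace creation is
# restricted); the key is absent on kernels without this AppArmor feature:
sysctl kernel.apparmor_restrict_unprivileged_userns 2>/dev/null \
    || echo "key not present on this kernel"

# Functional check: this prints "uid=0(root) ..." when an unprivileged
# user is allowed to create a user namespace, and an error otherwise:
unshare --user --map-root-user id 2>&1 || true
```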
Creating and launching the container
Run the following commands from a shell inside the consh directory:
# Create and enter the folder to be used for the container filesystem:
mkdir demo
cd demo
# Create and populate directory corresponding to the lower OverlayFS layer:
../create_lowerfs.sh lower
# Perform the remaining steps to prepare the container and launch it.
# This includes the following steps:
# - Finish the OverlayFS setup (the mount point will be: demo/ovly/merged)
# - Create and use a cgroup (named "consh_cgrp")
# - Set the hostname to "consh-host"
# - Create a set of new namespaces and launch a (busybox) shell
#   associated with them
# - As explained in the talk/slides, the last step will also trigger
# the execution of the consh_post_setup.sh script within the container
../consh_setup.sh -v -h consh-host -c consh_cgrp lower ovly
If you do not notice any error message, then the container should be running at this stage, and the shell prompt should now be the one of Busybox, which looks like this: / #
You can now launch another shell (using another terminal) on the host. Run a few commands (such as the ones below – feel free to complete the list with other commands) in both shells (i.e., in the host and in the container) and compare the output that you obtain in each case:
hostname
id
mount
ls -la /
ps -ef
ip a
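To set expectations, the sketch below (illustrative only; exact output depends on your system) shows the kind of contrast you should observe between the two shells:

```shell
# On the host:
hostname                      # prints the host's own name
ls /proc | grep -c '^[0-9]'   # number of visible processes: typically large
# In the container, the same commands should instead print the hostname set
# by consh_setup.sh ("consh-host") and a very small process count: the
# busybox shell is PID 1 and almost nothing else is visible, because the
# container has its own PID namespace and its own /proc mount.
```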
We can also try to launch another process within the container. To keep things simple, we will create a second busybox process inside the container (but we could also run another statically-linked program). To do that, we must go through several steps, described below. These commands must be typed in a new terminal, launched in the host.
First, we must find the (real) pid of the busybox process already running inside the container. We can use the following command:
pidof busybox
Then, we can use the nsenter command as follows to launch a new busybox shell instance inside our existing container. (In the command below, replace <CONTAINERPID> with the numeric value obtained in the previous step.)
sudo nsenter -t <CONTAINERPID> -a busybox sh
You should now obtain a prompt from the new busybox shell process.
From this shell, you can type some commands (such as ps -ef and the other ones mentioned above) to check that this new process is indeed within the same container as the first busybox process.
Finally, you can terminate this second busybox shell process by typing exit.
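The reason nsenter -a works here is that a process's namespace memberships are exposed under /proc. The sketch below (runnable on any Linux host) shows how to inspect them, and how you could verify that two processes really share the same namespaces:

```shell
# Each namespace a process belongs to appears as a symlink under
# /proc/<pid>/ns; two processes are in the same namespace of a given type
# exactly when the links resolve to the same "type:[inode]" string.
ls -l /proc/self/ns
readlink /proc/self/ns/mnt    # e.g. "mnt:[4026531841]"
# To confirm the second shell really joined the container, compare
#   sudo readlink /proc/<CONTAINERPID>/ns/mnt
# (run on the host) with the same readlink executed inside each container
# shell: the two container processes should share an inode number that
# differs from the one printed on the host.
```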
Setting up network communication between the host and the container
Once the container is running, set up the network configuration by launching the following script from a shell outside of the container:
Notes:
- The script assumes that there is only a single Busybox process running on the system. You can check with the following command: pidof busybox
- We will be using consh as the netns name for the required bind mount.
./consh_nw_setup.sh $(pidof busybox) consh 10.0.0.1/24 10.0.0.2/24
Inside the container shell, launch the server with the following command:
nc -l -p 50000 -e sh -c 's=; while true; do s=x$s; echo $s; sleep 1; done'
Then, from a shell outside of the container, launch the client application with the following command:
nc 10.0.0.2 50000
Once the connection with the server is established, the client should display the messages sent by the server (strings of ever-increasing length made of the ‘x’ character, one per second).
To go a little further, you can also:
- try to redo the same test but this time using a privileged TCP port number (i.e., below 1024) instead of the previous number (50000).
- try to run an HTTP server (using the httpd command provided by Busybox) inside the container (e.g., on TCP port number 80) and request a file from a curl client running on the host.
Cleaning up
First, in the container shell, use the following command to terminate the shell:
exit
The consh_cleanup.sh script (provided in the archive and used below) has a bug (a line is missing to correctly set up a variable).
Before running this script, you must edit it to insert the following line, just before the line rm -rf $ovly_dir:
ovly_dir=$1
There is also another bug to fix in the same script (consh_cleanup.sh): the line rmdir $uslice/cgroup (i.e., the first occurrence of the rmdir command, not the second one) must be replaced with rmdir $uslice/$cgroup (a $ character is missing before cgroup).
Then, from a shell outside of the container in the (top-level) consh directory, run the following script.
It will delete the demo directory, as well as the cgroup.
./consh_cleanup.sh -c consh_cgrp demo
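To convince yourself that the cleanup worked, you can run a couple of optional checks. The find invocation below is an assumption about where the cgroup was created (somewhere under the /sys/fs/cgroup hierarchy, as on a cgroups-v2 system) and may need adjusting:

```shell
# The demo directory should be gone:
if [ -d demo ]; then echo "demo still present"; else echo "demo removed"; fi
# The cgroup directory should no longer appear anywhere in the hierarchy
# (no output expected; the exact path layout is distribution-dependent):
find /sys/fs/cgroup -maxdepth 4 -type d -name consh_cgrp 2>/dev/null || true
```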
You can also run the following commands (also from the host shell) to clean up the network configuration (this is not done by the original consh_cleanup script).
# Run the following command to list the existing veth interfaces on the host side
# (you should see an entry whose name begins with "consh-"):
ip link show type veth
# Run the following command to store the veth name in a variable:
veth_0=$(ip link show type veth | grep consh- | cut -d " " -f 2 | cut -d "@" -f 1)
# Check that the variable has been set up properly
# (you should see a string starting with "consh-" and ending with "-0"):
echo $veth_0
# Turn off veth interface on the host side:
sudo ip link set $veth_0 down
# Delete the veth device:
sudo ip link delete $veth_0
# Check that the veth pair has been deleted.
# The following command should return an empty output:
ip -c link show type veth
# Remove netns bind mount:
sudo umount /var/run/netns/consh
Finally, if you had to modify the system security settings regarding user namespaces, you can switch back to the initial configuration via the following command:
sudo sysctl kernel.apparmor_restrict_unprivileged_userns=1
Part 2 (optional): Programmatic interface
Now that you have studied the basic container mechanisms via shell commands, you can dig deeper by looking at other tutorials covering the OS programmatic interfaces leveraged within container management tools. We suggest one of the following tutorials:
- “Linux containers in a few lines of code” by Serge Zaitsev (2020): a concise tutorial and implementation in C.
- “Linux containers in 500 lines of code” by Lizzie Dixon (2016): a more detailed tutorial, also based on a C implementation. (Note: based on cgroups version 1.)
- “Gocker: A mini Docker written in Go” by Shuveb Hussain (2020): a Golang-based tutorial.
- “Deep dive into containers” by William Durand (2022): this tutorial/implementation is quite interesting because it has a broad spectrum, encompassing several kinds of tools: a container runtime, a container shim, and a container manager.
Part 3 (optional): Additional topics
To carry on your discovery of container internals, we recommend the following material:
- “Digging into the OCI Image Specification” by Mihail Kirov (2022)
- “Introduction to containers - Part 8: Deep dive into container internals” by Jérôme Petazzoni (date unknown): covers various topics.