When I first started using Docker containers, an obvious question came to my mind:

If a container is essentially a Linux process with isolated resources and OS-level "emulation", can we just pause a container, dump it to disk, move it to another server, and resume it as if nothing happened on the software side?

Surprisingly, it is actually possible! But very few companies use this capability, mostly because there aren't many big use cases for this cool feature.

Let's break this down and understand how it actually works.

Docker experimental features

The Docker CLI combined with the Docker Engine gives us tons of commands that cover almost any use case. However, some of the most interesting commands and features are only available behind the Experimental flag, which tells the Docker Engine that we are not afraid of experimenting a little bit. In terms of usability, experimental features are interesting and cool capabilities that don't yet have solid business use cases or applications, which is why Docker is not going to stabilize them in the near future 😞

To activate experimental features on the Docker Engine, you need to find the main configuration file on your system. Depending on your distribution, it will be one of the following:

# /etc/default/docker
DOCKER_OPTS="--experimental=true"

# /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --experimental=true

The general idea is to add the --experimental=true argument to the Docker daemon service. This will activate all experimental features. Hopefully it won't crash anything ✊
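On recent Docker releases you can also enable experimental mode through the daemon configuration file instead of editing service files. A minimal sketch, assuming the standard /etc/docker/daemon.json location (create the file if it doesn't exist):

```json
{
  "experimental": true
}
```

After restarting the daemon (e.g. systemctl restart docker), you can verify the flag with docker version --format '{{.Server.Experimental}}', which prints true when experimental mode is on.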

The idea of having checkpoints

Starting from Docker 1.13, we have the experimental checkpoint CLI command, which does the following:

  • Pauses the container

  • Dumps its memory to disk at the specified path

  • Saves a configuration file with the checkpoint's name

  • Kills the container

This means we end up with a paused container whose entire state is dumped and saved to the filesystem!

~# docker checkpoint create [Container ID] [Checkpoint Name]

Now that we have the container paused and saved to disk, we can copy it to another server and resume it there. BUT here is the trick: on the target server we need the exact same image, and we have to create a container from that image. We only create the container; we don't start it.

That's it! Now we can just restore the container from our checkpoint and start it, as if it was never paused:

~# docker start --checkpoint [Checkpoint Name] --checkpoint-dir=[DIR] [Container Name]

**NOTE:** to provide this functionality, Docker actually uses CRIU (Checkpoint/Restore In Userspace) under the hood to pause and dump the processes inside the container. You have to install it on your computer/server.
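On Debian/Ubuntu systems, CRIU is available as a distribution package (the package name below is taken from Ubuntu's repositories; your distribution may differ):

```shell
# Install CRIU on Debian/Ubuntu (needs root)
~# apt-get update
~# apt-get install -y criu

# Sanity-check that the binary is available
~# criu --version
```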

Let’s make a Demo!

To demonstrate basic checkpoint usage, I'll use a very simple, plain Ubuntu Docker container, because all we need is a long-running task to checkpoint.

OK, let's pull the base Ubuntu image and run a container with a long-running task like tail -f /dev/null

# You can use anything that actually has "tail" command in it

~# docker pull ubuntu:16.04
~# docker run -d --name checktest ubuntu:16.04 tail -f /dev/null

This will run a container named checktest that just stays alive as a silent background task.

Now we want to create a checkpoint of this container in a specific filesystem directory, which will make it easier to copy or transfer for the test.

~# docker checkpoint create checktest checkpoint1 --checkpoint-dir=/tmp

Boom! Now the directory /tmp/[Container ID]/checkpoints/checkpoint1 contains the paused dump of our checktest container. Let's make a basic tar archive from it and copy it to another server.
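Before archiving, you can confirm the checkpoint was actually created. The checkpoint ls subcommand accepts the same --checkpoint-dir flag we used above (a quick sanity check, assuming the checktest container and /tmp directory from this demo):

```shell
# List checkpoints stored for the container in our custom directory
~# docker checkpoint ls checktest --checkpoint-dir=/tmp
```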

~# cd /tmp/590cf181316a0f4329c4ae97478be97a612cd0ece471bb53ed9b083c80dcd47e
~# tar czf test_checkpoints.tar.gz checkpoints

# Use some other method to transfer :)
~# scp test_checkpoints.tar.gz testuser@some-server.com:/tmp

Now on the remote server we need to create a container with the same name, and start it using our checkpoint extracted from the tarball.

# Extracting checkpoint files
~# cd /tmp && tar xzf test_checkpoints.tar.gz

# Creating a container with the same name, but not running it
~# docker create --name checktest ubuntu:16.04

# Starting the container from the checkpoint
~# docker start --checkpoint checkpoint1 --checkpoint-dir=/tmp/checkpoints checktest

We are basically DONE! The container should be running normally, and the software inside it will keep executing as if it was never stopped, because we literally moved its entire memory from server to server.
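If you want to see with your own eyes that the runtime state survived, a common trick (a sketch, assuming the same experimental setup as above; container name and image are arbitrary) is to run a loop that prints an increasing counter, checkpoint it mid-count, and watch the counter continue instead of restarting from zero:

```shell
# A container that counts up once per second
~# docker run -d --name counter busybox \
     sh -c 'i=0; while true; do echo $i; i=$((i+1)); sleep 1; done'

# Let it run for a while, then checkpoint it (this stops the container)
~# docker checkpoint create counter cp1

# Restore it: the next log lines continue from where the count left off,
# showing that the in-memory state was preserved
~# docker start --checkpoint cp1 counter
~# docker logs --tail 5 counter
```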

Conclusion

In conclusion, we did something that is very cool from a curiosity standpoint, but on the business side there aren't that many use cases for it. A classic microservice architecture maintains its state externally, so there is usually no need to migrate a running container from server to server while preserving its runtime state. BUT of course there are some specific cases where this feature is extremely useful.

Experiment! Experiment! — Recommend! Recommend! 🎉