Docker

Follow logs on multiple containers simultaneously

Often I’m debugging multiple services at once, or one service with multiple instances.

docker ps -aqf "name=my-service" | parallel -I% --tag --line-buffer docker logs -f --since=1m %

Transfer images between machines without using a registry

Docker’s registry is convenient, but sometimes you just wish docker images were regular files that you could shuttle around on your own.

Using docker save and docker load, you can indeed treat a docker image as a regular file or byte stream.

docker save the-image | ssh user@remote-host 'docker load'
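Since the stream is just bytes (a tarball, in fact), you don’t need a live connection either; it can sit in an ordinary file and travel however you like. A sketch, with my-image as a placeholder name:

```shell
# docker save emits a tarball on stdout; compress it into a file...
docker save my-image | gzip > my-image.tar.gz
# ...move the file by USB stick, rsync, object storage, etc., then:
gunzip -c my-image.tar.gz | docker load
```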

Recover log space without shutting down containers

sudo sh -c 'truncate -s 0 /var/lib/docker/containers/*/*-json.log'

This is probably about as safe as it looks, i.e. not safe. While I haven’t personally had any issues doing this in dev environments, I’d think twice before trying it in prod.
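The reason to truncate rather than delete is worth seeing on an ordinary file. Truncation empties the file in place, so a process that already has it open keeps writing to the same inode and the disk space is reclaimed immediately; deleting it would leave dockerd writing to an unlinked file that still occupies space. A stand-in demonstration:

```shell
# Simulate a container's json log with a regular file.
printf 'old log lines\n' > demo.log
inode_before=$(ls -i demo.log | awk '{print $1}')

# Empty it in place, as in the command above.
truncate -s 0 demo.log
inode_after=$(ls -i demo.log | awk '{print $1}')

# Same inode, zero bytes: any writer holding this file open
# continues logging to it as if nothing happened.
[ "$inode_before" = "$inode_after" ] && echo "same inode, now $(wc -c < demo.log) bytes"
rm demo.log
```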

Long term, your best bet is to configure a maximum log size and retention when you start your containers.
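For example, with Docker’s default json-file logging driver you can cap retention daemon-wide in /etc/docker/daemon.json (the sizes here are arbitrary):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

The same limits can be set per container at startup with docker run --log-opt max-size=10m --log-opt max-file=3.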

Un-docker your docker

docker run -ti \
    --privileged \
    --net=host --pid=host --ipc=host \
    --volume /:/host \
    busybox \
    chroot /host

This drops you into a docker container that has the same networking as your host, the same process space as your host, the same filesystem as your host, and the same permissions as your host. The docker container IS your machine; you’ve essentially just opened a new shell session in a roundabout way.

Even the busybox image itself disappears because it gets chroot’d away.

I think it’s illustrative of how simple the Docker abstraction is: We often mentally model docker containers as virtual machines, but they’re not. They’re really just a set of namespaces that jail a process so that it can’t see the rest of your system – unless you disable all of those namespaces, as we’ve done here.
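You can see those namespaces directly (on Linux): every process’s namespace memberships are exposed as files under /proc, and being “in a container” just means these point at different namespace objects than the rest of the system’s processes.

```shell
# Each entry is a handle to one of this process's namespaces
# (mnt, pid, net, ipc, uts, user, ...). Two processes share a
# namespace exactly when their links point to the same object.
ls -l /proc/self/ns
```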