Docker: my first steps into containerisation
19th November, 2015
I have been intending to experiment with Docker for a very long time and I finally found the time to do so this week.
I do have some basic sysadmin experience, but I am not an ops guy, so some things that may be obvious to others were baffling to me. I don't have much experience with virtualisation or hypervisors either, apart from running Vagrant and a few boxes from my Mac.
The initial install, following the Docker getting started instructions, was pretty smooth; I had to upgrade VirtualBox but that was about it. One thing to note is that if you're running on a Mac, like me, Docker cannot run on the host directly. It has to run on a virtual Linux host, which runs through VirtualBox on your Mac. This means that to use the Docker commands, you need to be connected to the Docker virtual machine (called default by default). There is a convenient bash script installed with Docker, wrapped in the app Docker Quickstart Terminal, which attempts to start the VM (if it isn't already running) and then connect to it. If you try to use Docker commands without first running this app you may see something like the following:
$ docker ps
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
To solve this manually, OS X also comes with Docker Machine. You can see a list of your Docker machines with $ docker-machine ls. You will probably see a machine called default. To set up the environment variables, issue $ docker-machine env default (or whatever machine you want to connect to). This returns some environment details and a command to run, probably eval $(docker-machine env default).
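So, from a regular terminal, the whole dance looks roughly like this (output omitted):
$ docker-machine ls                   # list your Docker machines
$ docker-machine env default          # print the environment variables for "default"
$ eval $(docker-machine env default)  # apply them to the current shell
$ docker ps                           # docker commands now reach the daemon in the VM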
Once you're in, there are a few very useful commands:
$ docker ps // for listing all running docker containers
$ docker search // search docker hub for images
$ docker images // list all locally cached docker images
$ docker run [...] // run a container (lots of useful options to apply)
$ docker network [...] // manage and list networks and containers within them
$ docker stop // stop a container by its name
$ docker exec -ti <container-name> bash // open a bash session in the already running container
$ docker rm // remove a container by its name
I started playing with pre-existing images, namely ubuntu, php and mysql. It's very quick to run a new image, execute a bash session inside it, or, if it has ports mapped to your host, see what's going on in a browser.
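For example, dropping into a bash shell in a stock ubuntu image is a one-liner:
$ docker run -ti ubuntu bash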
The docker run command takes lots of useful options; some I have used include the following (there's a combined example after the list):
-P
automatically assign ports
-p [host-port]:[container-port]
manually expose and assign a port
-e ENVIRONMENT_VARIABLE=<value>
set an environment variable on the container, e.g. -e MYSQL_ROOT_PASSWORD=secret
-t -i
allocate a terminal and keep stdin open, so you can run an interactive bash session in the container
-d
run in detached mode, or in the background. Useful (necessary?) if you want the container to remain running indefinitely.
--name=[...]
manually assign the container name. This must be unique (you can stop/remove unused containers). Useful for when you want to refer to a specific container from a network or another container.
-v [host-path]:[container-path]
mount a local directory into the container at the specified location. Useful when you want to update files on the host (like your application files) and see the results immediately in the container
--link <container-name>:<optional-alias>
link another container to this container by name, and optionally an alias.
For a comprehensive list of options see the docs.
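To put a few of those together, running a MySQL container and a PHP container linked to it might look something like this (the container names, host path and password are just placeholders, and I'm assuming the apache variant of the php image):
$ docker run -d --name=my-db -e MYSQL_ROOT_PASSWORD=secret mysql
$ docker run -d --name=my-web -p 8080:80 --link my-db:mysql -v /path/to/my/app:/var/www/html php:5.6-apache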
There are a few complications; the php image, for example, was missing a few common extensions like mcrypt, intl and pdo_mysql. But these can also be installed (there's a Dockerfile sketch of how below).
A beautiful thing about Docker is the ability to extend these existing images, run your own commands or install other software, then save that in a lightweight, readable, portable script called a Dockerfile. You can then build an image from this file, and publish this on Docker Hub - the repository for public Docker images.
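The official php image, for instance, ships a docker-php-ext-install helper, so a minimal Dockerfile to add the extensions mentioned above might look something like this (the extra apt packages are from memory, so treat it as a starting point rather than gospel):
FROM php:5.6-apache
# mcrypt and intl need their C libraries before they will compile
RUN apt-get update && apt-get install -y libmcrypt-dev libicu-dev
# compile and enable the missing extensions
RUN docker-php-ext-install mcrypt intl pdo_mysql
Building that and publishing it is then just a couple of commands (the image name is hypothetical):
$ docker build -t myusername/php-extended .
$ docker push myusername/php-extended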
Once you get beyond creating a single container and want to do more complex tasks, you may want to create a system composed of multiple containers. You can do this with docker-compose. The instructions are pretty clear: you create a small, readable, portable YML file which contains instructions for running one or more containers and executing other scripts (a bit like the Dockerfile).
An example might look something like:
docker-web:
  image: drupal:8
  container_name: drupal-web
  links:
    - docker-db:mysql
  ports:
    - "80:80"
docker-db:
  image: mysql
  container_name: drupal-db
  expose:
    - "3306"
  environment:
    - MYSQL_USER=username
    - MYSQL_PASSWORD=password
    - MYSQL_DATABASE=database_name
    - MYSQL_ROOT_PASSWORD=secret
And that is as simple as it gets. Just run docker-compose up -d and Docker will download the base images (if they aren't already cached locally), run both containers and link the database container to the web container. The above is very close to what I used to create this site. You can do more and include data volumes, which are intended for sharing data between containers (useful for migration and for backups).
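As a rough sketch (not something I've used for this site yet), the data-volume-container pattern would add something like this to the file:
docker-data:
  image: mysql
  command: /bin/true
  volumes:
    - /var/lib/mysql
and the docker-db container would then pick the volume up with:
  volumes_from:
    - docker-data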
One thing I'm still struggling with, when sharing local storage from the host to the Docker machine, is how to get other users (e.g. www-data or apache) to be able to write to the volume, because they don't have permission by default.
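One workaround I've seen suggested, but haven't properly tried yet, is to change the uid of www-data inside the container to match the uid that owns the shared folder in the boot2docker VM (reportedly 1000), e.g. in the Dockerfile:
# assumes the VirtualBox shared folder is owned by uid 1000 (the docker user in the boot2docker VM)
RUN usermod -u 1000 www-data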
It's been a whale of a journey (excuse the pun) but I can really see many advantages and will be interested to see how Docker matures and is adopted.
Category: docker