Local Development Environment Configuration
Our local development environment is built on Docker to achieve:
- High decoupling from the host OS
- Different service versions and configurations for each application/project
- The ability to commit the infrastructure together with the application in the same repository
- One-click local setup of projects for everyone on the team
One image is worth a thousand words, so here follows a simplified depiction of our local environment model:
Basically, what we've got here is a set of containers pertaining to different projects. They are interconnected via Docker links so that each project has its own services: for example, both Drupal projects in the image have dedicated MySQL and Apache/PHP containers, perfectly isolated. They can be stopped and started at will on a per-project basis.
To keep container sets isolated we rely on
docker-compose, a simple orchestrator that's easy to configure and run locally.
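As an illustration, a minimal compose file for one project might look like the following sketch (the service names, images, and the .loc alias are hypothetical examples, not part of our stock configuration):

```yaml
# Hypothetical docker-compose.yml for a single project.
# DNSDOCK_ALIAS is the hostname dnsdock will answer for (see below).
version: "2"
services:
  web:
    image: php:7-apache
    links:
      - db
    environment:
      - DNSDOCK_ALIAS=myproject.loc
  db:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=root
```

Starting and stopping the whole set is then a matter of running docker-compose up -d and docker-compose stop inside the project folder.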
To reach each entry point, which in the case of web applications is the HTTP server that exposes the app for that project, we need a resolver able to dynamically map containers to URLs as containers are started or stopped (mind that a container's IP is inherently dynamic, so a static map won't do).
This role is carried out by a containerized service called
dnsdock, which does exactly this.
The last ingredient is a local resolver able to tell the system to proxy the calls for a given TLD (
.loc in our case) to dnsdock. This is a peculiar idiosyncrasy of Debian/Ubuntu, whose resolvers are managed dynamically by the
network-manager service (and it's better to leave it that way to avoid many headaches).
Different host OSes rely on different resolvers.
In particular, the MacOSX scheme is a bit different. Since MacOSX's kernel can't run native Linux containers, we need to run Linux in a virtual machine. For consistency, our choice is Ubuntu Server in VirtualBox, provisioned automagically by
docker-machine, a useful command of the Docker suite to provision and control a remote Docker host as if it were local (remember the
docker command is a CLI client).
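Concretely, docker-machine works by exporting environment variables that point the local docker client at the VM. The output of docker-machine env dinghy typically looks like this (the IP and paths are examples and will differ on your machine):

```shell
# Typical output of `docker-machine env dinghy`; values are examples.
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/dinghy"
export DOCKER_MACHINE_NAME="dinghy"
# Run this command to configure your shell:
# eval $(docker-machine env dinghy)
```

Once these variables are set, every docker command in that shell transparently targets the VM instead of the local host.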
On MacOSX the local host resolver is the one native to MacOSX itself, while the rest of the stack runs in a VM, where the Linux distro acts only as a container provider.
Run dnsdock or dinghy http proxy
If you need to re-run
dinghy-http-proxy for some reason (maybe you have deleted the containers),
you can rely on the sparkdock scripts.
You should already have them on your system, but if they are missing:
curl -sLo /usr/local/bin/run-dnsdock https://raw.githubusercontent.com/sparkfabrik/sparkdock/master/config/ubuntu/bin/run-dnsdock
curl -sLo /usr/local/bin/run-dinghy-proxy https://raw.githubusercontent.com/sparkfabrik/sparkdock/master/config/ubuntu/bin/run-dinghy-proxy
chmod +x /usr/local/bin/run-dnsdock
chmod +x /usr/local/bin/run-dinghy-proxy
curl -sLo /usr/local/bin/run-dinghy-proxy https://raw.githubusercontent.com/sparkfabrik/sparkdock/master/config/macosx/bin/run-dinghy-proxy
chmod +x /usr/local/bin/run-dinghy-proxy
The guide for MacOSX is maintained by Paolo Mainardi
Automatic installation with the sparkdock provisioner (recommended way)
bash <(curl -fsSL https://raw.githubusercontent.com/sparkfabrik/sparkdock/master/bin/install.macosx)
This will provision a VirtualBox VM ready to use and will do most of the configuration required to access containers from outside the VM. The dnsdock container will also be created and started.
Use Docker Toolbox: https://www.docker.com/toolbox. It will install VirtualBox + Docker + Docker Tools + Docker Machine. If you already have VirtualBox, select a custom installation and deselect VirtualBox.
After installing Docker Toolbox, use the terminal to create a new Docker machine using this command:
docker-machine create dinghy -d virtualbox --virtualbox-disk-size 50000 --virtualbox-cpu-count 1 --virtualbox-memory 4096
Adjust the settings according to your system; the command above specifies:
- 50GB disk size
- 1 CPU
- 4GB RAM
At the end of the installation use the
docker-machine ls command, and you should see something like this:
% docker-machine ls
NAME     ACTIVE   DRIVER       STATE     URL                         SWARM
dinghy   *        virtualbox   Running   tcp://192.168.99.100:2376
Now you should add to your shell's init script something that automatically loads the environment variables needed to connect to the dinghy machine. Add these lines to your .bashrc or .zshrc:
eval "$(docker-machine env dinghy)"
export DOCKER_MACHINE_IP=$(docker-machine ip dinghy)
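If you'd rather not have shell startup print errors when docker-machine is missing or the VM is down, a guarded variant of the same two commands works too (a sketch; the machine name dinghy is the one created above):

```shell
# Load the dinghy environment only when docker-machine exists
# and the machine is actually running; otherwise do nothing.
if command -v docker-machine >/dev/null 2>&1 \
   && [ "$(docker-machine status dinghy 2>/dev/null)" = "Running" ]; then
  eval "$(docker-machine env dinghy)"
  export DOCKER_MACHINE_IP="$(docker-machine ip dinghy)"
fi
```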
Install dnsdock with this command, which creates a container that will start automatically whenever the dinghy machine starts:
docker run --restart=always -d -v /var/run/docker.sock:/var/run/docker.sock --name dnsdock -p 172.17.42.1:53:53/udp aacebedo/dnsdock:v1.15.0-amd64
Check networking setup
After either manual or automatic installation, it's recommended to manually check and test the network setup.
Set up routing:
sudo route -n add -net 172.17.0.0 $(docker-machine ip dinghy)
Clear your DNS caches:
sudo killall -HUP mDNSResponder
Or if you are using the sparkfabrik starterkit, just run:
Test that everything is working as expected, by issuing these commands:
% docker run -d -e DNSDOCK_ALIAS=test1.redis.docker.loc --name redis-test redis:alpine
% ping test1.redis.docker.loc
PING test1.redis.docker.loc (172.17.42.37): 56 data bytes
64 bytes from 172.17.42.37: icmp_seq=0 ttl=63 time=0.275 ms
% docker rm -vf redis-test
- create a temporary container that runs a Redis server
- ping the newly created service, using the predefined hostname managed through dnsdock
- remove the temporary container (and service) and clean up the space occupied by the container
Our Linux systems are automatically provisioned using our internal project,
which you can find in our GitLab at
sparkfabrik/projects/ubuntu-provisioner. It is an Ansible-based
provisioner that you can always re-run to upgrade or add new packages.
Upgrade from Docker 19.03 to a recent version
This chapter is intended to be executed on an Ubuntu 20.04.x machine.
If your Ubuntu machine was provisioned before February 2021, it is possible that you are still using an old version of Docker.
In order to upgrade it:
- git clone
At the end of this process you will have the Docker PPA installed and a version higher than 20.10.1.
You must now enable Docker BuildKit as the default builder; to do that, export the following variables in your local environment.
export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1
Just place them in your
.bashrc or .zshrc (it depends on what shell you are using).
Installing Docker engine and command
In order to install Docker, follow the official documentation on Docker's website. Instructions are available for all major distros.
Here is the documentation for Ubuntu users.
IMPORTANT: Make sure you also follow the instructions at the chapter "Create a Docker group".
HINT: On Ubuntu the official
docker-engine package you just installed creates the
docker group for you. You must ensure your user belongs to that group. You can do it with:
sudo usermod -aG docker <username>
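Note that the group change only takes effect on a new login session. A quick way to check your membership is the sketch below; the in_group helper is a hypothetical name introduced here, built only on standard id/tr/grep:

```shell
# in_group NAME LIST: succeed when NAME appears as a word in LIST.
in_group() {
  echo "$2" | tr ' ' '\n' | grep -qx "$1"
}

# id -nG prints the space-separated group names of the current user.
if in_group docker "$(id -nG)"; then
  echo "OK: current user is in the docker group"
else
  echo "Missing: run 'sudo usermod -aG docker \$USER' and log out/in"
fi
```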
Installing docker-compose orchestrator
Docker Compose is a single binary which is not packaged for each individual OS/distro. Installing it is as easy as downloading the latest binary into a shared executable path. Issue these commands as root on Ubuntu, no matter which version of the OS you are running.
IMPORTANT: since you need a complete superuser environment, run the following commands as root.
export COMPOSE_VERSION_NUMBER=1.23.1 && \
curl -L https://github.com/docker/compose/releases/download/$COMPOSE_VERSION_NUMBER/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose && \
chmod +x /usr/local/bin/docker-compose
dnsdock is a service which is automatically instructed by the Docker engine every time a container is started with the DNSDOCK_ALIAS option (an environment variable, as in the examples in this guide).
Since it is a Linux service, we can leverage Docker to install and start it without messing around with packages or such (nice, uh?).
HINT: Make sure your user has been added to the docker group.
Run the container that will host dnsdock service
docker run --restart=always -d -v /var/run/docker.sock:/var/run/docker.sock --name dnsdock -p 172.17.0.1:53:53/udp aacebedo/dnsdock:v1.15.0-amd64
Issuing this command, docker will:
- Download the dnsdock image from the Docker Hub (a central repository for contributed container images)
- Run a new container based on that image
- Assign a local IP to that container (by default the network used by Docker on Ubuntu is
172.17.0.0/16), picking the first address available on the system
- Expose port 53/udp IN the container on port 53/udp of the bridge interface (172.17.0.1, as per the command above)
- Make the service restart when the host OS restarts, so it behaves like any other system service
- Name the container
dnsdock for easier inspection
If you run into trouble due to port 53 being bound on the network, don't worry and read along.
Configuring systemd-resolved (Current - Ubuntu 18.04 LTS +)
NOTE: Ubuntu 18.04 LTS is the current recommended version, for which the following is not necessary. Should you need to configure a legacy version of Ubuntu (14.04 to 16.04 LTS), jump to the correct paragraph.
With the migration to
systemd, Ubuntu 18.04 adopted the
systemd-resolved service. This is a sort of catch-all, multiprotocol name-resolving system that also provides a stub resolver for local DNS traffic.
It basically sits where
dnsmasq used to in former OS versions. Sadly,
resolved configuration is nowhere near as clean and powerful as
dnsmasq's, leading to two minor problems.
- We have to change a system configuration file to make it work. This becomes a problem if you want to upgrade to the next LTS, since the procedure will complain about the changed file (not a big deal, but still).
- The service happens to run into a sort of race condition when new containers are spawned from time to time, so resolution of new containerized services can stop working (we have a simple workaround for this).
The good news is that the procedure is simpler than the
dnsmasq configuration. Just create the new file
/etc/systemd/resolved.conf.d/dnsdock.conf as superuser with the following content:
[Resolve]
DNS=172.17.0.1
Domains=~loc
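For the drop-in to take effect, systemd-resolved has to pick it up; assuming a standard systemd setup, this creates the directory (which may not exist yet on a fresh system) and restarts the service:

```shell
# The conf.d directory may not exist yet on a fresh install.
sudo mkdir -p /etc/systemd/resolved.conf.d
# Restart the stub resolver so it reads the new drop-in.
sudo systemctl restart systemd-resolved
```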
Workaround for resolved/dnsdock lock-up
As mentioned above,
dnsdock may end up in a deadlock. The effect is that your OS won't be able to resolve new containers started with
docker-compose. Restarting the two services will suffice to make things work again:
docker restart dnsdock && sudo service systemd-resolved restart
Too bad this requires providing the superuser password.
You can alias this command to something mnemonic (say
bazooka) so that fixing things will be easy. You won't regret having this alias.
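For example, in your .bashrc or .zshrc (the alias name is just a suggestion):

```shell
# One-shot fix for the resolved/dnsdock lock-up described above.
alias bazooka='docker restart dnsdock && sudo service systemd-resolved restart'
```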
Installing / Configuring dnsmasq (Legacy - Ubuntu 14.04 to 16.04 LTS)
NOTE: This part of the guide works on Ubuntu from LTS version 14.04, up to 16.04+ LTS. The current recommended version is 18.04 LTS, for which the following is not necessary. To properly configure 18.04 LTS, jump to the correct paragraph.
Ubuntu 14.04 to 15.10 natively relies on
dnsmasq, a great and simple DNS proxy which allows for very elastic configuration of the networking stack. Most of all,
dnsmasq plays very well with
resolvconf, a very dull daemon which controls local resolution maps to make sure dynamically created networks never run into conflicts with each other.
Ubuntu 16.04 ships with a default dnsmasq configuration in the
/etc folder, but the service itself is not installed by default. If you are on this OS version or if for some other reason you don't have dnsmasq installed, go forth and install it right away (don't worry, it's completely preconfigured to work transparently on a stock Ubuntu).
sudo apt-get install dnsmasq
While this Ubuntu subsystem works really well, it doesn't play along with dnsdock, since dnsmasq binds port 53/udp on 0.0.0.0.
This can even prevent Docker from exposing 53/udp on the container interface. To avoid this we need to create a proper configuration for dnsmasq so that it ignores the Docker interface and leaves its port 53 alone.
In addition we need to instruct dnsmasq to proxy all queries for the
.loc TLD (which we use to resolve local domains in our projects) to dnsdock, which keeps the zone records for the dynamically created containers.
That way, we avoid messing around with Ubuntu's complex networking stack, and dnsmasq knows whom to ask about .loc domains.
Since we don't want our configuration to be replaced if we upgrade the system or the dnsmasq service, we'll create a partial configuration in
/etc/dnsmasq.d/dnsdock-resolver (it's a new file, so nano or vim it as root).
Put what follows in that file:
server=/loc/172.17.0.1
bind-interfaces
except-interface=docker0
domain-needed
cache-size=0
The server line tells dnsmasq to proxy all queries for
.loc domains to dnsdock, while the except-interface line tells it not to bind to the docker0 interface, which is the one that holds all container IPs.
sudo service dnsmasq restart
And you're done.
HINT: if you followed the previous steps and had dnsmasq already running, you may have to kill and restart your dnsdock container to make it bind to the now available port 53 on 172.17.0.1:
docker kill dnsdock && \
docker run --restart=always -d -v /var/run/docker.sock:/var/run/docker.sock --name dnsdock -p 172.17.0.1:53:53/udp aacebedo/dnsdock:v1.15.0-amd64
HINT: if you have a local stack installed for other reasons and need to resolve a subset of
.loc domains to localhost, you can change the above configuration this way:
address=/loc/127.0.0.1
server=/sparkfabrik.loc/172.17.0.1
except-interface=docker0
domain-needed
cache-size=0
This way, all
.loc domains are resolved to localhost, except
sparkfabrik.loc subdomains, which are proxied to dnsdock.
Test and enjoy
To test that everything is working as expected, we'll try to run a service in a container, exposing it through a local URL.
Do NOT execute this as root; use your own user to run containers.
docker run -d -e DNSDOCK_ALIAS=testing.docker.with.mysql.sparkfabrik.loc -e MYSQL_ROOT_PASSWORD=root --name mysql-test sparkfabrik/docker-mysql && \
ping testing.docker.with.mysql.sparkfabrik.loc
You should see that you can ping the running container smoothly (something along the lines of):
PING testing.docker.with.mysql.sparkfabrik.loc (172.17.0.37): 56 data bytes
64 bytes from 172.17.0.37: icmp_seq=0 ttl=63 time=0.275 ms
If all works, remove the test container (and its volumes) with:
docker rm -vf mysql-test