Evolution of a build system
It was simple in the beginning. Really.
About 10 years ago I started hacking together a cross platform Google Voice client.
I didn’t think too much about my build system at that time — I just followed what I saw others doing and used an IDE to develop, debug and test.
When it was time to automate compilation and packaging, I used shell scripts and leveraged the toolchain that was installed along with the IDE.
This was perfectly adequate because I was the only one working on my app and only used two development machines: my laptop and my desktop.
Every time there was an update or a new version of the IDE or the toolchain, all I had to do was remember to install it on both machines.
When I later added support for more platforms, I had to ensure that both machines had the same version of every toolchain; otherwise I’d hit some very hard-to-explain bugs.
Maintaining all these development environments became increasingly stressful — I had started dreading working on my side project just because managing the toolchains felt like tiptoeing through a minefield.
After suffering this for a while, I created multiple virtual machines, each with the toolchain for at most one or two target platforms.
My editor or IDE would be local to the machine I wanted to code on, but the final build and release would ALWAYS be on the virtual machine that had the right toolchain.
I soon realized that although I had compartmentalized the toolchains, I now had a bunch of VMs for which I needed to allocate memory and storage… not to mention the additional time I spent keeping them updated.
The runtime requirements were also onerous: My desktop, which I thought was pretty beefy with 8GB RAM and 120GB SSD, could not instantiate more than three VMs at once.
Using my laptop to host the VMs was out of the question.
Lacking an easy way to duplicate my build environment to one or more ephemeral AWS instances, I just assumed this was the best I could do.
It worked and that was all that mattered.
… until the day my desktop’s hard drive crashed and I lost all my VMs.
I’m not an enterprise-class user, why would I ever back up my VMs?
After about a week of going through the multiple stages of grief, I finally finished rebuilding all those VMs and setting up backups. Rebuilding them made me realize how brittle my entire build-and-release system had become, even with virtual machines, backups and snapshots.
It didn’t matter that my code was all open source: It was impossible for anyone else to compile it.
I had no simple mechanism to duplicate my build environment for anyone else to take a shot at helping me.
I couldn’t rationally expect any random developer to install the toolchain in exactly the same way that I had in my VMs.
I had painted myself into a corner.
At the start of 2015, I started investigating Docker — primarily to check out what the buzz was all about.
To be completely honest, I didn’t see what the big deal was: chroot environments had been around since the last century (hah!), and at first glance Docker looked like nothing more than a nice wrapper around a chroot.
The egotistical geek in me almost immediately dismissed Docker with the thought “I could do all that with a tiny Perl script”.
Luckily, I am old enough to know not to blindly trust that voice and dug deeper into what Docker was and why people were going googly eyed about it.
I think my first “aha” moment was when I tried compiling Docker. That was when I was first introduced to docker-ception: Docker uses docker to compile docker.
Here’s essentially all you do to get a new docker binary, assuming Docker itself is already installed (sudo apt-get install -y docker.io):

git clone https://github.com/docker/docker
cd docker
make binary
This was in stark contrast to every other project I had tried compiling: there was no 21-step procedure to set up a development environment before I could compile anything; no anxiety over library versions or compatibility; nothing.
Instead, within 15 minutes of starting, I had the new docker binary, compiled successfully at the first attempt.
I was hooked: I had to find out how Docker did this and how I could use it for myself.
Docker is … what exactly?
Docker is a whole lot of things, but to begin with, think of it as a convenient wrapper over a clean execution environment.
Any application depends on libraries and environment variables that it needs at run time to work properly. You can use Docker to collect these dependencies into a “docker image” and run your application in an instance of that image.
From a design and operations point of view, this isn’t a new concept. For example, a lot of applications use virtual machines as execution environments. What Docker brought was significant simplicity to the packaging and deployment of that execution environment.
A build toolchain is an excellent example of an execution environment: its tools are installed at fixed locations, usually identified by an environment variable (e.g. $PATH); its configuration options are stored in files; and so on.
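A toolchain image of this sort can be sketched as a Dockerfile. Everything below is illustrative — the base image, the package list and the /src path are my assumptions, not the actual toolchain from this project:

```dockerfile
# Pin the base image so every build starts from the same environment.
FROM ubuntu:14.04

# Install the toolchain into the image; the package names are placeholders
# for whatever compilers and SDKs the app actually needs.
RUN apt-get update && apt-get install -y \
    build-essential \
    pkg-config \
 && rm -rf /var/lib/apt/lists/*

# Tools sit at fixed locations on $PATH, and configuration is baked into
# the image, so every container sees an identical environment.
WORKDIR /src

# The source tree is expected to be mounted at /src at run time;
# the default command simply builds it.
CMD ["make"]
```

The point is that the entire environment — versions, paths, configuration — is captured in one text file that anyone can rebuild from.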
This is not to say that Docker is the right choice for every application in the world, but for build and release at least, it is a much better alternative.
Hammer, meet nail
Creating the image that could build my app was fairly straightforward. To summarize:
- Identify the dependencies
- Write a Dockerfile
- Build the image and optionally push it to the Docker Hub or some registry
- Instantiate containers to compile the code
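The last three steps above boil down to a handful of docker commands. This is a sketch, not the project’s actual invocation — the image name myapp-builder, the registry path and the mount point are all hypothetical:

```shell
# Build the image from the Dockerfile in the current directory.
docker build -t myapp-builder .

# Optionally push it to the Docker Hub or a private registry.
docker push myname/myapp-builder

# Instantiate a throwaway container (--rm) that mounts the source
# tree at /src and compiles it with the toolchain baked into the image.
docker run --rm -v "$(pwd)":/src myapp-builder make
```

Because the toolchain lives entirely in the image, the docker run line is the same on any host that can pull the image.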
Once I had established that the container image could compile my app correctly (by testing the resulting build), I tried the image out in a few different environments: my laptop, my desktop, a couple of my virtual machines, a freshly spawned AWS instance, a friend’s laptop and so on.
Time taken before I could compile on a new host: about 5 minutes. This was significantly better than the VMs I had been using in terms of disk and memory requirements, spin-up/spin-down time and maintenance cost.
I had achieved what I set out to do.
… so I deleted my VMs. 🙂
My build system started its life as an IDE, graduated to build scripts, then to virtual machines and finally ditched VMs for containers.