The first time I was introduced to Makefiles was while working on a rather large computational chemistry simulation package, written in Fortran 95. From a system architecture perspective, it was not a complex piece of software: it reads a bunch of text files as input, then outputs a deluge of numbers to stdout, all the while occupying 100% of the CPUs it’s allowed to touch.

Almost like a crypto-mining application.

My first thought, looking at the bizarre syntax of Makefiles, was how nice it is that Microsoft Visual Studio manages project build configs through buttons and UIs. That’s how life should be; this “makefile” business is so very ab initio.

As we worked on this application, and others like it (a mixture of C and Python), makefiles began to grow on us. We began to add convenience targets for things such as running local tests, publishing binaries, and even deploying to remote supercomputers.

Years later, I’d learn that what we had built was called “CI/CD”.

Who still uses Fortran and C anyway?

A big part of a makefile’s job is to manage build dependencies, and, more usefully, to know which code hasn’t changed and therefore doesn’t need re-building. A well-set-up makefile can speed up development and testing significantly.
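As a minimal sketch of that change detection (the filenames and flags here are illustrative, not from the original project): make rebuilds a target only when one of its listed prerequisites has a newer timestamp than the target itself.

```makefile
# Rebuild "sim" only when sim.f90 is newer than the existing binary.
sim: sim.f90
	gfortran -O2 -o sim sim.f90

# Running `make sim` twice in a row compiles once; the second run
# prints "make: 'sim' is up to date." and does nothing.
```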

Relevant xkcd from 2007...

Managing compilation dependencies is hardly a concern for most of us in 2021. We live in a world of Docker, Maven, webpack, etc… Even unorthodox uses of makefiles for CI/CD steps have to contend with Terraform, Terragrunt, and the various cloud vendor CLIs…

Everything has its own CLI, many of them created specifically to manage dependencies. Why use Makefiles when we have these newer tools?

Human brains have swap space too

I have often compared my brain to a computer, as a way of explaining my limitations. I can run out of memory, I have a limited clock speed, and some days I just need a reboot to get back to normal.

When each of us meat-based computers has to learn a new tool or CLI, some precious space is allocated to learning its syntax and semantics. Put more elegantly, this is the concept of cognitive load, and we should treat it as a precious resource if we wish to become more effective engineers.

In our applications, we manage system memory and swap usage very carefully. Shouldn’t we treat ourselves with the same care?

What this means practically, is that we should:

  • Learn the tools when needed
  • Write down what we’ve learned and turn it into script snippets
  • Add a bit of documentation for other humans (or our future selves) to remember the lessons along the way
  • And now we can safely unlearn this information and go learn something else

A wild Jenkins appears

Engineers when Continuous Integration became a thing, probably

Without opening a can of worms about whether Jenkins is good or bad, it did address the need to keep and automate script snippets. For a while, it wonderfully orchestrated our diverse ecosystem of common and endangered tools alike.

This did, however, introduce a new problem: the orchestration code moved further away from our development environment. This is a double whammy: it’s harder to do a quick test/validation of code changes, and interactive troubleshooting becomes almost impossible, with log output as the only feedback.

And of course, if the CI system ever breaks, it’s scary not having a way to take over manual control, like pilots can in a plane. (Jenkins is not allowed to have any outages; we can even make a policy for this!)

Dear Santa, these are the things I wish for Christmas

What if we could wish for anything? What would we want in our build-and-orchestration-tooling-simplifier-thing?

  • Neatly keeps a bunch of script snippets, preferably bash scripts
  • Runs locally on a laptop, or any other Linux environment
  • Is easy to add to existing CI solutions, Docker images, etc.
  • Preferably has a CLI that’s easy to invoke
  • Is self-documenting, with docs/comments co-located with the script snippets
  • Is easy to read: not another esoteric mini-language, please!
  • Supports input variables (such as the environment name or version)
  • Has some nice things… like colours

And some things we could do with this:

  • Build something
  • Test something
  • Publish a build
  • Deploy a build
  • Run a status/monitoring script
  • Start/stop a long-running service without dealing with systemd
  • Run a local web server
  • Run some AWS commands that require a JSON file as input
  • Put something into a queue or a bucket
  • Restart a bunch of things
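For instance, the start/stop item above can be sketched as a pair of targets (the service name and file paths are assumptions, not from the original):

```makefile
start:				## # run my_service in the background, record its pid
	nohup ./my_service >> my_service.log 2>&1 & echo $$! > my_service.pid

stop:				## # stop my_service via the recorded pid
	kill "$$(cat my_service.pid)" && rm -f my_service.pid
```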

A special CLI just for you

Before we charge off to build a tailored CLI for our team/product/department/company, please consider the following humble makefile.

It’s not perfect, but it can probably join your project in a single PR and a few hours’ effort. It’ll also probably work on everyone’s machine and CI workers straight away.

$ make help

  make help                        Show this help
  make cmd_w_var MYVAR=<value>     run a command with MYVAR
  make cmd_w_tmp_file              run a command that needs a small config file (looking at you awscli), heredoc is very handy here
  make cmd_w_log                   run a script and capture output with timestamp
  make crontab                     put something into crontab, like a command with logging
  make aussie                      print straya colours

The makefile itself:


# sets makefile to use bash, rather than the default sh
SHELL := bash
# load the login profile (which typically sources .bashrc), fail on error
.SHELLFLAGS := -l -eu -o pipefail -c
# commands run consecutively in the same shell, variables can persist in a target
.ONESHELL:

# never skip any make targets, effectively disables "change detection"
MAKEFLAGS += --always-make
help:				## # Show this help
	@echo "Usage:"
	@sed -ne '/@sed/!s/:.*## / /p' $(MAKEFILE_LIST) \
		| sed 's/^/  make /' \
		| column -s "#" -t

cmd_w_var:			## MYVAR=<value>  # run a command with MYVAR
ifndef MYVAR
	$(error MYVAR is undefined, check `make help` to see usage)
endif
	some_cmd --var=${MYVAR}

cmd_w_tmp_file:			## # run a command that needs a small config file (looking at you awscli), heredoc is very handy here
	export VALUE="bar"
	cat <<- EOF > /tmp/foobar.json
		{
			"foo": ["$${VALUE}"]
		}
	EOF
	some_cmd --file /tmp/foobar.json

cmd_w_log:			## # run a script and capture output with timestamp
	bash my_script.sh 2>&1 \
		| ts '[%Y-%m-%d %H:%M:%S]' \
		>> my_script.log

crontab:				## # put something into crontab, like a command with logging
	crontab -l \
		| grep -v "make cmd_w_log" \
		| { cat; echo '*/10 * * * * bash -c "cd $(PWD) && make cmd_w_log"'; } \
		| crontab -

aussie:				## # print straya colours
	@printf "\033[1;32m green \033[1;33m and gold \033[0m\n"

The CLI docs shown by make help are rendered via the sed regex above, which matches lines of the format <target>: ## <variables> # <description>.
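To see that rendering trick in isolation, here is a standalone version of the same pipeline, fed one sample annotated line (the sample text is illustrative):

```shell
# Drop everything between ":" and "## ", prefix "  make ",
# then align the description column on "#".
printf 'cmd_w_var:\t\t## MYVAR=<value>  # run a command with MYVAR\n' \
  | sed -ne '/@sed/!s/:.*## / /p' \
  | sed 's/^/  make /' \
  | column -s '#' -t
```

The `/@sed/!` guard is what keeps the help target’s own recipe line from showing up in the output.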

It’s dangerous out there, take this!

This is far from a perfect solution, but it is hard to pass up given its simplicity and how easy it makes beginning the journey.

Some parting advice then:

  • DON’T let the makefile get too large
    • Avoid implementing complex logic inside it
    • Consider using the pattern of makefile invoking bash scripts (or other scripts)
  • DO try to turn CI workflow into single make <xxx> steps
    • It’s so much easier to try out or migrate to a new CI this way!
  • Makefile syntax is very similar to bash, but not exactly the same
    • Use $$ to escape $, if you are setting and using a variable in bash (see make cmd_w_tmp_file)
    • Makefiles require tabs instead of spaces, sorry…
    • $(...) and ${...} are synonymous in Makefile.
    • I prefer using $(...) for things specific to make, e.g. $(error ...), $(MAKECMDGOALS)
    • I prefer using ${...} for everything that would still make sense in bash
  • Avoid using sudo, if you can
  • Be careful with secrets.
    • Don’t keep these in plaintext, and definitely not in git
    • Fetch them dynamically from local file / environment variable / remote credentials store
    • Use @ before the command to suppress printing the command, if it contains sensitive credentials
  • This is not the ultimate solution, maybe fully remote CI execution or a custom CLI IS the right solution (but they usually probably aren’t)
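As one last sketch, the secrets advice can look like this in practice (the target name, command, and DEPLOY_TOKEN variable are all hypothetical):

```makefile
deploy:				## # deploy using a token fetched from the environment
	@test -n "$${DEPLOY_TOKEN}" || { echo "DEPLOY_TOKEN is not set"; exit 1; }
	@some_deploy_cmd --token "$${DEPLOY_TOKEN}"
```

The @ prefix stops make from echoing the command, so the token never appears in the build log.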