Replacing an IDE With Makefiles

2012-09-05

Developers have been led down a path by graphical tools. The tools in question are the so-called IDEs (integrated development environments). They are a nice idea in theory: combine your editor, compiler and debugger into one graphical program so that the developer can edit away, pressing “build” when they feel like it and single-stepping through the running program for debugging. I can’t deny that there is a certain appeal to it, and for getting you started in programming, they can be very useful.

The problem with them is that they make you stupid. A developer in my office wouldn’t use a free compiler I suggested because it didn’t have an IDE to go with it. Instead, he’d rather pay hundreds of pounds per version for a compiler that comes with a graphical interface. Not me; and hopefully not you by the end of this article. The correct way (I think) is to keep each of your tools separate; this is the programmer’s way anyway – modular thinking. The key advantage is that you can then swap out particular components for different jobs without having to relearn everything from scratch.

UNIX is designed using this same philosophy – small, interchangeable, easily connected tools that do one job and do it well. Everything I write is going to be about working on a Linux platform. There are ways of getting a similar environment on Windows (MSYS and Cygwin), but I’ll talk about them another day.

Here then are my chosen tools:

  • Plain text editor: vim. Many people don’t like vim; that’s fine. The beauty of a modular system is that you can use whatever suits you: SciTE and Notepad++ are nice on Windows, and kate, emacs, vim, nedit, gedit, and many more are all good on UNIX. vim is available for both Windows and UNIX, so I get the same editor whatever I’m working on.
  • Build system: make. make is the glue that holds the whole show together; it’s essentially a program that runs lots of different snippets of shell script from a file you supply. That description doesn’t really do it justice, but hopefully you’ll understand more later.
  • Command line: bash. We want some way of starting the compilation or running the program. In an IDE, there’s a button to push. We’ll be using a terminal with a command line running on it.
  • Compiler: whatever. It depends on the job; we’ll keep things simple here and build a native program, so we’ll stick to gcc. There is an advantage to choosing gcc as well, in that it is widely cross-platform: there is a gcc compiler or cross-compiler for almost every system and CPU. The Windows version is called MinGW.

Let’s begin then… open up a terminal (possibly labelled xterm) to get yourself a command line for issuing commands. I’ll mainly be showing command-line examples, and I’ll be writing them like this:

# Comment about what name_of_command will do
$ name_of_command command_line_parameters
Sample output of command.

You would type the bit after the “$”, and you’d expect to see something like the sample output when you do. I’ll write comments prefixed with a “#”; you shouldn’t enter these, nor expect to see them in the output — I’ve added them only as explanation. You don’t type the “$” either; that’s just there to show that this is a command prompt, and “$” is commonly the default prompt character on your system.

We’ll want some tools to begin, so let’s install them. On Debian:

$ sudo apt-get install g++ make manpages-dev

Or use whatever package management system you’re comfortable with to get these programs. I’m assuming that you already have a text editor that you’re comfortable with. If not, now is the time to find one and get comfortable with it.
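
Package names differ between distributions; on a Red Hat-style system, for example, the equivalent is probably something along these lines:

$ sudo yum install gcc-c++ make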

Let’s begin with the classic, “Hello, World”.

# Change to our home directory, and make an empty directory
# to work in
$ cd
$ mkdir -p projects/hello
$ cd projects/hello
# Start the editor of your choice...
$ vim hello.cc

Then enter the source for the C++ program:

#include <iostream>
using namespace std;

int main()
{
    cout << "Hello, World!" << endl;
    return 0;
}

Now we’ll compile that manually.

$ g++ -o hello hello.cc
$ ./hello
Hello, World!

Pretty simple? The C++ compiler, g++, would default to outputting a program called a.out if we didn’t override that with -o hello; and of course the hello.cc parameter simply tells it which source file to compile.
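
You can see that default for yourself by leaving the -o option out:

$ g++ hello.cc
$ ./a.out
Hello, World!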

This is okay, but the compiler command line is already tediously long to type, and it’s close to the shortest compiler command line we could have. Let’s write a Makefile to take care of calling the compiler and remembering the command-line options we want. Put the following in the same directory with the name Makefile (the initial capital is important).

run: hello
    ./hello

hello: hello.cc
    g++ -o hello hello.cc

A Makefile is a list of so-called recipes. The form of a recipe is like this:

target: dependency
    commands_to_make_target_from_dependency

The indent before the command must be a real tab. Spaces won’t work. So, you can see that our hello Makefile tells make that to create the file hello from the file hello.cc it should run g++ -o hello hello.cc. For convenience, I’ve also included a run recipe. This tells make that to create run it needs a file called hello, and it should run ./hello. The clever feature of make is that if, when it comes to execute the run recipe, it can’t find a file called hello, it will look through its other recipes and see if it has been told how to create it. We can check that like this:

$ rm -f hello
$ make run
g++ -o hello hello.cc
./hello
Hello, World!

You can see that make first ran the compiler (it echoes the commands it’s running to the screen), and then ran the program.
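
Run make again and there is nothing left to rebuild, so only the run recipe fires; ask for the hello target directly and make reports that there is nothing to do (the exact wording of the message varies a little between make versions):

$ make run
./hello
Hello, World!
$ make hello
make: 'hello' is up to date.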

Let’s improve the Makefile a little.

run: hello
    ./$<
clean:
    -rm -f hello

hello: hello.cc
    g++ -o $@ $<

The compiler recipe has changed to g++ -o $@ $<. This looks a little like a cat has walked on the keyboard, but really it is making use of make’s variable facility. Variables in make are prefixed with a “$”.

  • The “$@” variable means the name given as the target of this recipe.
  • The “$<” variable means the first dependency in the given list.

In short, it means we don’t have to write hello and hello.cc again – the fact that we named them in the recipe declaration means that make already knows what they are. Why do we want to do this? We’ll see more later, but for now just notice that if we renamed hello.cc to helloworld.cc we would only have one place to change in the Makefile.
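
To make that concrete, here is the compiler recipe again with the expansion written out as comments of our own (make never prints this; it is simply what the recipe amounts to):

hello: hello.cc
    g++ -o $@ $<
# which, for this recipe, expands to:
#   g++ -o hello hello.cc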

We’ve also used this facility to make the run recipe run whatever dependency it is given, in our case hello. Again, we’ve reduced the number of places that we have to state a name.

There is now also a clean recipe, which runs the deletion command we showed earlier. This means we don’t have to worry about mistyping when we’re deleting, and it leaves us with an easy way to go back to a directory with no compiler output in it. The leading “-” on the rm command tells make to carry on rather than stop with an error if the command fails (for example, because the file has already been deleted).

There’s more we can do yet to improve our Makefile; the compile recipe can be changed to:

%: %.cc
    g++ -o $@ $<

Let’s see what happens when we run it:

$ make clean
rm -f hello
$ make run
g++ -o hello hello.cc
./hello
Hello, World!

What’s this? Exactly the same as before. We changed our compiler recipe definition from “hello: hello.cc” to “%: %.cc”. This is a pattern recipe. We’re telling make that it can create a file called XYZ by compiling XYZ.cc, and that this works for any XYZ. We represent that “any” with the % symbol. Now, when we run make, it decides that to create the run target it needs the file hello. There is no hello file, so it checks its recipes and finds that it knows how to create hello if it has hello.cc (“%: %.cc”). It does have hello.cc available, so it runs that pattern recipe, expanding the “%” symbol appropriately. We see now the additional advantage to our use of “$@” and “$<” for specifying the files to the compiler – they take on the values of the target and dependency after the expansion of “%”. So, make turns “%: %.cc” into “hello: hello.cc”, and then uses that to turn “g++ -o $@ $<” into “g++ -o hello hello.cc”, which is exactly what it runs.
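
To see the payoff, suppose – purely hypothetically – that a second program, goodbye.cc, sat in the same directory. Without touching the Makefile at all, the same pattern recipe would build it:

$ make goodbye
g++ -o goodbye goodbye.cc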

There is more to be done yet. Let’s get the names of things written as few times as we can, and let’s split the compile command into more flexible separate compile and link stages.

NAME := hello

run: $(NAME)
    ./$<
clean:
    -rm -f $(NAME)
    -rm -f $(NAME).o

%.o: %.cc
    g++ -c $<
%: %.o
    g++ -o $@ $<

What we’ve done here is make a user variable called NAME to hold the word hello. User variables are just like the $@ and $< we saw earlier, but they are not assigned automatically; we must assign them something ourselves. The first line is that assignment. Then, whenever we need to use hello, we write $(NAME) instead and rely on make to substitute hello in when it is running. In this way we’ve removed all but one of the explicit uses of a filename. You can imagine that should we want to start a new project, it would be very easy to copy this Makefile and change the NAME at the top as appropriate.

We can also use variables to hold settings for the whole file – for example switches to the compiler and linker. By keeping all the settings in one place near the top, we make it easier to customise the build without getting bogged down in the actual command lines.
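
As a sketch of the idea, with flag values of our own choosing (the final Makefile at the end of this article does exactly this):

CCFLAGS := -O2 -g -Wall
LDFLAGS := -static

%.o: %.cc
    g++ $(CCFLAGS) -c $<
%: %.o
    g++ $(LDFLAGS) -o $@ $<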

Our Makefile is still a little limited because it only compiles one source file. When a project gets larger, we want to be able to compile multiple source files into their respective object files, then link them all into one final executable. Let’s alter our Makefile to do that automatically for a directory filled with source files.

TARGET := hello

SOURCE := $(wildcard *.cc)
OBJS   := $(SOURCE:.cc=.o)

run: $(TARGET)
    ./$<
clean:
    -rm -f $(TARGET)
    -rm -f $(OBJS)

$(TARGET): $(OBJS)
    g++ -o $@ $^
%.o: %.cc
    g++ -c $<

There are two key features of make we’ve used here:

  • Function expansions. $(wildcard ...) is a function call: rather than expanding to a value that was set earlier, it performs an operation to generate the value. $(wildcard *.cc) looks in the current directory and expands to the list of all the files that end with “.cc”. That list is then assigned to the user variable SOURCE.
  • Pattern substitution. We know that $(SOURCE) would be replaced with the value of the SOURCE variable. However, make gives an additional syntax which lets us run a pattern substitution on the replacement. That mode is activated by following the variable name with a colon, “:”, inside the parentheses. For example, $(SOURCE:.cc=.o) looks up the variable SOURCE, and then converts the “.cc” at the end of each word in it into “.o”. (See also the patsubst function in the make documentation.) Both expansions are demonstrated in the scratch snippet just after this list.
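
The scratch snippet: a throwaway Makefile of our own (not part of the project – save it as, say, scratch.mk) that simply prints the two values; remember the real tab before each echo line:

SOURCE := $(wildcard *.cc)
OBJS   := $(SOURCE:.cc=.o)

show:
    @echo "SOURCE = $(SOURCE)"
    @echo "OBJS   = $(OBJS)"

Running it in our hello directory, where hello.cc is the only source file, gives:

$ make -f scratch.mk show
SOURCE = hello.cc
OBJS   = hello.o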

Then we use features we’ve already seen, plus one new automatic variable: “$^”, which expands to the whole dependency list rather than just the first entry. We say that to make $(TARGET) requires all the files listed in $(OBJS), so the link line “g++ -o $@ $^” names every object file; and for each of those objects we already have a pattern recipe to make the “.o” from the “.cc” source.

One final change. We’ve got recipes for “run” and “clean”, and make treats these as filenames. Of course these files will never be made, as the recipes we supply for these targets don’t create any files. That means we can run “make run” as many times as we wish – it will always run. However, if there were a file called “run” in the directory, the recipe could stop running: make would consider the target up to date and skip it. To avoid this sort of thing biting us in the future, we can tell make that these aren’t to be treated as filenames, and that it should always run them. While we’re doing that, we’ll add a separate “build” recipe, making it optional to run the final executable.
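
Here is the problem in action (our own illustration, using the previous Makefile; again, the exact message wording may differ slightly):

$ touch run
$ make run
make: 'run' is up to date.
$ rm run

The .PHONY line in the final version below is exactly what prevents this. Here, then, is the finished Makefile: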

TARGET := hello
CCFLAGS := -O2 -g -Wall
LDFLAGS := -static

SOURCE := $(wildcard *.cc)
OBJS   := $(SOURCE:.cc=.o)

default: build
build: $(TARGET)
run: build
    ./$(TARGET)
clean:
    -rm -f $(TARGET)
    -rm -f $(OBJS)

$(TARGET): $(OBJS)
    g++ $(LDFLAGS) -o $@ $^
%.o: %.cc
    g++ $(CCFLAGS) -c $<

.PHONY: default build run clean

We’ve only touched the surface of make here, but this is a good start, and enough of an example that it would work in real projects.

More important, though, is to compare this with what an IDE would have provided us. The working directory would be full of unreadable, non-standard configuration files, with no guarantee that an upgrade of the IDE would leave them operational. These would represent the options set in the GUI. We would have to remember which compile options we changed from the default, with no easy way to see which those were and no guarantee that they don’t change with an upgrade. The options would have been set in page after page of dialog boxes, with generally poor organisation, mixing commonly-used settings with esoteric and complicated settings that we wouldn’t ever have call to change. Our Makefile, on the other hand, records our settings for the build like this:

TARGET := hello
CCFLAGS := -O2 -g -Wall
LDFLAGS := -static
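
One nice consequence – a standard GNU make feature rather than anything special in our Makefile – is that because these are ordinary variables, they can be overridden for a one-off build straight from the command line, without editing anything:

$ make clean
rm -f hello
rm -f hello.o
$ make build CCFLAGS="-O0 -g3"
g++ -O0 -g3 -c hello.cc
g++ -static -o hello hello.o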

The use of function expansions in the Makefile means that should we add more source files to the project, we won’t have to change the Makefile at all. The fact that we can add utility recipes (like “clean”) means that we can tailor our Makefile to do useful things for the specific project we’re working on. The Makefile is simple: when something goes wrong we can easily find out which part was responsible for the broken operation, and can easily make a change to fix it. In an IDE, we have no idea exactly what it did when we said “build”, and no idea which dialog box to visit when something goes wrong.

The price for all this is that it’s slightly harder to get started with make than it is with an IDE; there is no plug-and-play facility with make and a command line. You need to be given some help to get started – the above was your help.

As with almost everything in UNIX, it’s harder to learn but easier to use.
