What is the difference between a compiler and interpreter?

In the last chapter, I mentioned something called a compiler. I had to do that to explain an advantage of statically typed programming languages. In this chapter, we’ll look at what a compiler is, what an interpreter is, and what the difference between the two is.

How does a computer understand code?

The first thing we need to learn is how the computer understands what we programmed. As you’ve seen, code is written in a way that is relatively easy for humans to read. For a computer, it’s a different story.

Our brains can do very complex cognitive tasks, to a level that we aren’t even able to comprehend. On the other hand, a computer’s processor is very basic.

It can only do simple operations: arithmetic, comparing numbers, and reading and writing data.

So you might think: how can computers do amazing things like display graphics, animate an AI in a computer game, make decisions, and so on?

The answer is simple. Every single amazing thing the computer does is built out of simple arithmetic and comparisons. Every complicated piece of software we write ultimately gets translated into those kinds of simple operations that the computer can understand.

Oh, and by the way, the computer only understands 0s and 1s. What do I mean?

The binary system

We can represent any number in the binary system. We call the system that we use every day the decimal system.

Let’s look at a couple of examples of how we can transform a number from the decimal system to the binary system:

132 is 10000100 in binary
14.87 is 1110.11011110101110000101 in binary
8723465 is 100001010001110000001001 in binary
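If you have Python installed, you can check whole-number conversions like these yourself. This is just an illustration; `format` with the `"b"` specifier is a built-in way to get a number’s binary digits:

```python
# Convert whole numbers from decimal to binary using Python's built-in formatting.
print(format(132, "b"))      # -> 10000100
print(format(8723465, "b"))  # -> 100001010001110000001001

# And back again: int() can parse a binary string.
print(int("10000100", 2))    # -> 132
```

(Fractional numbers like 14.87 take a bit more work, since `format` only handles the whole-number part.)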

The processor inside your computer understands this system. It is able to perform those arithmetic and comparison operations with binary numbers.

Okay, so now that we can transform the numbers in our code to binary numbers, what do we do with everything else in our code?

Well, we also translate every other element in our code into some number associated with it (in binary, of course).

In the end, everything stored on our storage devices and in memory, as well as all the data processed by the CPU, is nothing more than big chunks of 0s and 1s.

Oh, and each 0 or 1 is called a bit. If you put 8 of them next to one another, it’s called a byte. Sound familiar?
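As a small sketch of how even text ends up as bits: each character is mapped to a number, and that number fits in one or more bytes. The character “A”, for example, is number 65 in the common ASCII encoding:

```python
# Each character is stored as a number; here's 'A' as one byte (8 bits).
number = ord("A")             # the code for 'A' is 65
print(number)                 # -> 65
print(format(number, "08b"))  # -> 01000001  (one byte: 8 bits)
```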

How do we convert our code to binary?

Our code, also called source code, needs to be fed to some sort of program that transforms it to binary, also called machine code.

We have two families of such programs: compilers and interpreters.


Usually, we use compilers for statically typed languages. These are languages that require you to specify the type of each variable that you use.

As we know, specifying the types has an advantage: it allows us to catch bugs related to variable types automatically. This is done by the compiler when we compile our program.

So, whenever we want to run some code that we’ve written in, say, C++, we need to feed that source code to a C++ compiler. That compiler generates the machine code (a file full of 0s and 1s that our computer understands). Then, we can simply execute that file and it will run our program.

On top of checking for type issues in our code, the compiler does so much more. It looks for a great amount of other potential issues and it also optimizes our code to run faster on our processor.
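Python is not a compiled, statically typed language, but we can still sketch the idea of catching an error before any code runs. Python’s built-in `compile` function parses source code without executing it, so a syntax error gets reported up front, a little like a compiler would report it:

```python
# A broken statement: the '+' has nothing on its right-hand side.
source = 'print("hello" + )'

try:
    compile(source, "<example>", "exec")  # parse the code, but don't run it
except SyntaxError as error:
    print("caught before running anything:", error.msg)
```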


For dynamically typed languages, we usually use interpreters. Remember, such languages do not require us to specify the types of our variables because they deduce them automatically.

However, they deduce the types while running the code, not before. The compiler, by contrast, checked the types in the compilation step, using the types we declared for our variables.

You might think, so what? At least it still tells us when we have an error.

That’s true, but imagine you have a very large piece of software, and the error only happens when you go into a specific menu and do a specific thing. It gets pretty hard to test, right?
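Here’s a minimal Python sketch of that situation (the function name and values are made up for illustration): the type bug in `describe` is invisible until someone actually calls it with the wrong kind of value, which in a big program might only happen on one rarely used screen.

```python
def describe(choice):
    # Bug: this breaks if 'choice' is not a string,
    # but nothing complains until this line actually runs.
    return "You picked: " + choice

print(describe("settings"))  # works fine, so the bug stays hidden

try:
    describe(7)              # the "specific menu" case: only now do we see the error
except TypeError as error:
    print("runtime type error:", error)
```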

Now, an interpreter usually takes your source code and runs it one line at a time, until it either finishes running your code, or there’s an error.
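We can mimic that behaviour with a toy “interpreter” loop in Python, feeding one statement at a time to the built-in `exec` and stopping at the first failure. This is a deliberately simplified sketch, not how real interpreters are implemented:

```python
# A toy interpreter loop: run statements one at a time until one fails.
program = [
    'x = 2 + 3',
    'print("x is", x)',
    'y = x / 0',                 # this line will raise ZeroDivisionError
    'print("never reached")',
]

variables = {}
for line in program:
    try:
        exec(line, variables)
    except Exception as error:
        print("stopped at:", line, "->", type(error).__name__)
        break
```

Notice that the first two lines run and produce output before the error on the third line stops everything; the fourth line never runs at all.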

Interpreters for languages like Python or Ruby usually also offer an “interactive” mode, which lets you type one line of code at a time and runs each line immediately. This is handy when you want to test a line of code quickly, though you wouldn’t write a whole program that way.

Okay, but it seems complicated

It’s not, it’s beautiful! You write some code in your favourite programming language, you go to any computer (running any OS), you download the compiler or interpreter for that language, and you run your code with it. That’s it!

What you need to take away from this is that all the data and code on your computer is just 0s and 1s. That’s why we need a way to transform our source code into that.

And the main difference between a compiler and an interpreter comes down to when our code gets checked: because statically typed languages make us specify types, the compiler can verify our code before it even runs, so it reaches us with fewer bugs!

Next: Chapter 6 – How to actually learn programming?

If you want to start from the beginning of the series, go to chapter 0.

By Radu

Software Engineer and Computational Scientist graduated from the Technical University of Munich. Worked at Shopify, DRW and Ubisoft.
