Things I Learned While Building a Computer

When I say “building”, I don’t mean physically. I did not buy parts, have them shipped to my house, assemble them into some kind of housing, and install an operating system. I started with simulated circuitry — NAND gates — and assembled them into more complex parts until I had a freaking computer.

I completed From NAND to Tetris: Building a Modern Computer from First Principles, Part I.

Here are a few things I learned:

  • All of the operations a computer performs are built from three fundamental operations in Boolean algebra: AND, OR and NOT. They are fundamental in the sense that every other Boolean operation can be expressed in terms of them.
  • NAND, or NOT-AND, is not a fundamental operation in Boolean algebra, but it is universal. That is, you can implement AND, OR and NOT entirely from NAND gates (the sketch after this list shows how). Modern computers are built entirely from varying arrangements of NAND gates.
  • Computers are fundamentally machines that execute a sequence of instructions, one at a time. There is no notion of time in the Boolean algebra I’ve described so far, so how does this work? That’s where sequential logic comes in. The other kind of fundamental circuit in a computer is one that can operate not only on its current inputs, but also on inputs it received in the past (see the D flip-flop in the sketch below).
  • The Arithmetic & Logic Unit (ALU) really is the most important part of a computer. It defines everything the computer can, well, compute. The ALU I built is very simple, and can only add, subtract, and perform bitwise AND, OR and NOT operations on its inputs (a toy version is sketched below). Things it cannot do include multiplication, division and bit-shifting.
  • Building an assembler in Swift was incredibly fun and satisfying.
  • Computers are complicated.¹
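
Here’s a minimal sketch of the NAND and sequential-logic points above, written in Swift (the language I later used for the assembler) rather than the HDL the course uses for chips. The function names and the Bool-per-wire modelling are mine; the point is just that NOT, AND and OR fall out of NAND alone, and that a D flip-flop is nothing more than a circuit whose output is whatever its input was on the previous tick.

```swift
// Every "wire" is a Bool. NAND is the only primitive; everything else is derived from it.
func nand(_ a: Bool, _ b: Bool) -> Bool {
    return !(a && b)
}

// NOT a  =  a NAND a
func not(_ a: Bool) -> Bool {
    return nand(a, a)
}

// a AND b  =  NOT (a NAND b)
func and(_ a: Bool, _ b: Bool) -> Bool {
    return not(nand(a, b))
}

// a OR b  =  (NOT a) NAND (NOT b), by De Morgan's law
func or(_ a: Bool, _ b: Bool) -> Bool {
    return nand(not(a), not(b))
}

// XOR, built only from the gates above
func xor(_ a: Bool, _ b: Bool) -> Bool {
    return or(and(a, not(b)), and(not(a), b))
}

// Sequential logic in one struct: a D flip-flop remembers a single bit,
// and on each clock tick outputs the value it was given on the previous tick.
struct DFF {
    private var state = false
    mutating func tick(_ input: Bool) -> Bool {
        let previous = state
        state = input
        return previous
    }
}

// Spot-check the combinational gates against their truth tables.
for a in [false, true] {
    for b in [false, true] {
        print(a, b, "AND:", and(a, b), "OR:", or(a, b), "XOR:", xor(a, b))
    }
}
```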

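And here’s an equally toy version of the ALU described above, limited to the same handful of operations: add, subtract, and bitwise AND, OR and NOT. The enum-based interface is my own simplification rather than the control-bit scheme the real chip uses, and Swift’s wrapping operators (&+, &-) stand in for fixed-width 16-bit hardware arithmetic.

```swift
// A toy 16-bit ALU: the operation is selected with an enum instead of control bits.
enum ALUOp {
    case add, sub, and, or, notX
}

func alu(_ x: Int16, _ y: Int16, _ op: ALUOp) -> Int16 {
    switch op {
    case .add:  return x &+ y   // wrapping add, like fixed-width hardware
    case .sub:  return x &- y
    case .and:  return x & y
    case .or:   return x | y
    case .notX: return ~x       // y is ignored; NOT is unary
    }
}

print(alu(6, 7, .add))    // 13
print(alu(6, 7, .and))    // 6
print(alu(6, 0, .notX))   // -7 (bitwise complement of 6 in two's complement)
```
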
Here are a few things I did not learn:

  • How to build the timed circuitry in a computer. We assumed that a clock exists that defines discrete time, and did not try to build one out of electrical components. We did not manually connect our clocked chips to that clock; we just assumed the existence of a fundamental clocked chip: the D flip-flop.
  • How to implement things that appear to exist in high-level programming environments, such as a stack that can keep track of state while you jump to another function and then return later.
  • How to build a CPU cache, so the processor doesn’t have to go all the way to main memory for the data it’s currently working on.
  • How to build multiprocessing functionality: multiple CPU cores that can operate independently and coordinate with one another. How the hell do you teach independent circuits to coordinate with one another?

  ¹ I used to think I understood this. Now that I’ve had to assemble all the bits myself, I know why.
