Max Polun's blog

RISC vs CISC doesn't matter for x86 vs ARM

If you’ve been following tech lately, you’ve probably heard people talking about the competition between x86 chips (mainly from Intel) and ARM chips. Right now they’re used mostly for different things: phones and tablets have ARM chips, while desktops, laptops, and servers have x86 chips. But Intel is trying to get into phones, and ARM vendors want to get into servers. This promises to lead to some exciting competition, and we’re already reaping the power benefits of Intel’s work in desktops and laptops. However, whenever this comes up, people point out that ARM is RISC and x86 is CISC, presenting RISC as if it were a pure advantage and x86 as if it must be crippled because it’s CISC. This distinction doesn’t matter, and hasn’t for a long time now, but let me explain why.

RISC means Reduced Instruction Set Computing. The term really comes out of the 80s, and it describes a certain style of instruction set architecture (ISA) for a computer. The instruction set is all the low-level commands the CPU supports, so it might have things like “load this value from memory” or “add these two numbers together”. The ISA doesn’t say how those commands have to be implemented, though. Despite the name, the thing that really separates RISC from other types of instruction set is not the number of different instructions, but that most instructions do only one thing: they don’t have a pile of different addressing modes. On traditional architectures you’d have instructions that do the same thing but can work on different types of operands. For example, you might be able to add two registers together, or add memory and a register, or add memory to memory. This could become extremely complex, and arguably reached the height of its complexity in the VAX ISA. The VAX was very nice to write assembly code for, but the vast majority of those addressing modes aren’t needed when you use a language like C, where the compiler is responsible for making sure data gets loaded when you need it.
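To make that concrete, here’s a rough sketch of how the same C statement might be handled by a register-memory (CISC-style) ISA versus a load-store (RISC-style) ISA. The instruction sequences in the comments are hand-written illustrations, not the output of any particular compiler:

    /* One C statement that reads memory and does arithmetic. */
    long add_from_memory(long *p, long x) {
        return *p + x;
    }

    /*
     * On a register-memory ISA like x86, the load and the add can be a
     * single instruction, roughly:
     *
     *     add  rax, [rdi]        ; add the value at address rdi into rax
     *
     * On a load-store ISA like 64-bit ARM, they are separate instructions:
     *
     *     ldr  x2, [x0]          ; load the value from memory first
     *     add  x0, x2, x1        ; then add two registers
     *
     * (Both sequences are simplified sketches for illustration.)
     */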

The big argument RISC proponents made was that you could cut out many of these addressing modes and focus on making your basic operations fast, resulting in a faster overall chip. Since most modes in something like the VAX were rarely used, they were usually microcoded and slow, so you had to know which modes were fast anyway, defeating a lot of the point of having so many complex modes. RISC proponents dubbed traditional ISAs CISC (Complex Instruction Set Computing); it’s not a term anyone would use for their own work.

RISC was very successful in the 80s: ARM started then, DEC (the makers of the VAX) made the Alpha, Sun made the SPARC, and even IBM got in on the action with POWER. However, this was mostly in “big” chips (ARM being the big exception). The other story of the 80s was the growth of the micros: tiny chips cheap enough for individuals to buy started coming out in the 70s, and by the 80s there were lots of computers using them. Think of IBM PCs (using x86), Commodore 64s (using the 6510, a variant of the 6502, which was also used in the Apple II and the NES), the original Apple Macintosh, and the Amiga (the Mac and Amiga both used the Motorola 68k family). All of these used what we’d consider CISC chips, in that they had various addressing modes. Nothing crazy like the VAX, but the VAX was always the outlier in ISA complexity. All of these ISAs still exist, but other than x86 most are only used in tiny embedded chips. Of those computer ecosystems, the PC took over the world, and the Mac survived but is still a small portion of the computer market (and uses x86 these days anyway, after using RISC chips for a while).

So with that story set, why don’t RISC and CISC matter anymore? There are two big reasons: out-of-order execution (OoO), and the fact that an ISA doesn’t specify how a chip is implemented. Out-of-order execution was the end result of a lot of what people were trying to do with RISC chips in the 80s: each instruction basically executes asynchronously, and the CPU only waits for the result of an instruction when something else actually needs it. This makes the ISA matter a lot less, because it doesn’t really matter whether you load data and use it in one instruction or two. As a matter of fact, since the late 90s Intel has been internally splitting its CISC instructions into RISC-like micro-ops, which shows how pointless the whole RISC vs CISC distinction is these days.
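As a rough sketch of what that splitting looks like (the micro-op names below are invented for illustration; real cores have their own internal formats), a single x86 read-modify-write instruction becomes a load, an add, and a store that the out-of-order machinery schedules independently:

    /* A read-modify-write of memory, the classic “CISC-looking” operation. */
    void add_to_memory(long *p, long x) {
        *p += x;
    }

    /*
     * What the compiler emits is one x86 instruction, roughly:
     *
     *     add  [rdi], rsi            ; add rsi into the value at address rdi
     *
     * What the core actually schedules is closer to a RISC-style sequence
     * of micro-ops (names made up for illustration):
     *
     *     load   tmp   <- [rdi]       ; read the old value
     *     add    tmp   <- tmp + rsi   ; do the arithmetic
     *     store  [rdi] <- tmp         ; write the result back
     *
     * Each micro-op runs as soon as its inputs are ready, so whether the
     * original program used one instruction or three barely matters.
     */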

That doesn’t mean the ISA doesn’t matter, but the devil is really in the details now. x86 is honestly a bit of a mess these days, and decoding it is more complex than decoding ARM instructions (or really any other extant ISA). ARM also just updated its ISA for 64 bits, and from what I’ve heard it sounds like they did a really good job, basically making a totally generic RISC ISA with no weird stuff that makes it hard to use. x86 was never even close to the complexity of something like the VAX, so it avoided a lot of the VAX’s problems. RISC chips are also not without strange things that hurt them down the line: they often exposed internal details of their early implementations, which later, faster versions then had to emulate. So if you want to compare the x86 and ARM ISAs, that’s actually an important and interesting comparison to make, but the acronyms RISC and CISC don’t actually add anything.
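For what it’s worth, a big piece of that decoding mess is that x86 instructions are variable length (1 to 15 bytes), while 64-bit ARM instructions are always 4 bytes, so an x86 front end can’t even find the next instruction until it has parsed the current one. Here’s a toy sketch in C: the two x86 encodings are real, but the “decoder” only knows about those two opcodes and is nothing like a full decoder:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* 64-bit ARM: every instruction is 4 bytes, so instruction boundaries
       are known before looking at a single byte. */
    static size_t aarch64_insn_length(void) {
        return 4;
    }

    /* x86-64: length depends on the bytes themselves.  This toy recognises
       exactly two opcodes; a real decoder also handles prefixes, ModRM/SIB
       bytes, displacements, and immediates, up to 15 bytes per instruction. */
    static size_t x86_insn_length(const uint8_t *insn) {
        switch (insn[0]) {
        case 0xC3: return 1;  /* ret */
        case 0xB8: return 5;  /* mov eax, imm32 */
        default:   return 0;  /* anything this toy doesn't model */
        }
    }

    int main(void) {
        /* Real x86-64 machine code for: mov eax, 1 ; ret */
        const uint8_t code[] = { 0xB8, 0x01, 0x00, 0x00, 0x00, 0xC3 };
        size_t off = 0;
        while (off < sizeof code) {
            size_t len = x86_insn_length(&code[off]);
            if (len == 0)
                break;  /* bail out on bytes the toy doesn't understand */
            printf("x86 instruction at offset %zu is %zu bytes long\n", off, len);
            off += len; /* the next boundary is only known after decoding this one */
        }
        printf("a 64-bit ARM instruction is always %zu bytes long\n",
               aarch64_insn_length());
        return 0;
    }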