And the microprocessor?

I want to know exactly how this all works.

Wikipedia claims that an instruction set architecture determines the data, registers, addressing modes, memory, etc.

It also claims that it dictates the opcodes on a microprocessor.

Wikipedia says "Microarchitecture" is the way a given ISA is implemented on the chip itself.

But if the ISA dictates the specifications that the chip implements, then what exactly is the ISA? And how does it differ from the microarchitecture on the chip?

This is unclear to me.

Like, let's say I want to develop for the chip in Assembly. Do I look up the ISA for the microprocessor, or not?

Some experts told me the microarchitecture is the main selling point when developing at a low level for a chip, but that the same ISA can work on different chips?

I'm kind of confused....



Maybe this could be a start, but I guess you are better off asking again in the computer science department of DaniWeb.



My memories of microcode are vague but machine code (what we program in when we write Assembler code) does things like

load a register from memory
perform a simple operation
save the register back to memory

whereas microcode actually concerns itself with things like

put this value on the data bus


put this address on the address bus

In other words, microcode is essentially the interpreter that runs the program that was written in assembler. Each assembler instruction is an abstraction for several more basic operations that take place at the board and circuit level. In the same way that

x = sqrt(a**2 + b**2 + c**2)

gets broken down into many assembler instructions, the assembler instruction

LD    R1,X

gets broken down into several microcode instructions



As Jim said, you should see the CPU as an interpreter of machine instructions. It takes in instructions and executes the required actions. In order to have stability over time, it is most practical to have a standard set of instructions that all CPUs (or a class of CPUs) can be made to understand and respond to. Those standard sets of instructions are called Instruction Set Architectures (ISAs), or usually just "instruction sets". Familiar instruction sets include x86, x86-64, ARM, AVR, and PowerPC, plus many extensions like the SIMD families (SSE, SSE2, SSE3, ...). Many instruction sets are also (mostly backward) compatible with each other, which provides more stability. Most PC CPUs today implement either x86 (32-bit) or x86-64 (64-bit), plus many SIMD extensions. GPUs might support even more SIMD-like extensions (by the way, SIMD is for doing many floating-point operations in parallel, which is very useful for 3D graphics). Most other, less common instruction sets are for embedded devices or very big computers (servers or supercomputers), due to the special needs of those environments.

The instruction set is essentially the language you need for talking to a processor. It is a very simple language, as in, every individual instruction is kind of trivial. It goes a bit like this:

Take the number 2
Take the number 3
Add them together
Give me that number
Now take the number 4
Multiply them together
Keep the result
Multiply again
Give me that number

which would execute roughly the equivalent of this code:

int c = 2 + 3;
int e = 4 * 3 * 3; 

The instruction set itself is just the set of actions you can request the processor to perform, and the parameters (registers) those instructions operate on. Of course, in the end, individual instructions and register identifiers are encoded in binary (opcodes). This is what an assembler does: it takes assembly listings and turns them into binary instruction code for a particular instruction set architecture (ISA), and that final binary code is what ends up in an executable file. An assembly listing for the above example might look something like this:

MOV EAX, 2        ; 'move' the value 2 into register EAX
MOV EBX, 3        ; 'move' the value 3 into register EBX
ADD EAX, EBX      ; add the value of EBX to EAX, and store the result in EAX
MOV [ECX], EAX    ; 'move' the value in EAX to the memory pointed to by ECX
MOV EAX, 4        ; 'move' the value 4 into EAX
IMUL EAX, EBX     ; multiply EAX by EBX, store the result in EAX
IMUL EAX, EBX     ; multiply EAX by EBX, store the result in EAX (again)
MOV [EDX], EAX    ; 'move' the value in EAX to the memory pointed to by EDX

Mapping the above code directly into binary opcodes gives you bits (0s and 1s) that you can just feed to the processor and get the results. And, at that point, the processor is a black-box, i.e., instructions go in, results come out (or show up in memory). That's as deep as any programmer would ever really need to go.

Now, if you do want to go one step deeper down the rabbit hole, you can ask: how does the processor figure out how to interpret and carry out the tasks that those instructions demand? The answer lies in what is called the "microarchitecture". In this realm, it's really anybody's guess: unless you're an engineer working for a company that designs and builds processors, you don't know what's inside the black box, and if you are, chances are you can't talk about it much (NDA/IP). In simpler, older hardware, the microarchitecture was mostly hard-wired (e.g., transistor circuits that decode the instruction bits directly to route the register multiplexer channels to the correct operation modules on the processor), plus some simple programmable logic to drive auxiliary things (system bus, interrupts, etc.). Modern CPUs and GPUs have much fancier things going on inside that black box, especially since CPUs have stopped increasing in clock speed (instructions per second) yet continue to improve, and now have multiple levels of cache (which implies very fancy pre-fetching microarchitectures).

So, long story short, to use an analogy: if you think of the processor as a whole computer, then the mouse and keyboard are like the ISA, while the operating system running the computer is like the microarchitecture. The first is about how you get it to do what you want; the second is about how it actually gets the job done. The first is standardized and widely adopted, by necessity, to provide a stable and familiar environment across the board; the second is open to whoever can come up with a better way to get things done (e.g., Intel vs. AMD).
