MIPS dying is really the end of an era. I think it was the quintessential RISC processor of the early 90's. IIRC, Windows NT was developed on a MIPS machine to ensure that inadvertent x86 dependencies did not sneak into the code.
I remember reading a PCWorld article comparing various processors. This was around the time the Pentium came out. If my memory serves, the MIPS won the overall performance crown vs. the Pentium, PowerPC, and Alpha AXP.
Ah! The fabled Intel i860. I remember the massive amount of hype at its debut. It was touted as "Cray on a chip".
I got to play with an Intel iPSC supercomputer in our Physics lab (at university) that used the i860. We had a Sun box that hosted cross-compilers for it and allowed you to reserve a specific number of i860 processors. The number had to be a power of 2. Fun times!
MIPS still has a very large presence in the education sector as well - MIPS asm is very easy to learn because of how consistent its syntax is and because of the plethora of open-source debuggers available.
Learning MIPS was what originally got me interested in ASM programming since we had a class that was focused on MIPS code and another class that had us build a digital MIPS processor from scratch. The combination of these two classes really sold me on the magic of super low-level programming.
For those who want to take a look, "Computer Organization and Design" by Hennessy and Patterson is a very common textbook on the subject (at least here in Italy). I found reading it to be a very, very instructive experience.
Its version of 32-bit MIPS is so simple that the whole instruction set fits on a two-sided cheat sheet (the famous "green sheet"). The design of the CPU is quite simple too. Given an instruction and its binary representation, it is almost straightforward to see how each bit contributes to the computation (setting the correct ALU operation, retrieving a value from the correct register, etc.).
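As a rough sketch of what I mean (a toy example of my own, not something taken from the book), the fixed R-type field layout can be pulled apart in a few lines of C:

```c
#include <stdint.h>
#include <stdio.h>

/* Field layout of a MIPS R-type instruction (the "green sheet" format):
   op[31:26] rs[25:21] rt[20:16] rd[15:11] shamt[10:6] funct[5:0] */
typedef struct {
    unsigned op, rs, rt, rd, shamt, funct;
} rtype;

static rtype decode_rtype(uint32_t insn) {
    rtype r;
    r.op    = (insn >> 26) & 0x3F;
    r.rs    = (insn >> 21) & 0x1F;
    r.rt    = (insn >> 16) & 0x1F;
    r.rd    = (insn >> 11) & 0x1F;
    r.shamt = (insn >>  6) & 0x1F;
    r.funct =  insn        & 0x3F;
    return r;
}

int main(void) {
    /* add $t0, $t1, $t2  ==  0x012A4020
       ($t0 = reg 8, $t1 = reg 9, $t2 = reg 10, funct 0x20 = add) */
    rtype r = decode_rtype(0x012A4020u);
    printf("op=%u rs=%u rt=%u rd=%u shamt=%u funct=0x%x\n",
           r.op, r.rs, r.rt, r.rd, r.shamt, r.funct);
    return 0;
}
```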
MIPS is typically seen in computer architecture class because it is simple and regular (at least the original version seen in class). However, for assembly programming, MIPS, like (almost?) all RISC instruction sets, is tedious. Give me any CISC with a generous range of addressing modes, and I'll take it over MIPS/RISC any day.
We can draw a parallel between those low-level ISAs and high-level languages: a language like Lisp is lean and simple, so it is taught and presented as good design (and people who went through that education keep it in memory), but when it comes to producing real programs, almost everybody chooses a much less regular language, which is way more practical. (The same could be said for stack-based languages like Forth, which present an extremely simple model to grasp, but that doesn't mean at all that they are simple to program in.)
Or postfix vs infix for mathematical expressions/calculations. Same principle: the one which is based on a very simple model is praised by aesthetes, but almost everybody prefers the other one, which is simpler to use because it is more natural, despite being based on a more complex model.
In fact, the simplicity of the model is not of much interest to the user; it just makes the life of the implementer easier. But for one implementer, there are thousands or millions of users, who want ease of use, not ease of implementation.
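To make that concrete with a toy sketch of my own (nothing authoritative): a postfix evaluator fits in a handful of lines of C, precisely because the simple model shifts the burden of ordering the expression onto the user:

```c
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

/* Evaluate a space-separated postfix expression over integers with the
   + - * operators, e.g. "3 4 + 5 *"  ->  35.
   One stack, one loop -- the "simple model" in its entirety. */
static long eval_rpn(const char *s) {
    long stack[64];
    int top = 0;
    while (*s) {
        if (isdigit((unsigned char)*s)) {
            char *end;
            stack[top++] = strtol(s, &end, 10);      /* push a number */
            s = end;
        } else if (*s == '+' || *s == '-' || *s == '*') {
            long b = stack[--top], a = stack[--top]; /* pop two operands */
            stack[top++] = (*s == '+') ? a + b
                         : (*s == '-') ? a - b
                         : a * b;
            s++;
        } else {
            s++;  /* skip whitespace */
        }
    }
    return stack[top - 1];
}

int main(void) {
    /* infix "(3 + 4) * 5" becomes postfix "3 4 + 5 *" */
    printf("%ld\n", eval_rpn("3 4 + 5 *"));  /* prints 35 */
    return 0;
}
```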
Traditional addressing modes have been largely abandoned because modern architectures are based on the load-store principle. Simplicity has little to do with it, and referring to that whole shift in design as "complex" vs. "simple" instruction sets is a bit of a misnomer. Besides, well-designed architectures are not exactly lacking in ease-of-use.
First, mod-r/m addressing on x86 is fairly traditional and can often save considerable calculation over a "simpler" addressing mode (given the opportunities for add-and-scale operations).
Second, treating x86 machines as load/store architectures passes up the opportunity to achieve improved code density and increased execution bandwidth from "microfusion" - this is when an operation (e.g. "add") is done with a memory operand. Microfusion, for those not familiar with it, allows two "micro-ops" (aka uops) that originate from the same instruction to be "fused" - that is, issued and retired together (even though they are executed separately).
This can occasionally - in code that has already been militantly tuned within an inch of its life - yield speedups, as Skylake and similar can only issue and retire 4 uops per cycle. However, there are 8 execution ports (of which only 4 do traditional 'computation'). Carefully designed code can take advantage of the fact that issue/retire are in the "fused domain" while execute is in the "unfused domain" - so you can sometimes get 4 computations and 1 load per cycle even on a 4-issue machine.
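A trivial sketch of my own to illustrate both points (the exact code a compiler emits obviously varies with compiler and flags, so read the comments as "typically" rather than "always"):

```c
#include <stddef.h>

/* Summing an array: on x86-64, compilers will typically emit something like
 *     add eax, dword ptr [rdi + rcx*4]
 * i.e. one instruction that both computes the address (base + index*4, the
 * "add-and-scale" done by the mod-r/m addressing mode) and folds the load
 * into the add. In the fused domain that is a single uop to issue/retire,
 * even though the load and the add execute on separate ports.
 * A strict load/store style would instead need an address add, a load, and
 * an add -- more instructions and more issue bandwidth for the same work.
 * (Check the actual output with objdump or a compiler explorer.) */
int sum(const int *a, size_t n) {
    int s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];   /* load + add, expressible as one x86 instruction */
    return s;
}
```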
I was trained on MIPS and Alpha, so of course old habits die hard, and it's always tempting to go old school and design everything to act as if the underlying machine is a load-store architecture. However, this (a) isn't necessary on x86 and (b) often won't be faster.
The other blow against load-store is that a modern o-o-o architecture can hoist the load and separate it from the use anyway - and it doesn't have to consume a named register to do it (it will use a physical register, of course, but x86 has way more physical registers than it has names for registers). This of course is a bigger deal for the rather impoverished register count of x86 so it is, in the words of a former Intel colleague on a different topic, a "cure for a self-inflicted injury".
Well, I don't think most people these days are interacting with assembly by hand-writing programs. The real users of ISAs now are compiler authors, and simpler, more regular instruction sets seem better for them. So is there some other reason (other than inertia) that RISC isn't practical?
The ubiquitous delay slots in MIPS are one instruction-set feature that has aged really badly. RISC-V deliberately left them out of its design because they end up being such a hindrance to, e.g., out-of-order implementations.
It's also a hindrance to in-order implementations that have a different number of branch delay cycles (e.g. different number of pipeline stages or instructions taking a variable number of cycles) than the original implementation.
Branch delay slots were a somewhat clever solution to reduce the complexity of the original implementation, but they baked implementation details into the ISA and became problematic when the implementation details changed.
> they baked implementation details into the ISA and became problematic when the implementation details changed.
Same reason why stuff like VLIW has failed to catch on. These things are so dependent on specific hardware implementation details that one can hardly call them general-purpose ISAs anymore.
No modern GPUs use VLIW. ATI/AMD switched from VLIW to RISC-SIMD 8-9 years ago, NVIDIA a few years before that. Mobile phone GPUs gave up VLIW for RISC too in the last 5 years or so.
Analog Devices provides a C/C++ compiler and an RTOS for SHARC, so I wouldn't be surprised if nethack could be compiled for the SHARC VLIW architecture (and its two branch delay slots).
Well, I stand corrected; it looks like there's even a fopen and fprintf in there, so that would make a lot of things possible. I wonder about the performance of branch-heavy, non-vector-math computations on these CPUs.
I have a crappy NAS that is running a MIPS CPU. Basically the only thing I can run on it is Debian, because every other distro seems to have dropped support.
Except that there aren't really any practical (affordable!) RISC-V based Linux boards/modules.
The Onion Omega2S is based on a MediaTek MT7688 (MIPS32LE) and since the demise of the CHIP-Pro, is really the only inexpensive surface mount Linux SoM left.
Wave Computing used to be working on a CGRA [1] chip as a deep-learning accelerator. I got in touch with a few folks there. They apparently failed at that endeavor and pivoted to MIPS.
I don't think MIPS is going to completely disappear as long as Linux works on it. Someone somewhere in some country
will keep making them as long as the licensing situation is more favorable for whatever use than ARM or x86. If the owning company is going bankrupt, that means making MIPS CPUs will become cheaper than making ARM or x86 ones.
In China they have lots of new MIPS developments based on existing MIPS architecture.
The question is not whether someone will still be using some old MIPS ISA a few years from now; the question is whether anyone will keep improving the ISA, in the same way x86 and ARM are consistently being improved.
Some companies are still fabbing old Z80 CPUs, but that's not to say Z80 has a bright future.
Ingenic and Loongson both have architecture licenses and so far have kept releasing new chips with their own cores on a regular basis, including (in Loongson's case) some interesting enhancements. Both are also already members of the RISC-V foundation, though, so it seems likely they will in the long run pivot their instruction sets to that, like others have done before them: Andes, C-Sky, Cortus, Cobham-Gaisler, NVIDIA, and presumably many more all keep supporting old products based on their previous designs while doing new development on RISC-V.
CIP-United still promises to provide enhanced versions of both the architecture and the MIPS Warrior cores for the Chinese market, regardless of what happens to MIPS Technologies. This may seem utterly futile now, but it is also the very thing that the US Committee on Foreign Investment was trying to prevent when it required MIPS to be spun out of Imagination Technologies when that got sold to Chinese investors.
Hardly. It means the IP will be sold off to someone else, who might just make it more expensive to milk some profit from the deal while pushing customers to their own platform.
The creator of ARM speaks well of RISC-V, considers ARM yesterday's news, and has moved on. Comparing MIPS to ARM on the essentials of architecture, is MIPS the more beautiful design? ARM, as I understand it, is a mess that has evolved to fit its niche.
I find the latest architecture versions are all remarkably similar as they have all adapted to the same environment:
The old 32-bit Arm (now called Aarch32) was quite different and only somewhat RISC-like. Arm's Aarch64 however is mostly derived from MIPS64 with a lot of modernization plus some parts (exception levels) from 32-bit Arm.
MIPSr6 was an attempt at modernizing MIPSr5 by removing all the ugly bits (delay slots!), but the incompatible instruction encoding prevented it from being widely adopted. You cannot buy a single MIPSr6 machine that mainline Linux runs on.
RISC-V's design looked at all the RISC architectures (Berkeley RISC, MIPS, SPARC, Power, Arm, ...) for inspiration and took the best parts of each. Leaving out all the historic baggage means it's simpler (the manual is a fraction of the size), but most of the important decisions are the same as in MIPSr6 and Armv8/Aarch64.
One notable difference is the handling of compressed (16-bit) instructions: ARMv8/Aarch64 doesn't have them at all (like RISC-I/RISC-II, ARMv3 and MIPS-V), MIPSr6/microMIPS needs to switch between formats (like ARMv4T through ARMv6) and in RISC-V they are optional but can be freely mixed (somewhat like ARMv7 and nanoMIPS).
It's disappointing that RISC-V designers swallowed myths that resulted in unpleasant ISA details.
For example, the notion that condition codes interfere with OoO execution has been repudiated; Power and x86 both now rename condition registers. Lack of popcount and rotate in the base instruction set are glaring omissions. (That x86 got popcount late, and that the bitmanip extension will have them if it ever gets ratified, are no excuse.) It was silly to make the compare instruction generate a 1 instead of the overwhelmingly more useful ~0.
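To spell out that last point with a toy C sketch of my own (nothing from any spec): a comparison result is almost always wanted as a mask, and a 0/1 result needs an extra negation before it can be used that way.

```c
#include <stdint.h>
#include <stdio.h>

/* Branchless select: pick x if a < b, else y.
 * With a compare that yields ~0 (all ones) on true, the result is a mask
 * you can AND/OR with directly. With a compare that yields 1 (as in C, and
 * as in RISC-V's slt), you first have to negate it to get the mask. */
static uint32_t select_lt(uint32_t a, uint32_t b, uint32_t x, uint32_t y) {
    uint32_t lt   = (a < b);        /* 0 or 1, like slt */
    uint32_t mask = (uint32_t)-lt;  /* extra step: 0 or 0xFFFFFFFF */
    return (x & mask) | (y & ~mask);
}

int main(void) {
    printf("%u\n", select_lt(2, 5, 111, 222)); /* 2 < 5, prints 111 */
    printf("%u\n", select_lt(9, 5, 111, 222)); /* 9 >= 5, prints 222 */
    return 0;
}
```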
We only get a new ISA once in a generation. It is tragic when it is wrong.
It is possible, in principle, that popcount and rotate could be added to the base 16-bit instructions, but I'm not holding my breath.
My experience is that MIPS and ARM licensing are equally bad.
As an anecdote, I have personally switched a MIPS core to an ARM core in an SoC revision specifically because ARM gave us better licensing terms than MIPS. It was a pain because MIPS big endian and ARM big endian are not directly compatible with each other.
Maybe it has changed, or maybe it depends on what level of IP you're licensing. The company I worked at in 2006-2009 had gone with MIPS because they were incorporating that with all sorts of other logic (from third parties plus custom) and the cost for that type of license from Arm would have bankrupted them instantly.
I remember listening to some guy from ImgTec brag about how MIPS is the 4th biggest instruction set in the world, as if MIPS had like 20% market share rather than the 1% or whatever ridiculous market share ImgTec had bought their way into. That was literally at the peak when ImgTec were on top of the world; they'd just built a new campus and were spending like there was no tomorrow. Then Apple drank their milkshake, and they had to sell everything. Even then, though, it was really bizarre that anyone was still trying to make MIPS a thing.
MIPS has been the walking dead for a couple of decades now. It's really time to let go. Especially since there are actually interesting things happening in the ISA space with RISC-V.
Pardon my ignorance, but... is RISC-V all that much more alive than MIPS? My impression is that MIPS has a past, RISC-V may have a future, but neither has much of a present.
For desktop or server class stuff, RISC-V is still shaking out but there are attempts to make it happen on-going right now. For everything else you've got companies like Western Digital that have committed to shipping a billion[2] RISC-V processors for their own devices, and many others. [1]
The only hardware actually currently shipping in meaningful quantities are a handful of microcontrollers. The U540 is pretty much a Raspberry Pi level of SoC but it's only shipped in tiny quantities on expensive dev boards. It's just too new for anything else yet, the higher performance stuff takes years from the start of design work to shipping silicon.
There are probably more MIPS cores in currently powered-up equipment than all other 32-bit ISAs combined. Another billion will be manufactured in the next month or two.
There's dead and there's dead. MIPS is only dead in the way that internal combustion engines for transportation are dead.
MIPS is sort of the original RISC, since one of the two originators of the RISC concept (Hennessy) was a founder of MIPS Computer Systems in 1984. POWER is also RISC, but basically doesn't have any traction beyond IBM itself. SPARC lives on only in radiation-hardened form for the European space agency. Various others never even achieved that level of commercial significance. So it's basically "less RISC than it used to be" Arm, and "more RISC than it used to be" x86.
But there is one notable CISC still out there: IBM's mainframe z/Architecture. Might not sell a lot of units, but it's still pretty important commercially.
> SPARC lives on only in radiation-hardened form for the European space agency.
Worth noting that POWER similarly lives on in radiation-hardened form for NASA; the RAD750 (and its predecessor, the RAD6000) both have a long history of interplanetary use.
MIPS also sees some use here, too; for example, the MIPS-based Mongoose-V is what New Horizons uses, and the KOMDIV-32 ostensibly is designed for Russian spacecraft use (but I don't know of any specific examples).
As for what else is out there in large numbers, I think a lot of 8-bit Z80 and AVR microcontrollers (like the ATmega328 used in the Arduino Uno) are embedded in larger systems and go unnoticed. More recently I've seen the Xtensa (32-bit RISC) catching on with the popular ESP32 and ESP8266 SoCs.
The AST2500 (popular as a BMC chip) has a ColdFire coprocessor; the TALOS2 and Blackbird (truly open-source POWER9 workstations) use it (used it?) to bit-bang one of the protocols necessary to start up the CPUs.
I wrote a lot of code for it 12 years ago in embedded programming. It was already getting less popular back then so I guess you're right. Haven't seen one in forever.
The Apollo Vampire is a newly designed implementation on FPGAs that compares very favorably in performance with existing 68k (and ColdFire) hardware.
I assume that all modern CPUs are CISC-ish opcodes running in microcode on a RISC-ish core, and it’s been a few decades since those were useful labels.
There's nothing much that's CISC-like about ARM, MIPS or RISC-V. Even many μC architectures are closer to "RISC" than "CISC", though there's at least some room for exceptions there.
RISC has never been about a reduced number of instructions, but about reduced instruction complexity. A SIMD extension built around a load/store architecture is quite compatible with the principles of RISC, despite the fact that such an extension might have very numerous instructions.
Well, I learned something today :) I'd always assumed that RISC meant a reduced number of instructions, but that's clearly not the case. Wikipedia phrases it as,
> The term "reduced" in that phrase was intended to describe the fact that the amount of work any single instruction accomplishes is reduced—at most a single data memory cycle—compared to the "complex instructions" of CISC CPUs that may require dozens of data memory cycles in order to execute a single instruction.[23] In particular, RISC processors typically have separate instructions for I/O and data processing.
So yeah, as long as a SIMD instruction needs at most a single data memory access (even if it takes a few cycles to execute), it still counts.
32-bit Arm has some features that make it less RISC-like than others:
- Predication as a major architectural feature -- every instruction can be conditionally executed
- Complex load-store instructions: ldm/stm can operate on a large set of registers in a single instruction, including performing a branch by loading into the instruction pointer
- 16-bit Thumb instruction format (also optionally present in RISC-V and newer MIPS)
64-bit Arm mostly drops all of the above and is basically a traditional RISC implementation.
RISCs still largely operate on the architected instructions in the pipelines.
Fusing smaller operations into macro-ops is also a thing in many microarchitectures.
All high-performance chips avoid running microcode; it's reserved for e.g. emulating seldom-used legacy instructions. Microcoded CPUs (where all instructions are implemented with microcode) were an '80s/'90s thing.