Is MIPS Dead? Lawsuit, Bankruptcy, Maintainers Leaving and More (cnx-software.com)
80 points by abawany on April 23, 2020 | 90 comments


MIPS dying is really the end of an era. I think it was the quintessential RISC processor of the early '90s. IIRC, Windows NT was developed on a MIPS machine to ensure that inadvertent x86 dependencies did not sneak into the code.

I remember reading a PCWorld article comparing various processors, around the time the Pentium came out. If my memory serves, the MIPS won the overall performance crown vs. the Pentium, PowerPC, and Alpha AXP.


> IIRC, Windows NT was developed on a MIPS machine to ensure that inadvertent x86 dependencies did not sneak into the code.

Not MIPS. Windows NT was initially developed on a more obscure RISC architecture, Intel i860. It was later ported to MIPS though.


Ah! The fabled Intel i860. I remember the massive amount of hype at its debut. It was touted as "Cray on a chip".

I got to play with an Intel iPSC supercomputer in our Physics lab (at university) that used the i860. We had a Sun box that hosted cross-compilers for it and allowed you to reserve a specific number of i860 processors. The number had to be a power of 2. Fun times!


And ported to DEC Alpha too! It's weirdly one of my favorite ports of NT, just because of how little use I believe it ever actually got.


The Alpha port made it through to early release candidates of Windows 2000, and somebody was using it, because there were PuTTY builds for it.


I’m pretty sure the PowerPC port of Windows NT was the least used. It was the first to be discontinued, at least.


I've seen live Alpha installations of Windows NT driving RIPs at large-format printing shops in the '90s. It was also not unheard of at VMS shops.


This reads like a Blade Runner quote.


Those C-beams.


Even Microsoft made Jazz (https://en.wikipedia.org/wiki/Jazz_(computer)) to develop NT.


Maybe its days are numbered, but MIPS still has a significant presence in low-to-medium-end routers (MediaTek's MT72xx, Atheros AR7xxx, and so on).


MIPS still has a very large presence in the education sector as well: MIPS asm is very easy to learn because of how consistent its syntax is, and there is a plethora of open source debuggers.

Learning MIPS was what originally got me interested in ASM programming since we had a class that was focused on MIPS code and another class that had us build a digital MIPS processor from scratch. The combination of these two classes really sold me on the magic of super low-level programming.


For those who want to take a look, "Computer Organization and Design" by Hennessy and Patterson is a very common textbook on the subject (at least here in Italy). I found reading it to be a very, very instructive experience.

Its version of 32-bit MIPS is so simple that its whole instruction set fits on a two-sided cheat sheet (the famous "green sheet"). The design of the CPU is quite easy too. Given an instruction and its binary representation, it is almost straightforward to see how each bit contributes to the computation (setting the correct ALU operation, retrieving a value from the correct register, etc.).
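
To make that concrete, here's a minimal C sketch of decoding one R-type instruction by hand (the field layout is the standard MIPS32 encoding from the green sheet; the example word is just "add $t0, $t1, $t2"):

  #include <stdio.h>
  #include <stdint.h>

  /* MIPS32 R-type layout:
     opcode[31:26] rs[25:21] rt[20:16] rd[15:11] shamt[10:6] funct[5:0] */
  int main(void) {
      uint32_t insn = 0x012A4020;            /* add $t0, $t1, $t2 */
      unsigned opcode = (insn >> 26) & 0x3F;
      unsigned rs     = (insn >> 21) & 0x1F;
      unsigned rt     = (insn >> 16) & 0x1F;
      unsigned rd     = (insn >> 11) & 0x1F;
      unsigned shamt  = (insn >>  6) & 0x1F;
      unsigned funct  =  insn        & 0x3F;

      /* Prints: opcode=0 rs=9 rt=10 rd=8 shamt=0 funct=0x20,
         i.e. "add" writes $8 ($t0) from $9 ($t1) and $10 ($t2). */
      printf("opcode=%u rs=%u rt=%u rd=%u shamt=%u funct=0x%X\n",
             opcode, rs, rt, rd, shamt, funct);
      return 0;
  }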


Also, Microchip’s PIC32 microcontrollers, which are still around, implement a 32-bit MIPS ISA.


Yep, I also learned MIPS in my CS class. It's pretty nice to program in.


MIPS is typically seen in computer architecture classes because it is simple and regular (at least the original version seen in class). However, for assembly programming, MIPS, like (almost?) all RISC instruction sets, is tedious. Give me any CISC with a generous range of addressing modes, and I'll take it any day over MIPS/RISC.
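
A rough illustration of what those addressing modes buy you (the asm in the comment is hand-written with illustrative register assignments, not compiler output):

  /* a[i] for an int array, on each ISA.

     x86 folds base + scaled index into one memory operand:
         mov  eax, dword ptr [rdi + rsi*4]

     Classic MIPS builds the address explicitly:
         sll  $t0, $a1, 2       # i * 4
         addu $t0, $t0, $a0     # + base of a
         lw   $v0, 0($t0)       # load
  */
  int load_elem(const int *a, long i) {
      return a[i];
  }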

We can draw a parallel between those low-level ISAs and high-level languages: a language like Lisp is lean and simple, so it is taught and presented as good design (and people who went through that education keep it in memory), but when it comes to producing real programs, almost everybody chooses a much less regular language, which is far more practical. (The same could be said for stack-based languages like Forth, which present an extremely simple model to grasp, but that doesn't mean at all that they are simple to program in.)

Or postfix vs infix for mathematical expressions/calculations. Same principle: the one which is based on a very simple model is praised by aesthetes, but almost everybody prefers the other one, which is simpler to use because it is more natural, despite being based on a more complex model.

In fact, the simplicity of the model is not of much interest for the user, it just makes the life of the implementer easier. But for 1 implementer, there are thousands or millions of users, who want ease of use, not ease of implementation.


Traditional addressing modes have been largely abandoned because modern architectures are based on the load-store principle. Simplicity has little to do with it, and referring to that whole shift in design as "complex" vs. "simple" instruction sets is a bit of a misnomer. Besides, well-designed architectures are not exactly lacking in ease-of-use.


This is only "sort of" true.

First, mod-r/m addressing on x86 is fairly traditional and can often save considerable calculation over a "simpler" addressing mode (given the opportunities for add-and-scale operations).

Second, treating x86 machines as load/store architectures passes up the opportunity to achieve improved code density and increased execution bandwidth from "microfusion" - this is when an operation (e.g. "add") is done with a memory operand. Microfusion, for those not familiar with it, allows two "micro-ops" (aka uops) that originate from the same instruction to be "fused" - that is, issued and retired together (even though they are executed separately).

This can occasionally - in code that has already been militantly tuned within an inch of its life - yield speedups, as Skylake and similar can only issue and retire 4 uops per cycle. However, there are 8 execution ports (of which only 4 do traditional 'computation'). Carefully designed code can take advantage of the fact that issue/retire are in the "fused domain" while execute is in the "unfused domain" - so you can sometimes get 4 computations and 1 load per cycle even on a 4-issue machine.
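
A rough sketch of the difference (the asm in the comment is illustrative, not actual compiler output):

  /* Summing an array. Load-store style spends a named register and a
     separate instruction on the load:
         mov  rax, qword ptr [rdi]
         add  rbx, rax
     With a memory operand, the load and the add leave the decoder as a
     single fused uop, so they issue and retire together:
         add  rbx, qword ptr [rdi]
  */
  long sum(const long *p, long n) {
      long s = 0;
      for (long i = 0; i < n; i++)
          s += p[i];    /* compilers commonly emit add reg, [mem] here */
      return s;
  }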

I was trained on MIPS and Alpha, so of course old habits die hard, and it's always tempting to go old school and design everything to act as if the underlying machine is a load-store architecture. However, this (a) isn't necessary on x86 and (b) often won't be faster.

The other blow against load-store is that a modern o-o-o architecture can hoist the load and separate it from the use anyway - and it doesn't have to consume a named register to do it (it will use a physical register, of course, but x86 has way more physical registers than it has names for registers). This of course is a bigger deal for the rather impoverished register count of x86 so it is, in the words of a former Intel colleague on a different topic, a "cure for a self-inflicted injury".


Well, I don't think most people these days are interacting with assembly by hand-writing programs. The real users of ISAs now are compiler authors, and the simpler, more regular instruction sets seem better for them. So is there some other reason (other than inertia) that RISC isn't practical?


I think this is what VLIW exposed. Real world optima are a union of hardware, microarch, compiler, and software optima.

Even a global optimum for one is unlikely to be an efficient solution for all.


> Give me any CISC with a generous range of addressing modes, and I'll take it any day over MIPS/RISC.

Or how about a macro assembler where you can do that with custom pseudo-instructions?


It’s nice until you get to its annoying and fairly irrelevant pipelining model.


The ubiquitous delay slots in MIPS are one instruction-set feature that has aged really badly. RISC-V actively got rid of them in its design because they end up being such a hindrance to, e.g., out-of-order implementations.


It's also a hindrance to in-order implementations that have a different number of branch delay cycles (e.g. different number of pipeline stages or instructions taking a variable number of cycles) than the original implementation.

Branch delay slots were a somewhat clever solution to reduce the complexity of the original implementation, but they baked implementation details into the ISA and became problematic when the implementation details changed.
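
For anyone who hasn't run into one, a minimal sketch of the pitfall (hand-written asm in the comment, illustrative registers):

  /* Intent: if (a != 0) t++;  A naive classic-MIPS translation is
     wrong, because the instruction right after the branch (the delay
     slot) executes whether or not the branch is taken:

         beq   $a0, $zero, skip
         addiu $t0, $t0, 1      # delay slot: ALSO runs when a == 0!
     skip:

     The correct version fills the slot with a nop or something
     branch-independent; assemblers in ".set reorder" mode do that for
     you, and every later pipeline has to keep honoring the slot. */
  void sketch(int a, int *t) {
      if (a != 0)
          (*t)++;
  }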


> they baked implementation details into the ISA and became problematic when the implementation details changed.

Same reason why stuff like VLIW has failed to catch on. These things are so dependent on specific hardware implementation details that one can hardly call them general-purpose ISAs anymore.


Modern GPUs are VLIW machines.


No modern GPUs use VLIW. ATI/AMD switched from VLIW to RISC-SIMD 8-9 years ago, NVIDIA a few years before that. Mobile phone GPUs gave up VLIW for RISC too in the last 5 years or so.


DSPs as well.


And neither can be used as a compilation target for, say, Firefox (or, simpler, nethack), can they?


Analog Devices provides a C/C++ compiler and an RTOS for SHARC, so I wouldn't be surprised if nethack could be compiled for the SHARC VLIW architecture (and its two branch delay slots).


Well, I stand corrected; it looks like there are even fopen and fprintf in there, so that would make a lot of things possible. I wonder about the performance of branch-heavy, non-vector-math computations on these CPUs.


Can confirm, I was taught MIPS assembly in a computer architecture course as recently as 2 years ago.


I have a crappy NAS that is running a MIPS CPU. Basically the only thing I can run on it is Debian, because every other distro seems to have dropped support.


And I was surprised to find out Wyze brand cameras are MIPS, too! Lots of embedded systems likely still using MIPS.


Those are really starting to be eaten by ARM over the past few years.


KOMDIV-64 [1] is still using MIPS. It's improbable that one would encounter these chips outside of Russian weapons tech, though.

[1] https://en.wikipedia.org/wiki/KOMDIV-64



Relevantly, Richard Stallman used to use a Lemote YeeLoong (notebook with a Loongson MIPS CPU) before he switched to a librebooted ThinkPad T400.


ARM and RISC-V have collectively removed any remaining niches (or future prospects) for MIPS.


Except that there aren't really any practical (affordable!) RISC-V based Linux boards/modules.

The Onion Omega2S is based on a MediaTek MT7688 (MIPS32LE) and, since the demise of the CHIP-Pro, is really the only inexpensive surface-mount Linux SoM left.

https://onion.io/store/omega2s/


The Omega2 is pretty nice, but they do need to improve the ecosystem.


That's why I also said "future prospects". It's likely RISC-V will at the very least fill certain niches.


Right, I just hope they do it quickly! I'd love to see a low cost (e.g. RPi priced) RISC-V based SoM that runs Linux.


I can't wait for there to be affordable MMU-equipped RISC-V dev boards.


Wave Computing used to be working on CGRA [1] chips as deep learning accelerators. I got in touch with a few folks there. They apparently failed at that endeavor and pivoted to MIPS.

[1] https://wavecomp.ai/wp-content/uploads/2018/12/WP_CGRA.pdf


It's already been bought+sold a few times.

  (Stanford)
  MIPS Computer Systems
  SGI
  Spun off, IPO
  Imagination
  Tallwood
  Wave
I might have missed one.


I don't think MIPS is going to completely disappear as long as Linux works on it. Someone somewhere in some country will keep making them as long as the licensing situation is more favorable for whatever use than ARM or x86. If the owning company is going bankrupt, that means making MIPS CPUs will be cheaper than ARM or x86.


In China there are lots of new MIPS developments based on the existing MIPS architecture.

The question is not whether someone will use some old MIPS ISA a few years from now; the question is whether anyone will improve the ISA from now on, in the same way x86 and ARM are consistently being improved.

Some companies are still fabbing old Z80 CPUs, but that's not to say Z80 has a bright future.


Ingenic and Loongson both have architecture licenses and so far have kept releasing new chips with their own cores on a regular basis, including (in Loongson's case) some interesting enhancements. Both are also already members of the RISC-V Foundation, though, so it seems likely they will in the long run pivot their instruction sets to that, like others have done before them: Andes, C-Sky, Cortus, Cobham-Gaisler, NVIDIA, and presumably many more all keep supporting old products based on their previous designs while doing new development on RISC-V.

CIP-United still promises to provide enhanced versions of both the architecture and the MIPS Warrior cores for the Chinese market, regardless of what happens to MIPS Technologies. This may seem utterly futile now, but it is also the very thing that the US Committee on Foreign Investment was trying to prevent when it required MIPS to be spun out of Imagination Technologies when the latter was sold to Chinese investors.


Hardly. It means the IP will be sold off to someone else, who might just make it more expensive to milk some profit from the deal while pushing customers to their own platform.


I was under the impression that MIPS was open sourced, in which case the "owner" company should be irrelevant?


Not so; there was a short-lived initiative to work towards some kind of open release, but it came to nothing. The article mentions that.


The writing's been on the wall for MIPS for ages. AFAIK it's not better than ARM in any meaningful way.


The creator of ARM speaks well of RISC-V, considers ARM yesterday's news, and has moved on. Comparing MIPS to ARM on the essentials of architecture, is MIPS more beautiful? ARM, as I understand it, is a mess that has evolved for its niche.


I find the latest architecture versions are all remarkably similar as they have all adapted to the same environment:

The old 32-bit Arm (now called Aarch32) was quite different and only somewhat RISC-like. Arm's Aarch64, however, is mostly derived from MIPS64 with a lot of modernization plus some parts (exception levels) from 32-bit Arm.

MIPSr6 was an attempt at modernizing MIPSr5 by removing all the ugly bits (delay slots!), but the incompatible instruction encoding prevented it from being widely adopted. You cannot buy a single MIPSr6 machine that mainline Linux runs on.

RISC-V's design looked at all RISC architectures (Berkeley RISC, MIPS, SPARC, Power, Arm, ...) for inspiration and took the best parts of each. Leaving out all the historic baggage means it's simpler (the manual is a fraction of the size), but most of the important decisions are the same as in MIPSr6 and Armv8/Aarch64.

One notable difference is the handling of compressed (16-bit) instructions: ARMv8/Aarch64 doesn't have them at all (like RISC-I/RISC-II, ARMv3 and MIPS-V), MIPSr6/microMIPS needs to switch between formats (like ARMv4T through ARMv6) and in RISC-V they are optional but can be freely mixed (somewhat like ARMv7 and nanoMIPS).


It's disappointing that RISC-V designers swallowed myths that resulted in unpleasant ISA details.

For example, the notion that condition codes interfere with OoO execution has been repudiated; Power and x86 both now rename condition registers. Lack of popcount and rotate in the base instruction set are glaring omissions. (That x86 got popcount late, and that the bitmanip extension will have them if it ever gets ratified, are no excuse.) It was silly to make the compare instruction generate a 1 instead of the overwhelmingly more useful ~0.
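
To illustrate the mask point, a minimal C sketch (the variable names are mine):

  #include <stdio.h>
  #include <stdint.h>

  /* With a 0/1 compare result (slt-style), building an all-ones mask
     for a branchless select costs an extra negation; a ~0-producing
     compare would hand you the mask directly. */
  int main(void) {
      int32_t a = 3, b = 7, x = 100, y = 200;
      int32_t mask = -(int32_t)(a < b);         /* 0 or 1, negated to 0 or ~0 */
      int32_t sel  = (x & mask) | (y & ~mask);  /* x if a < b, else y */
      printf("%d\n", sel);                      /* prints 100 */
      return 0;
  }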

We only get a new ISA once in a generation. It is tragic when it is wrong.

It is possible, in principle, that popcount and rotate could be added to the base 16-bit instructions, but I'm not holding my breath.


Well, good times were had!


Licensing?


My experience is that MIPS and ARM licensing are equally bad.

As an anecdote, I have personally switched a MIPS core to an ARM core in an SoC revision specifically because ARM gave us better licensing terms than MIPS. It was a pain because MIPS big endian and ARM big endian are not directly compatible with each other.


Maybe it has changed, or maybe it depends on what level of IP you're licensing. The company I worked at in 2006-2009 had gone with MIPS because they were incorporating that with all sorts of other logic (from third parties plus custom) and the cost for that type of license from Arm would have bankrupted them instantly.


Isn't MIPS open source?


I remember listening to some guy from ImgTec brag about how MIPS was the 4th biggest instruction set in the world, as if MIPS had something like 20% market share rather than the 1% or whatever ridiculous share ImgTec had bought their way into. That was literally at the peak, when ImgTec was on top of the world; they'd just built a new campus and were spending like there was no tomorrow. Then Apple drank their milkshake, and they had to sell everything. Even then, though, it was really bizarre that anyone was still trying to make MIPS a thing.

MIPS has been the walking dead for a couple of decades now. It's really time to let go. Especially since there are actually interesting things happening in the ISA space with RISC-V.


Pardon my ignorance, but... is RISC-V all that much more alive than MIPS? My impression is that MIPS has a past, RISC-V may have a future, but neither has much of a present.


For desktop- or server-class stuff, RISC-V is still shaking out, but there are ongoing attempts to make it happen right now. For everything else, you've got companies like Western Digital that have committed to shipping a billion [2] RISC-V processors for their own devices, and many others. [1]

Just take a look at their membership page for people funding or helping develop the processors, https://riscv.org/members-at-a-glance/

[1] https://www.westerndigital.com/company/innovations/risc-v [2] https://www.extremetech.com/computing/281891-western-digital...


The only hardware actually shipping in meaningful quantities right now is a handful of microcontrollers. The U540 is pretty much a Raspberry Pi level of SoC, but it's only shipped in tiny quantities on expensive dev boards. It's just too new for anything else yet; the higher-performance stuff takes years from the start of design work to shipping silicon.


There are probably more MIPS cores in currently powered-up equipment than all other 32-bit ISAs combined. Another billion will be manufactured in the next month or two.

There's dead and there's dead. MIPS is only dead in the way that internal combustion engines for transportation are dead.


Consolidation, together with buyouts and bankruptcies, made a part of the computing world much more boring.

In the '90s and early 2000s there were lots of CPUs, computer systems, operating systems.


Please help me remove some of my ignorance:

ARM = RISC?

Intel = CISC?

? = MIPS?

What else is out there in large amounts?


MIPS is sort of the original RISC, since one of the two originators of the RISC concept (Hennessy) was a founder of MIPS Computer Systems in 1984. POWER is also RISC, but basically doesn't have any traction beyond IBM itself. SPARC lives on only in radiation-hardened form for the European Space Agency. Various others never even achieved that level of commercial significance. So it's basically "less RISC than it used to be" Arm, and "more RISC than it used to be" x86.

But there is one notable CISC still out there: IBM's mainframe z/Architecture. Might not sell a lot of units, but it's still pretty important commercially.


> SPARC lives on only in radiation-hardened form for the European Space Agency.

Worth noting that POWER similarly lives on in radiation-hardened form for NASA; the RAD750 and its predecessor, the RAD6000, both have a long history of interplanetary use.

MIPS also sees some use here too; for example, the MIPS-based Mongoose-V is what New Horizons uses, and the KOMDIV-32 is ostensibly designed for Russian spacecraft use (but I don't know of any specific examples).


MIPS itself is a RISC, see https://en.wikipedia.org/wiki/MIPS_architecture

As for what else is out there in large numbers, I think a lot of 8-bit Z80 and AVR microcontrollers (like the ATmega328 used in the Arduino Uno) are embedded in larger systems and go unnoticed. More recently I've seen the Xtensa (32-bit RISC) catching on with the popular ESP32 and ESP8266 SoCs.


A number of others you might encounter in your embedded devices: 68k, AVR, Z80. (CISC, RISC, CISC, if you’re curious.)


There’s (sadly) not that much 68k left these days, is there? Do you know of any specific companies still using them? (maybe ones that are hiring? :)


The AST2500 (popular as a BMC chip) has a ColdFire coprocessor; the Talos II and Blackbird (truly open source POWER9 workstations) use it (used?) to bit-bang one of the protocols necessary to start up the CPUs.


I wrote a lot of code for it 12 years ago doing embedded programming. It was already getting less popular back then, so I guess you're right. Haven't seen one in forever.


I think either Airbus or Boeing (or both?) use them in some of their planes (A320 and/or 737)


In terms of hardware: NXP ColdFire.


The Apollo Vampire is a newly designed implementation on FPGAs that compares very favorably in performance with existing 68k (and ColdFire) hardware.


I assume that all modern CPUs are CISC-ish opcodes running in microcode on a RISC-ish core, and it’s been a few decades since those were useful labels.


There's nothing much that's CISC-like about ARM, MIPS or RISC-V. Even many μC architectures are closer to "RISC" than "CISC", though there's at least some room for exceptions there.


> There's nothing much that's CISC-like about ARM,

I struggle to think that ARM can really be called reduced-instruction after NEON and the like.


RISC has never been about a reduced number of instructions, but about their reduced complexity. A SIMD extension built around a load/store architecture is quite compatible with the principles of RISC, despite the fact that such an extension might have very numerous instructions.


Well, I learned something today :) I'd always assumed that RISC meant a reduced number of instructions, but that's clearly not the case. Wikipedia phrases it as:

> The term "reduced" in that phrase was intended to describe the fact that the amount of work any single instruction accomplishes is reduced—at most a single data memory cycle—compared to the "complex instructions" of CISC CPUs that may require dozens of data memory cycles in order to execute a single instruction.[23] In particular, RISC processors typically have separate instructions for I/O and data processing.

So yeah, if SIMD instructions can execute in a single cycle (or maybe even a small number), then it still counts.


Why? NEON operates on registers, so the load-store principle is still fully in effect. And it maps directly to special-purpose hardware.


32-bit Arm has some features that make it less RISC-like than others:

- Predication as a major architectural feature: every instruction can be conditionally executed

- Complex load-store instructions: ldm/stm can operate on a large set of registers in a single instruction, including performing a branch by loading into the instruction pointer

- 16-bit Thumb instruction format (also optionally present in RISC-V and newer MIPS)

64-bit Arm mostly drops all of the above and is basically a traditional RISC implementation.
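
A small sketch of the predication point (hand-written Aarch32 in the comment, assuming standard AAPCS register usage):

  /* c ? 1 : 2 without a branch: on Aarch32 nearly every instruction
     carries a condition field, so this can compile to

         cmp   r0, #0
         movne r0, #1      @ executes only if c != 0
         moveq r0, #2      @ executes only if c == 0
         bx    lr

     instead of a compare-and-jump. Aarch64 drops general predication
     and keeps only a few conditional instructions (csel and friends). */
  int pick(int c) {
      return c ? 1 : 2;
  }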


RISCs still largely operate on the architected instructions in the pipelines.

It's also a thing to fuse smaller operations into macro-ops in many microarchitectures.

All high-performance chips avoid running microcode; it's reserved for, e.g., emulating seldom-used legacy instructions. Microcoded CPUs (where all instructions are implemented with microcode) were an '80s/'90s thing.


They're all RISC.


All hail Rick Belluzzo.


i learnt how CPUs work on spim


nah, you only learnt how to use them ;)



