
Idk, it looks like most of what this person is complaining about is that they don't see a lot of this in high-volume consumer products. But like, most high-volume consumer products don't have to crank nearly the same amount of torque either.

It's a silly product, but as far as being overengineered, it looks like it's about what I'd expect for those requirements.


Have you ever changed a tyre on a car?

If so, you may have noticed the jack you used didn't have several huge CNC machined aluminium parts, a seven-stage all-metal geartrain, or a 330v power supply and it probably didn't cost you $700. Probably it cost more like $40.

And sure, a consumer kitchen product needs to look presentable and you don't want trapping points for curious little fingers. But even given that, you could deliver a product that worked just as well for just as long at a far lower BOM cost.


Something is overengineered for the actual problem, even if it's necessary to meet the requirements, when the requirements themselves are unnecessary. Imagine speccing a 100m span to cross a small stream. The resulting bridge can reasonably be called overengineered.

You can achieve the same goal (getting juice from diced fruit without cleanup) much easier with different requirements. The post mentions that.


iButtons are in fact a 1-Wire implementation.

I imagine this is mostly an acquihire to bolster the same teams that the Nuvia acquisition did.

The $2B deal with Intel fell through, though they were arguably worth more on paper then. My guess is that they're in a weird place where a fair offer at the moment is less than the investment they've gotten so far.

Note that the $2 billion deal story was always "according to people with knowledge of the matter", and I wonder if it was nothing more than Intel taking a peek at Sifive's technology and books.

https://archive.is/FVMLI#selection-3331.81-3331.129


They almost got bought by Intel, but then even Intel noped out.

https://www.tomshardware.com/news/intel-failed-to-buy-sifive


Ventana's cores were 15-instruction-wide, massively out-of-order cores that on paper compete with the application cores in Apple's M series SoCs.

They're a totally different gate count niche than a Cortex-M equivalent.


Yea, this to me signals that Qualcomm is starting to hedge its ARM bets. Given all the kerfuffle around licensing they have had with ARM already, I suspect that they are signaling to ARM that they have options and so ARM's leverage is a lot lower than it might be. That said, there are also huge switching costs to Qualcomm's customers, so this is not a move it takes lightly. In the mean time, I'm sure those Ventana engineers can also help them improve their ARM designs, too.

My guess is that this was mostly an acquihire. I had heard that Ventana had a lot of people that were laid off from Intel for instance.

I would guess the same. Although Android is adding support for RISC-V so I could potentially see them looking into RISC-V Android phones.

Feels kind of unlikely though. Ventana probably ran out of money.


Maybe Ventana's software engineers can also help Qualcomm fix its BSPs.

  .
  .
  .
I can dream, right?

Fully agree - Ventana's cores are more like Cortex A76 kinds of things, and are on a completely different scale from typical Cortex-M cores.

But switching to RISC-V would shut Qualcomm out from QNX and would limit its Android compatibility. And on the Qualcomm chips that I've seen so far, they're really bought in on both QNX and Android. That's why I think this is probably an acquihire more than a desire to ship Ventana's CPU cores.


> Ventana's cores are more like Cortex A76 kinds of things

More like Neoverse-V3: https://www.ventanamicro.com/technology/risc-v-cpu-ip/

BTW: "Silicon platforms launching in early 2026."

I wonder if this will be delayed due to the acquisition.


Doubtful. To have silicon in early 2026 would mean tapeout happened months ago.

Porting QNX would be very possible.

64-bit generally adds about 20% to the size of executables and programs compared to 32-bit on x86, so it's not that big of a change.

Even the open source drivers without those hacks are massive. On Nvidia, each type of card has its own firmware blob of almost 100MB that runs on the card.

That's 100MB of RISC-V code, believe it or not, despite Nvidia's ARM fixation.

Maybe, it's unclear at the moment.

Apple is known to be one of the kings of putting their suppliers over a barrel. There's a good chance this is mainly a move to negotiate a better deal with TSMC, and even if it's not, the chance that Intel gets a boat load of profit out of it is very small.

And historically, when fabs have been separated from a business, it's always been a way to shed a capital-intensive albatross. In those cases, the fab is normally loaded up with so much debt in the divorce that it was essentially never intended to succeed or keep up, but instead to just barely stay afloat on the already-capitalized investment.


> the chance that Intel gets a boat load of profit out of it is very small

Why? TSMC seems to be doing ok. Apple has worked with RAM and SSD suppliers the same way and they seem to be doing ok too. So does Foxconn. Apple has been known to subsidise leading edge nodes in exchange for priority or temporary exclusivity, and is absolutely ruthless, but it does not prevent its partners from being successful.

> And historically when fabs have been separated from a business, it's always been in a way to shed a capital intensive albatross

That is true. But there are other factors that might be worth considering. First, Apple hates being dependent on a single supplier (which is a single point of failure). Then, hedging risks related to the security situation in Taiwan makes sense. Whether it means subsidising a new TSMC plant in the West or subsidising a new Intel plant might not be that huge a difference. Finally, it might apply some gentle and friendly pressure on TSMC by threatening to shift some production to a competitor.

Whether all this makes sense or not depends on quantitative and qualitative analysis based on data we don’t really have.


A lot of the RISC architectures do something similar (sign extend rather than zero extend) when using 32-bit ops on a 64-bit processor. MIPS and PowerPC come to mind off the top of my head. Being careful about that in the spec basically lets them treat 32-bit mode on a 64-bit processor as just 'mask off the top bits on any memory access'. Some of these processors will even let you use 64-bit ops in 32-bit mode, and really only truncate memory addresses.

So the real question is why x86 zero-extends rather than sign-extends in these cases, and the answer is probably that, in an implementation that treats a 64-bit architectural register as a pair of 32-bit renamed physical registers, zero extension lets you statically return the upper half's physical register to the free pool by marking it as a known zero, rather than having to keep it around to hold the sign-extended result of the op.
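
For illustration, here's a minimal C sketch of how the same 32-bit add lands in a 64-bit register under the two conventions. The helper names are made up; the first case models RV64-style (ADDW) sign extension, the second models the zero extension x86-64 applies to 32-bit register writes.

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  /* RV64-style (e.g. ADDW): the 32-bit result is sign-extended into the full register. */
  static uint64_t add32_sign_extend(uint64_t a, uint64_t b) {
      uint32_t sum = (uint32_t)a + (uint32_t)b;    /* 32-bit wraparound add */
      return (uint64_t)(int64_t)(int32_t)sum;      /* copy bit 31 into bits 63..32 */
  }

  /* x86-64-style: writing a 32-bit register forces the upper 32 bits to zero. */
  static uint64_t add32_zero_extend(uint64_t a, uint64_t b) {
      uint32_t sum = (uint32_t)a + (uint32_t)b;    /* 32-bit wraparound add */
      return (uint64_t)sum;                        /* upper half is a known zero */
  }

  int main(void) {
      uint64_t a = 0x7fffffffu, b = 1u;            /* result has bit 31 set */
      printf("sign-extended: 0x%016" PRIx64 "\n", add32_sign_extend(a, b)); /* 0xffffffff80000000 */
      printf("zero-extended: 0x%016" PRIx64 "\n", add32_zero_extend(a, b)); /* 0x0000000080000000 */
      return 0;
  }

The second result shows the point of the zero-extending convention: the upper half of the destination is statically known to be zero at rename time, which is what lets a renamed implementation mark that physical register as zero and hand it straight back to the free pool.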

