Hacker News | bri3d's comments

Claude is doing the decompilation here, right? Has this been compared against using a traditional decompiler with Claude in the loop to improve decompilation and ensure matched results? I would think that Claude’s training data would include a lot more pseudo-C <-> C knowledge than pairs of C and MIPS assembly from GCC 2.7, and even if the traditional decompiler were kind of bad at N64, it would be more efficient to fix bad decompiler C than raw assembly.

They are contractors. The public face of Ghidra works at Praxis, for example.

Yes, it’s from the late 90s/early 00s, but why is it strange to see Java?

Agree. IDA is surely the “primary” tool for anything that runs on an OS on a common arch, but once you get into embedded, Ghidra is heavily used for serious work, and once you get into heavily automation-based scenarios or obscure microarchitectures, it’s the best solution and certainly a “serious” product used by “real” REs.

For UI based manual reversing of things that run on an OS, IDA is quite superior; it has really good pattern matching and is optimized for this use case, so combined with the more ergonomic UI, it’s way way faster than Ghidra and is well worth the money (provided you are making money off of RE). The IDA debugger is also very fast and easy to use compared to Ghidra’s, provided your target works (again, anything that runs on an OS is probably golden here).

For embedded work, IDA is still very ergonomic, but since it isn’t abstract in the way Ghidra is, its decompiler only works on select platforms.

Ghidra’s architecture lends itself to really powerful automation tricks since you can basically step through the program from your plugin without having an actual debug target, no matter the architecture. With the rise of LLMs, this is a big edge for Ghidra as it’s more flexible and easier to hook into to build tools.

The overall Ghidra plugin programming story has been catching up; it’s always been more modular than IDA, but in the past it was too Java-oriented to be fun for most people. The Python bindings are a lot better now. IDA scripting has been quite good for a long time, so there’s a good corpus of plugins out there too.


It’s better in some dimensions and not others, and it’s built on a fundamentally different architecture, so of course they use both.

Ghidra excels because it is extremely abstract, so new processors can be added at will and automatically have a decompiler, control flow tracing, mostly working assembler, and emulation.

IDA excels because it has been developed for a gazillion years against patterns found in common binaries and has an extremely fast, ergonomic UI and an awesome debugger.

For UI driven reversing against anything that runs on an OS I generally prefer IDA, for anything below that I’m 50/50 on Ghidra, and for anything where IDA doesn’t have a decompiler, Ghidra wins by default.

For plugin development or automated reversing (even pre-LLM, stuff like pattern matching scripts or little evaluators) Ghidra offers a ton of power since you can basically execute the underlying program using PCode, but the APIs are clunky and until recently you really needed to be using Java.


When you buy a subscription plan, you’re buying use of the harness, not the underlying compute / tokens. Buying those on their own is way more expensive. This is probably because:

* Subscriptions are oversubscribed. They know how much an “average” Claude Code user actually consumes to perform common tasks and price accordingly. This is how almost all subscription products work.

* There is some speculation that there is cooperative optimization between the harness and backend (cache related etc).

* Subscriptions are subsidized to build market share; to some extent the harnesses are “loss leader” halo products which drive the sales of tokens, which are much more profitable.


They’re players in a newish market segment called “hyperconverged,” basically “you buy a rack and it runs your workload, you don’t worry about individual systems/interconnect/networking etc because we handled it.”

Oxide seems to be the best and most thorough in their space because they have chosen to own the stack from the firmware upwards. For buyers who care about that dimension, they’re already a clear leader on that basis alone; for buyers who don’t, hopefully it also makes the product superior to use.


Microsoft and Nutanix have had a hyperconverged architecture for over a decade. Oxide is mostly an alternative to Nutanix or other soup-to-nuts private clouds.

Oxide is a really nice platform. I keep trying to manipulate things at work to justify the buy-in (I really want to play with their stuff), but they aren't going for it.


AFAIK Nutanix doesn't sell a custom rack, running custom firmware, preloaded with their software, though.

The first attempts at hyperconverged were very hardware focused and kinda meh. Nutanix is the best example - they pioneered hyperconverged hardware but the firmware/software was extremely average. Oxide are the first to say "it should just feel like cloud, except you own it" and to build for that.

Oxide hardware is very well put together

I'm a bit puzzled because this seems backwards from what I thought had been the evolution of things.

Didn't companies historically own their own compute? And then started offloading to so-called cloud providers? I thought this was a cost-cutting measure/entry/temporary solution.

Or is this targeting a scale well beyond the typical HPC cluster (few dozen to few hundred nodes)? I ask because those are found in most engineering companies as far as I know (that do serious numerical work) as well as labs or universities (that can't afford the engineers and technicians companies can).

Also, what is the meaning of calling an on-prem machine "cloud" anymore? I thought the whole point of the cloud was that the hardware had been abstracted (and moved) away and you just got resources on demand over the network. Basically I don't understand what they're selling if it's not what people already call clusters. And then if the machine is designed, set up and maintained by a third party, why even go through the hassle of hosting it physically, and not rent out the compute?


> Didn't companies historically own their own compute?

As ad-hoc racks of individually managed machines, usually, which is a totally different thing. Way "back in the day" you'd have an IT closet with a bunch of individually hand-managed servers running your infrastructure, and then if you were selling really oldschool software, your customers would all have these too, and you'd have some badly made remote access solution but a lot of the time your IT Person would call the customer's IT Person and they'd hash things out.

Way, way, way back in the day you'd have a leased mainframe or minicomputer and any concerns would be handled by the support tech.

> I thought the whole point of the cloud was that the hardware had been abstracted (and moved) away and you just got resources on demand over the network.

This idea does that, but in an appliance box that you own.

> And then if the machine is designed, set up and maintained by a third party, why even go through the hassle of hosting it physically, and not rent out the compute?

The system is designed by a third party to be trivially set up and maintained by the customer, that's where the differentiation lies.

In the moderately oldschool way: pallets of computers arrive, maybe separate pallets of SAN hosts arrive, pallets of switches and routers arrive. You have to unbox, rack, wire, and provision them, configure the switches, integrate everything. If your system gets big enough you have to build an engineering team to deal with all kinds of nasty problems - networking, SAN/storage, and so on.

In the other really oldschool way: An opaque box with a wizard arrives and sometimes you call the wizard.

In this model: you buy a Fancy Box, but there's no wizard. You turn on the Fancy Box and log into the Deploy a Container Portal and deploy containers. Ideally, and supposedly, you never have to worry about anything else unless the Big Status Light turns red and you get a notification saying "please replace Disk 11.2 for me." So it's a totally different model.


> Didn't companies historically own their own compute?

Historically, companies got their compute needs supplied by mainframe vendors like IBM and others. The gear might have sat on premises in a computer room/data center, but they didn't really own it in any real sense.

> Basically I don't understand what they're selling if it's not what people already call clusters.

Is it really a cluster when the whole machine is an integrated rack and workloads are automatically migrated within the rack so that any impending failure doesn't disrupt operation? That's a lot closer to a single node.


So a bit like SeaMicro in the 00's but with more software?

I don’t know who they see as competitors in market positioning (ie, who is selling against them on their target buyer’s calendar). But the space is called hyperconverged computing and there are a few other players like Scale Computing building “racks you buy that run your VMs.”

What would be the point of this change? It erodes security in some moderately meaningful way (even easier to supply chain new computers by swapping the boot disk) to add what amounts to either a nag screen or nothing, in exchange for some ideological purity about Microsoft certificates?

It really doesn't. UEFI setup is still not locked behind a password by default (and can't be fully locked, since you couldn't change UEFI settings at all if it were), so anyone with access to swap a drive can also disable Secure Boot or enroll their own keys if they want to mount an actual supply chain attack.

If your threat model is "has access to the system before first boot" you are fucked on anything that isn't locked down to only the manufacturer.


What if my threat model is "compromised the disk imaging / disk supply chain?" This is a plausible and real threat model, and represents a moderate erosion, like I said.

UEFI Secure Boot is also just not a meaningful countermeasure to anyone with even a moderate paranoia level anyway, so it's all just goofing around at this point from a security standpoint. All of these "add more nag screens for freedom" measures like the grandparent post and yours don't really seem useful to me, though.


> UEFI Secure Boot is also just not a meaningful countermeasure to anyone with even a moderate paranoia level

Baseless FUD. If you have an actual point to make then do so.

> All of these "add more nag screens for freedom"

No one said anything about a nag screen. You literally just made that up.

For the record, Google Pixels work largely this way. Flash image, test boot, re-lock bootloader.


> Baseless FUD.

This is a fascinating thing to post on an article about… bypassing UEFI Secure Boot?

PKFail, BlackLotus/BatonDrop, LogoFail, BootHole, the saga continues. If you’ve ever audited a UEFI firmware and decided it’s going to protect you, I’m not sure what to tell you.

To be clear, it’s extremely useful and everyone should be using it. It’s also a train wreck. Both things can be true at the same time. Using Secure Boot + FDE keys sealed to PCRs keeps any rando from drive-by attacking your machine. It also probably doesn’t stop a dedicated attacker from compromising your machine.

> No one said anything about a nag screen.

The parent post suggested that Secure Boot arrive in Setup Mode. Either the system can automatically enroll the first key it sees from disk (supply chain issue, like I posted), or it can present a nag screen with a key hash / enrollment process. Or do what it does today.

> For the record google pixels work largely this way. Flash image, test boot, re-lock bootloader

So do UEFI systems. Install OS, test boot, enroll PK. What the OP is proposing is basically if your Android phone arrived and said “Hi! Would you like to trust software from Google?!?!” on first boot.


And how many times has Intel's trusted computing platform been breached now? Would you also claim that SGX is not a meaningful security measure? Recall that the alternative to Secure Boot is ... oh that's right, there isn't an equivalent alternative.

People have broken into bank vaults. That doesn't mean that bank vaults don't provide meaningful security.

> So do UEFI systems. Install OS, test boot, enroll PK.

"Enroll PK" is "draw the rest of the fucking owl" territory.

I believe you somewhat misunderstood OP. The description was of the empty hardware. Typical hardware would ship with an OS already installed and marked as trusted. It's the flow for changing the OS that would be different.

> automatically enroll the first key it sees from disk (supply chain issue, like I posted)

I'm unconvinced. You're supposing an attacker that can compromise an OEM's imaging solution but not the (user configurable!) key store? That seems like an overly specific attack vector to me.


The breach in TFA happened because Microsoft actually did something benevolent and it blew up in their face. Now almost all of the hardware that takes security a bit seriously (basically expensive business-class computers) has to upgrade its UEFI firmware (many have already done so via Windows Update).

No single layer will protect you fully. UEFI SB is just one layer. And nothing will ever protect you from a dedicated nation state (except another nation state). Unless you own the entire supply chain, from silicon contractors all the way up to every single software vendor and every single network operator, you cannot fully prove things aren't snitching on you.

