
> I prefer debugging C

I prefer not having to debug... I think most people would agree with that.


I prefer a billion dollars tax free, but here we are :(

In Rust dev, I haven't needed Valgrind or gdb in years, except in some projects integrating C libraries.

Probably kernel dev isn't as easy, but for application development Rust really shifts the majority of problems from debugging to compile time.


My current project is a C++ backend. I do a lot of debugging but all of it concerns business logic, some scientific calculations and the like. In these situations Rust would give me exactly zero benefit. As for "safety": I am a practical man and I pay my own money for development. Using modern C++, I have forgotten when I last had any memory-related issue. My backends run for years serving many customers with no complaints in this department. That doesn't mean, of course, that they're really safe, but I sleep well ;)

If it wasn't clear, I have to debug Rust code waaaay less than C, for two reasons:

1. Memory safety - these can be some of the worst bugs to debug in C because they often break sane invariants that you use for debugging. Often they break the debugger entirely! A classic example is forgetting to return a value from a non-void function. That can trash your stack and end up causing all sorts of impossible behaviours in totally different parts of the code. Not fun to debug!

2. Stronger type system - you get an "if it compiles it works" kind of experience (as in Haskell). Obviously that isn't always the case, but I can sometimes write several hundred lines of Rust and once it's compiling it works first time. I've had to suppress my natural "oh I must have forgotten to save everything or maybe incremental builds are broken or something" instinct when this happens.

Net result is that I spend at least 10x less time in a debugger with Rust than I do with C.
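
To make point 1 concrete, here's a toy sketch (the function and values are mine, purely illustrative): in C, a non-void function that falls off the end without returning a value typically compiles with at most a warning and hands back garbage at runtime; in Rust the same shape simply doesn't compile, so the bug never reaches a debugger.

    // Toy example: every branch of a non-() function must produce a value of
    // the declared return type. Deleting the `else` branch (or its final
    // expression) turns this into a compile error rather than runtime garbage.
    fn abs_diff(a: i32, b: i32) -> i32 {
        if a >= b {
            a - b
        } else {
            b - a
        }
    }

    fn main() {
        assert_eq!(abs_diff(3, 7), 4);
        println!("abs_diff(3, 7) = {}", abs_diff(3, 7));
    }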


Do you want some unicorns as well?



https://blog.cloudflare.com/incident-report-on-memory-leak-c...

Better to crash than leak HTTPS keys to the internet.


Nah I used to read Phoronix and the articles are a bit clickbaity sometimes but mostly it's fine. The real issue is the reader comments. They're absolute trash.

The comments section is the biggest problem, but also, in addition to clickbait, the site has a tendency to amplify and highlight anything that will produce drama, often creating a predictable tempest in a teapot.

In this context it's proofs of properties about the program you're writing. A classic one is that any lossless compression algorithm should satisfy decompress(compress(x)) == x for any x.
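
As a rough sketch of what checking that property can look like (the toy run-length codec below is mine, purely for illustration; in practice you'd have a property-testing library generate the inputs):

    // Toy run-length codec: (count, byte) pairs. Illustrative only.
    fn compress(data: &[u8]) -> Vec<u8> {
        let mut out = Vec::new();
        let mut i = 0;
        while i < data.len() {
            let byte = data[i];
            let mut run: u8 = 1;
            while i + (run as usize) < data.len()
                && data[i + run as usize] == byte
                && run < u8::MAX
            {
                run += 1;
            }
            out.push(run);
            out.push(byte);
            i += run as usize;
        }
        out
    }

    fn decompress(data: &[u8]) -> Vec<u8> {
        data.chunks_exact(2)
            .flat_map(|pair| std::iter::repeat(pair[1]).take(pair[0] as usize))
            .collect()
    }

    #[test]
    fn roundtrip_property() {
        // The property: decompress(compress(x)) == x for any x.
        // Hand-picked inputs here; a property tester would generate thousands.
        for input in [vec![], vec![0u8; 300], b"abracadabra".to_vec()] {
            assert_eq!(decompress(&compress(&input)), input);
        }
    }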

I would think Lean and other formal languages are the real gold standard.

But none of them really have enough training data for LLMs to be any good at them.


> Rust doesn't prevent programs from having logic errors.

Nobody ever claimed that. The claims are:

1. Rust drastically reduces the chance of memory errors. (Or eliminates them if you avoid unsafe code.)

2. Rust reduces the chance of other logic errors.

Rust doesn't have to eliminate logic errors to be a better choice than C or assembly. Significantly reducing their likelihood is enough.


Every language with a GC drastically reduces the chance of memory errors.

Yeah and I would say they're probably a better choice for vibe coding than C!

But most of them don't have a nice strong type system like Rust. I have vibe coded some OCaml and that seems to work pretty well but I wouldn't want to use OCaml for other reasons.


But if you use managed languages you can't feel superior like when coding in Rust.

Can these claims be backed up with a study showing that, over a large population with sufficient variety, sourced from a collection of diverse environments, LLM output across a period of time is more reliably correct and without issue when outputting Rust? Otherwise this is nothing but unempirical conjecture.

Ah the classic "show me ironclad evidence of this impossible-to-prove-but-quite-clear thing or else you must be wrong!"

Although we did recently get pretty good evidence of those claims for humans and it would be very surprising if the situation were completely reversed for LLMs (i.e. humans write Rust more reliably but LLMs write C more reliably).

https://security.googleblog.com/2025/11/rust-in-android-move...

I'm not aware of any studies pointing in the opposite direction.


Actually it's the classic "inductive reasoning has to meet a set of strict criteria to be sound." Criteria which this does not meet. Extrapolation from a sample size of one? In a context without any LLM involvement? That's not sound, the conclusion does not follow. The point being, why bother making a statistical generalization? Rust's safety is formally known, deduction over concrete postulates was appropriate.

> it would be very surprising if the situation were completely reversed for LLMs

Lifetimes must be well-defined in safe Rust, which requires a deep degree of formal reasoning. The kind of complex problem analysis where it is known that LLMs produce worse results than humans. Specifically in the context of security vulnerabilities, LLMs produce marginally fewer but significantly more severe issues in memory safe languages[1]. Still though, we might say LLMs will produce safer code with safe Rust, on the basis that 100,000 vibe coded lines will probably never compile.

[1] - https://arxiv.org/html/2501.16857v1


I never claimed to be doing a formal proof. If someone said "traffic was bad this morning" would you say "have you done a scientific study on the average journey times across the year and for different locations to know that it was actually bad"?

> LLMs produce worse results than humans

We aren't talking about whether LLMs are better than humans.

Also we're obviously talking about Rust code that compiles. Code that doesn't compile is 100% secure!


I didn't claim that you were doing formal proofs. You can still make bad rhetoric, formal or not. You can say "The sky is blue, therefore C is a memory safe language" and that's trivially inferred to be faulty reasoning. For many people bad deduction is easier to pick up on than bad induction, but they're both rhetorically catastrophic. You are making a similarly unsound conclusion to the ridiculous example above; it's not a valid statistical generalization. Formal or not, the rhetoric is faulty.

> would you say "have you done a scientific study on the average journey times across the year and for different locations to know that it was actually bad"?

In response to a similarly suspiciously faulty inductive claim? Yeah, absolutely.

> We aren't talking about whether LLMs are better than humans.

The point I'm making here is specifically in response to the idea that it would "be surprising" if LLMs produced substantially worse code in Rust than they did in C. The paper I posted is merely a touch point to demonstrate substantial deviation in results in an adjacent context. Rust has lower surface area to make certain classes of vulns under certain conditions, but that's not isomorphic with the kind of behavior LLMs exhibit. We don't have:

- Guarantees LLMs will restrict themselves to operating in safe Rust

- Guarantees these specific vulnerabilities are statistically significant in comparative LLM output

- Guarantees that vulnerability severity will be lower in Rust

Where I think you might be misunderstanding me is that this isn't a statement of empirical epistemological negativism. I'm underlining that this context is way too complex to be attempting prediction. I think it should be studied, and I hope that it's the case LLMs can write good, high quality safe Rust reliably. But specifically advocating for it on gut assumptions? No. We are advocating for safety here.

Because of how chaotic this context is, we can't reasonably assume anything here without explicit data to back it up. It's no better than trying to predict the weather based on your gut. Hence why I asked for specific data to back the claim up. Even safe Rust isn't safe from security vulnerabilities stemming from architectural inadequacies and panics. It very well may be the case that in reasonably comparable contexts, LLMs produce security vulnerabilities in real Rust codebases at the same rate they create similar vulnerabilities in C. It might also be the case that they produce low-severity issues in C at a similar statistical rate to high-severity issues in Rust. For instance, buffer overflows manifesting in 30% of sampled C codebases and resulting in unexploitable segfaults, vs architectural deficiencies manifesting in 30% of safe Rust codebases that allow exfiltration of everything in your databases without RCE. Under these conditions, I don't think it's reasonable to say Rust is a better choice.

Again, it's not a critique in some epistemological negativist sense. It's a critique that you are underestimating how chaotic this context actually is, and the knock-on effects of that. Nothing should surprise you.


Yeah but on the other hand there are plenty of human programmers that are bad at understanding complexity, make dumb mistakes, and write terrible code. Is there something fundamentally different about their brains compared to mine? I don't think so. They just aren't as good - not enough experience, or not enough neurons in the right places, or whatever it is that makes some humans better at things than others.

So maybe there isn't any fundamental change needed to LLMs to take it from junior to senior dev.

> They still "fix" things by removing functionality or adding a ts-ignore comment.

I've worked with many many people who "fix" things like that. Hell just this week, one of my colleagues "fixed" a failing test by adding delays.

I still think current AI is pretty crap at programming anything non-trivial, but I don't think it necessarily requires fundamental changes to improve.


this whole analogy is so tired. "LLMs are stupid, but some humans are stupid too, therefore LLMs can be smart as well". let's put aside the obvious bad logic and think for one second about WHY some people are better than others at certain tasks. it is always because they have lots of practice and learned from their experiences. something LLMs categorically cannot do

Wow so much wrong in such a short comment.

> LLMs are stupid, but some humans are stupid too, therefore LLMs can be smart as well

Not what I said. The correct logic is "LLMs are stupid, but that doesn't prove that they MUST ALWAYS be stupid, in the same way that the existence of stupid people doesn't prove that ALL people are stupid".

> let's put aside the obvious bad logic

Please.

> WHY some people are better than others at certain tasks. it is always because they have lots of practice and learned from their experiences.

What? No it isn't. It's partly because they have lots of practice and learned from experience. But it's also partly natural talent.

> something LLMs categorically cannot do

There's literally a step called "training". What do you think that is?

The difference is that LLMs have a distinct off-line training step and can't learn after that. Kind of like the Memento guy. Does that completely rule out smart LLMs? Too early to tell I think.


> There's literally a step called "training". What do you think that is?

oh wow they use the same word so they must mean the same thing! hard to argue with that logic :)


> These kinds of tasks ought to have been automated a long time ago.

People have been trying for literally decades. The problem is that there is just enough uniqueness to every CRUD app that you can't really have "the CRUD app".

I guess it's the sweet spot for AI at the moment because they're 95% all the same but with some fairly simple unique aspects.


I'd be surprised if many normal people pay for this. It's for businesses, who aren't going to pay for sketchy keys. Also businesses generally want the web-based collaboration features. The days of emailing round files are long gone.

Interesting. Businesses I know have banned the use of anything cloud-based, especially if the hosting is owned by a US company.

> However, Groq’s architecture relies on SRAM (Static RAM). Since SRAM is typically built in logic fabs (like TSMC) alongside the processors themselves, it theoretically shouldn't face the same supply chain crunch as HBM.
>
> Looking at all those pieces, I feel Oracle should seriously look into buying Groq.

I don't see why. Graphcore bet on SRAM and that backfired because unless you go for insane wafer scale integration like Cerebras, you don't remotely get enough memory for modern LLMs. Graphcore's chip only got to 900MB (which is both a crazy amount and not remotely enough). They've pivoted to DRAM.
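
(For rough scale, with my own back-of-envelope numbers: a 70B-parameter model at 8 bits per weight is ~70 GB of weights alone, i.e. nearly 80x that 900MB, before you even count the KV cache.)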

You could make an argument for buying Cerebras I guess, but even at 3x the price, DRAM is just so much more cost effective than SRAM I don't see how it can make any sense for LLMs.


Forget about DRAM vs. SRAM or whatever: How does a cheaper source of non-Nvidia GPUs help Oracle? They’re not training models or even directly in the inference business. Their pitch is cloud infra for AI, and today that means CUDA & Nvidia or you’re severely limiting your addressable market.

Yeah, some customer would have to commit to renting the chips before Oracle would buy them.

Literally every schema-based serialisation format does this. ASN.1 is a pretty terrible option.

The best system for this I've ever used was Thrift, which properly abstracts data formats, transports and so on.

https://thrift.apache.org/docs/Languages.html

Unfortunately Thrift is a dead (AKA "Apache") project and it doesn't seem like anyone since has tried to do this. It probably didn't help that there are so many gaps in that support matrix. I think "Google have made a thing! Let's blindly use it!" also contributed to its downfall, despite Thrift being better than Protobuf (it even supports required fields!).

Actually I just took a look at the Thrift repo and there are a surprising number of commits from a couple of people consistently, so maybe it's not quite as dead as I thought. You never hear about people picking it for new projects though.


FB maintains a distinct version of Thrift from the one they gave to Apache. fbthrift is far from dead as it's actively used across FB. However in typical FB fashion it's not supported for external use, making it open source in name (license) only.

As an interesting historical note, Thrift was inspired by Protobuf.


Very true. ASN.1 is mostly not a great fit, yet it has been the choice for everything to do with certificates and telecommunication protocols (even the newer ones like 5G, for things like RRC and NGAP), mostly for its bit-level support and especially its long-term stability. And looking back in time, ASN.1 has definitely proven that long-term stability.

Actually, I'd never heard of Thrift until today, thanks for the insight :)


Honestly, first time I've seen someone praising Thrift in a long time.

Wanted to do unspeakable and evil things to the people responsible for choosing it, as well as its authors, the last time I worked on a project that used Thrift extensively.


How come? I haven't used it for like a decade but I remember it being good.

Lots of network issues coming from the Thrift RPC runtime apparently not handling anything well.

I recall threatening to rewrite everything with ONC-RPC out of pure pettiness and a wish to see the network stack not go crazy.

