Is there more context to this? I'm assuming Ben is experimenting and demonstrating the danger of vibe-designing circuits? Mostly because I know he has a ton of experience and I'd expect him not to make this mistake (it also seems like he told the AI why it was wrong).
I'm not sure, it was posted on HN a couple weeks ago with the same title as the text in his tweet. I'd guess he was experimenting and trying to show the dangers, like you suggested.
People make these mistakes too. Several times in my high school shop class kids shorted out 9V batteries trying to build circuits because they didn't understand how electronics work. At no point did our teacher stop them from doing so - on at least one occasion I unplugged one from a breadboard before it got too toasty to handle (and I was/am an electronics nublet). Similarly there was also a lot of hand-wringing about the Gemini pizza glue in a world where people do wacky stuff like cook fish in a dishwasher or defrost chicken overnight on the counter or put cooked steak on the same plate it was on when raw just a few minutes prior.
LLMs are just surfacing the fact that assessing and managing risk is an acquired, difficult-to-learn skill. Most people don't know what they don't know and fail to think about what might happen if they do something (correctly or otherwise) before they do it, let alone what they'd do if it goes wrong.
> Several times in my high school shop class kids shorted out 9V batteries trying to build circuits because they didn't understand how electronics work. At no point did our teacher stop them from doing so
Yes, and that's okay because the classroom is a learning environment. However, LLMs don't learn; a model that releases the magic smoke in this session will be happy to release it all over again next time.
> LLMs are just surfacing the fact that assessing and managing risk is an acquired, difficult-to-learn skill.
Which makes the problem worse, not better. If risk management is a difficult skill, then that means we can't extrapolate from 'easy' demonstrations of said skill to argue that an LLM is generally safe for more sensitive tasks.
Overall, it seems like LLMs have a long tail of failures. Even while their mean or median performance is good, they seem exponentially more likely than a similarly-competent human to advise something like `rm -rf /`. This is a deeply unintuitive behaviour, precisely because our 'human-like' intuition is engaged with respect to the average/median skill.
Well said, but I'd add that LLMs are also surfacing the fact that there's a swathe of people out there who will treat the machines as more trustworthy than humans by default, and don't believe they need to do any assessment or risk management in the first place.
People are just lazy. It’s got nothing to do with LLMs getting more trust because they’re machines; most people would happily trust their friend over an expert. They’d trust the first blog post they find online over an expert. Most people are just too lazy and not skilled enough to perform independent review.
And to be fair to those people, coming to topics with a research mindset is genuinely hard and time consuming. So I can’t actually blame people for being lazy.
All LLMs do is provide an even easier way to “research”. But it’s not like people were disbelieving random Facebook posts, online scams, and word-of-mouth before LLMs.
As right as this may be, it elides the crucial difference between asking LLMs and all the other methods of asking questions you enumerated. The difference is not between the quality of information you might get from a friend or a blog versus an LLM. The difference is the centralization and feeding of the same poor quality information to massive numbers of people at scale. At least whatever bonkers theory someone "researches" on their own is going to be a heterodox set of ideas, with a limited blast radius. Even a major search engine up-ranking a site devoted to, like, how horse dewormers can cure covid, doesn't present it as if that link is the answer to how to cure covid, right? LLMs have a pernicious combination of sounding authoritative while speaking gibberish. Their real skill is not in surfacing the truth from a mass of data, it's in presenting a set of assertions as truth in a way that might satisfy the maximum number of people with limited curiosity, and in establishing an artificial sense of trust. That's why LLMs are likely the most demonic thing ever made by man. They are machines built to lie, tell half-truths, obfuscate and flatter at the same time. Doesn't that sound enough like every religion's warning about the devil?
But nothing has changed there. People have been posting intelligent-sounding gibberish on social media and blogs for years before LLMs.
The problem with centralisation isn’t that it gobbles up data. It’s that it allows those weights to be dictated by a small few who might choose to skew the model more favourably towards the messaging they want to promote.
And this is a genuine concern. But it’s also not a new problem either. We already have that problem with news broadcasters, newspaper publications, social media ethics teams, and so on and so forth.
The new problem LLMs bring to human interaction isn’t any of the issues described above. It’s with LLMs replacing human contact in situations where you need something with a conscience to step in.
For example, conversations where the AI ends up reinforcing negative thoughts in people with mental health problems because the chat history starts to overwhelm the context window, resulting in the system prompt doing a poorer job of weighting the conversation away from dangerous topics like suicide.
This isn’t to say that the points which you’ve addressed aren’t real problems that exist. They definitely do exist. But they’ve also always existed, even before GPT was invented. We’ve just never properly addressed those problems because:
either there’s no incentive to. If you are powerful enough to control the narrative then why would you use that power to turn the narrative against you?
…or there simply isn’t a good way of solving that problem. eg I might hate stupid conspiracy theories, but censoring research is a much worse alternative. So we just have to allow nutters to share their dumb ideas in the hope that enough legitimate research is published, and enough people are sensible enough to read it, that the nutters don’t have any meaningful impact on society.
The AI is being sold as an expert, not a student. These are categorically different things.
The mistake in the post is one that can be avoided by taking a single class at a community college. No PhD required, not even a B.S., not even an electrician's certificate.
So I don't get your point. You're comparing a person in a learning environment to the equivalent of a person claiming to have a PhD in electrical engineering. A student letting the magic smoke escape from a basic circuit is a learning experience (a memorable one with high impact), especially when done in a learning environment where an expert can ensure more dangerous mistakes are less likely or nonexistent. But the same action from a PhD-educated engineer would make you reasonably question their qualifications. Yes, humans make mistakes, but if you follow the AI's instructions and light things on fire, you get sued. If you follow the engineer's instructions and set things on fire, then that engineer gets fired and likely loses their license.
Lawyers are getting in trouble because they use AI and submit fabricated citations about fabricated cases as precedent. A bunch of charges were recently thrown out in Wisconsin because of this, and it's not the first time such behavior has made the news.
The real analog here would be an electronics teacher leading his students to create a circuit that caught fire. If you’re confidently giving faulty information to people that don’t know any better, you’re not teaching them.
I am sure this is true. On the flip side, as someone who is addicted to learning, I've been finding LLMs to be amazing at feeding my addiction. :)
Some recent examples:
* foreign languages ("explain the difference between these two words that have the same English translation", "here's a photo of a mock German exam paper and here is my written answer - mark it & show how I could have done better")
* domains that I'm familiar with but might not know the exact commands off the top of my head (troubleshooting some ARP weirdness across a bunch of OSX/Linux/Windows boxes on an Omada network)
* learning basic skills in a new domain ("I'm building this thing out of 4mm mild steel - how do I go about choosing the right type of threading tap?", "what's the difference between Type B and Type F RCCB?")
Many of these can be easily answered with a web search, but the ability to ask follow-up questions has been a game changer.
I'd love to hear from other addicts - are there areas where LLMs have really accelerated your learning?
Hah, yesterday I was discussing solar panels and moving shadows. I would have wasted money buying a commercial solar panel if I didn’t have this chat.
Learned a lot about how it works, to the point I’m confident that I can go the DIY route and spend my money on AliExpress buying components instead.
Why not ask a pro solar panel installer instead? I live in an apartment, of course they would say it’s not possible to place a solar panel on my terrace. I don’t believe in things not being possible.
But I had two semesters of electronics/robotics in my CS undergrad, and I know not to trust the LLM blindly and to verify.
I'm of a similar mind but I think you also need to be careful. I find that people are more willing to believe a chatbot than a search result simply due to the way the information is presented. But if you're thinking "but search results can be wrong too!" then that's exactly my point. The problem is quite similar to people "doing their own research". I'm sure conspiracy theorists do a lot of reading, a lot of searching, and all that cargo cult research stuff. But I say cargo cult because it has all the form of research but none of the substance. That doesn't mean using LLMs is exclusively cargo cult learning, but it is easy to fall into that trap, and I'd argue easier with LLMs than with searching, easier with searching than with reading books, and easier with books than with sitting in a university lecture. Doesn't mean the tools are bad, but it's easy to fool ourselves.
Basically, if you can't articulate how your typical conspiracy theorist's "research" differs from real research, then you're at greater risk. It's worth thinking about that question, as they do do a lot of reading, thinking, and looking things up. It's more subtle, right?
FWIW, a thing I find LLMs really useful for is learning the vernacular of fields I'm unfamiliar or less familiar with. It is especially helpful when searches fail due to overloaded words (and, let's be honest, Google's self-elected lobotomy), but it is more a launching point. Though this still has the conspiracy problem, as it is easy to self-reinforce a belief and not consider the alternatives. Follow-up questions are nice and can really help with sifting through large amounts of information, but they certainly tend to narrow the view. I think this makes learning feel faster and more direct, but having also taught (at the university level), I think it is important to learn all the boring stuff too. That stuff may not be important "now", but a well-organized course means it is going to be important "soon", and "now" is the best time to learn it. No different from how musicians need to practice boring scales and patterns, athletes need to do drills and not just learn by competing (or "simulated" competitions), or how children learn to write by boringly writing shapes over and over. I find the LLMs like to avoid the boring parts.
I agree, I always ask to know more if I don’t get it or it’s a new subject. But I think we’re in the minority, it’s easier to just accept the answer and move on, it requires very little effort compared to trying to understand and retain.
Just because a calculator will only ever be used by a subset of the population to type 80085 and giggle, doesn't mean it can't also be used for complex calculations.
AI is a tool that can accelerate learning, or severely inhibit it. I do think the tooling is going to continue to make it easier and easier to get good output without knowing what you're doing, though.
> Just because a calculator will only ever be used by a subset of the population
I'm not sure what your argument is here. I think everyone knows this but also recognizes that the vast majority of people are not using calculators in that way. The vast majority of people are using calculators to replace calculation.
I'll give an example. I tell people I tip by: round off the decimal, divide by 10, multiply by 2. Nearly every time I say that, people tell me it is too difficult. This includes people with PhD STEM educations...
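For anyone who wants it spelled out, here's a minimal sketch of that heuristic (the dollar amount below is just illustrative):

```typescript
// A minimal sketch of the tipping heuristic described above:
// round off the decimal, divide by 10, multiply by 2 (roughly a 20% tip).
function quickTip(bill: number): number {
  const rounded = Math.round(bill); // e.g. 47.30 -> 47
  const tenPercent = rounded / 10;  // 47 -> 4.7
  return tenPercent * 2;            // 4.7 -> 9.4
}

console.log(quickTip(47.30)); // ~9.4, i.e. about a 20% tip on a $47.30 bill
```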
Hearing these stories (and I hear them more than I would like) is mind-boggling to me. Even as someone who's quite bad at math, I find what you describe to be insanely basic stuff; anyone in a developed country with access to school should be able to do that.
It will be hard to convince me those people are using a LLM to learn.
That's a very strong claim. I don't think people expect their circuits to ignite, LLM instruction or not. But I'd expect that to be less likely when learning from a book or dedicated website (even accounting for bad manufacturing).
You're biased because you're not considering that, by definition, the student is inexperienced. Unknown unknowns. Tons of people don't know very basic things (why would they?), like circuits with capacitors being dangerous even when the power is off.
Why are you defending the LLM? Would you be as nice to a person? I'd expect not, because these threads tend to point out a person's idiocy. I'm not sure why we give greater leeway to the machine. I'm not sure why we forgive them as if they were a student learning, when someone posting similar instructions on a blog gets (rightfully) thrashed. That blog writer is almost never claiming PhD expertise.
I agree that LLMs can greatly aid in learning. But I also think they can greatly hinder learning. I'm not sure why anyone thinks it's any different than when people got access to the internet. We gave people access to all the information in the world and people "do their own research" and end up making egregious errors because they don't know how to research (naively think it's "searching for information"), what questions to ask, or how to interrogate data (and much more). Instead we've ended up with lots of conspiratorial thinking. Now a sycophantic search engine is going to fix that? I'm unconvinced. Mostly because we can observe the result.
> We gave people access to all the information in the world and people "do their own research" and end up making egregious errors because they don't know how to research (naively think it's "searching for information"), what questions to ask, or how to interrogate data (and much more).
You pinpointed a major problem with education, indeed. Personally, I think 3 crucial courses should be taught in school to mitigate that: 1) rational thinking, 2) learning how to learn, 3) learning how to do research.
I think so too, but I also think this is part of the failure of math and science education. That is exactly what those topics are. But many courses will focus on the facts and not the substance. Rather than linking Feynman's Cargo Cult Science, which also broaches this, I'll link this one[0], as I think it better illustrates what I'm saying.
In Norway we eat plenty of salmon which is quite rare or even raw (in sushi). It has to be frozen and thawed first, to kill parasites.
A friend who studied fish production did recommend eating trout (ørret in Norwegian) instead of salmon, though. Based on the scientific evidence the difference is pretty small (15% of fish not surviving for salmon vs 12% for trout). But rainbow trout does have more DHA per kg.
The difference is that LLMs pretend to be experts on all things. The high school shop kids aren’t under the impression they can build a smart toaster or whatever.
The people doing these kickstarters are outsourcing the work because they can’t do it themselves. If they use an LLM, they don’t know what to look for or even ask for, which is how they get these problems where the production backend uses shared credentials and has no access control.
The LLM got it to “working” state, but the people operating it didn’t understand what it was doing. They just prompt until it looks like it works and then ship it.
This took me a while (I'm slow), but I think GP is saying: "I've seen enough (expressions of) thinking that the idea is the key when the engineering was; with everyone snorting LLMs, we'll see that replicated in the software world", but more nicely.
THAT makes sense. Engineering was never cheap nor non-differentiating if normalized by man-hours, only when it was normalized by USD. If a large enough number of people were to get the same FALSE impression that software and firmware parts are now basically free and non-differentiating commodities, then there will be tons of spectacular failures in the software world in the coming years. There have already been early previews of those here.
I’m following exactly, but the parent commenter is off on a tangent unrelated to the topic.
We’re not talking about the parent commenter, we’re talking about unskilled Kickstarter operators making decisions. Not a skilled programmer using an LLM.
> they'd rather vibe code themselves than trust an unproven engineering firm
You could cut the statement short here, and it would still be a reasonable position to take these days.
LLMs are still complex, sharp tools - despite their simple appearance and the protestations of their biggest fans and haters alike, the dominating factor in the effectiveness of an LLM tool on a problem is still whether or not you're holding it wrong.
LLMs definitely write more robust code than most. They don't take shortcuts or resort to ugly hacks. They have no problem writing tedious guards against edge cases that humans brush off. They also keep comments up to date and obsess over tests.
> They don't take shortcuts or resort to ugly hacks.
That hasn't, universally, been my experience. Sometimes the code is fine. Sometimes it is functional, but organized poorly, or does things in a very unusual way that is hard to understand. And sometimes it produces code that might work sometimes but misses important edge cases and isn't robust at all, or does things in an incredibly slow way.
> They have no problem writing tedious guards against edge cases that humans brush off.
The flip side of that is that instead of coming up with a good design that doesn't have as many edge cases, it will write verbose code that handles many different cases in similar, but not quite the same ways.
> They also keep comments up to date and obsess over tests.
Sure but they will often make comments or tests that aren't actually useful, or modify tests to succeed instead of fixing the code.
One significant danger of LLMs is that the quality of the output is highly variable and unpredictable.
That's ok, if you have someone knowledgeable reviewing and correcting it. But if you blindly trust it, because it produced decent results a few times, you'll probably be sorry.
> Sure but they will often make comments or tests that aren't actually useful, or modify tests to succeed instead of fixing the code.
I've been deeply concerned that there's been a rise of TDD. I thought we already went through this and saw its failure. But we're back to where people cannot differentiate "tests aren't enough" from "tests are useless". The amount of faith people put into tests is astounding, especially when they aren't spending much time analyzing the tests and understanding their coverage.
I had 5.3-Codex take two tries to satisfy a linter on Typescript type definitions.
It gave up, removed the code it had written directly accessing the correct property, and replaced it with a new function that did a BFS to walk through every single field in the API response object while applying a regex "looksLikeHttpsUrl" and hoping the first valid URL that had https:// would be the correct key to use.
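Roughly the shape of what that fallback looked like, as an illustrative reconstruction rather than the actual generated code (only the looksLikeHttpsUrl name comes from the real session):

```typescript
// Illustrative reconstruction of the kind of fallback described above, not the
// actual generated code. Instead of reading the one known property, it walks the
// entire response object breadth-first and returns the first string that merely
// "looks like" an https URL.
function looksLikeHttpsUrl(value: string): boolean {
  return /^https:\/\/\S+$/.test(value);
}

function findFirstHttpsUrl(response: unknown): string | undefined {
  const queue: unknown[] = [response];
  while (queue.length > 0) {
    const current = queue.shift();
    if (typeof current === "string" && looksLikeHttpsUrl(current)) {
      return current; // hope this happens to be the right field
    }
    if (current !== null && typeof current === "object") {
      queue.push(...Object.values(current as Record<string, unknown>));
    }
  }
  return undefined;
}
```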
On the contrary, the shift from pretraining driving most gains to RL driving most gains is pressuring these models to resort to new hacks and shortcuts that are increasingly novel and disturbing!
> They don't take shortcuts or resort to ugly hacks.
My experience is quite different
> They have no problem writing tedious guards against edge cases that humans brush off.
Ditto.
I have a hard time getting them to write small and flexible functions, even with explicit instructions about how a specific routine should be done. (Really easy to see in bash scripts, as they seem to avoid using functions; so do people, but most people suck at bash.) IME they're fixated on the end goal and do not grasp the larger context, which is often implicit, though I still have difficulty even when I'm highly explicit. At which point it's usually faster to write it myself.
It also makes me question context. Are humans not doing this because they don't think about it or because we've been training people to ignore things? How often do we hear "I just care that it works?" I've only heard that phrase from those that also love to talk about minimum viable products because... frankly, who is not concerned if it works? That's always been a disagreement about what is sufficient. Only very junior people believe in perfection. It's why we have sayings like "there's no solution more permanent than a temporary fix that works". It's the same people who believe tests are proof of correctness rather than a bound on correctness. The same people who read that last sentence and think I'm suggesting to not write tests or believe tests are useless.
I'd be concerned with the LLM operator quite a bit because of this. Subtle things are important when instructing LLMs. Subtle things in the prompts can wildly change the output
The discourse around LLMs has created this notion that humans are not lazy and write perfect code. They get compared to an ideal programmer instead of real devs.
LLMs at best asymptotically approach a human doing the same task. They are trained on the best and the worst. Nothing they output deserves faith other than what can be proven beyond a shadow of a doubt with your own eyes and tooling. I'll say the same thing to anyone vibe coding that I'd say to the programmatically illiterate: trust this only insofar as you can prove it works, and you can stay ahead of the machine. Dabble if you want, but to use something safely enough to rely on, you need to be 10% smarter than it is.
> They don't take shortcuts or resort to ugly hacks.
In my experience that is all they do, and you constantly have to fight them to get the quality up, and then fight again to prevent regressions on every change.
What? Yes they do take shortcuts and hacks. They change the test case to make it pass. As the context gets longer, they're less reliable at following earlier instructions. I literally had Claude hallucinate nonexistent APIs and then admit “You caught me! I didn’t actually know, let me do a web search”, and then after the web search it still mixed deprecated patterns and APIs against instructions.
I’m much more worried about the reliability of software produced by LLMs.
> LLMs definitely write more robust code than most.
I’ve been using Opus 4.6 and GPT-Codex-5.3 daily and I see plenty of hacks and problems all day long.
I think this is missing the point. The code in this product might be robust in the sense that it follows documentation and does things without hacks, but the things it’s doing are a mismatch for what is needed in the situation.
It might be perfectly structured code, but it uses hardcoded shared credentials.
A skilled operator could have directed it to do the right things and implement something secure, but an unskilled operator doesn’t even know how to specify the right requirements.
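A minimal sketch of the kind of mismatch being described, with made-up names and values rather than anything from the actual product: tidy, working code that nonetheless ships the same credential to every customer.

```typescript
// Hypothetical sketch of the anti-pattern described above: the code is clean
// and "works", but every shipped device authenticates with the same hardcoded
// credential, so there is no per-user access control and the secret can be
// extracted from any single unit.
const API_BASE = "https://api.example.com";
const SHARED_API_KEY = "sk_live_one_key_for_every_device"; // baked into the firmware/app

export async function uploadTelemetry(deviceId: string, payload: unknown): Promise<void> {
  await fetch(`${API_BASE}/telemetry/${deviceId}`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${SHARED_API_KEY}`, // same key for every customer
      "Content-Type": "application/json",
    },
    body: JSON.stringify(payload),
  });
}
```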
They've been losing ground to placebo in more recent research.
Plus, most of the more serious side effects take a lot longer to manifest than the typical length of time any given patient remained in the older clinical trials that secured FDA approval and grounded the official manufacturer literature.
I am glad we have these tools, but I suspect they are vastly overused, and patients are not well informed.
On what basis do you call it "slop"? I would agree that it's not particularly relevant to HN, but it seems to report a number that is at odds with most other figures.
The bar for submitting something to HN is quite low. (Just make an account. I think that's it - I don't think you need any particular karma level.) So, yes, you can get "slop" here - off topic, shilling, trolling, and just generally low-quality stuff. And lots of off topic.
And I generally oppose off topic stuff! But this story has kind of died out in the mainstream press, and I think it's a really important story. (But then, I suppose everybody who posts off topic stuff thinks that theirs is a really important story...)
As I get older, I read this entirely differently (as an appeal to empathy) than I did when I was younger (as an appeal to stolidity).
In other words, you should be pained for your neighbor when his slave breaks his cup. Maybe his grandmother left him that cup, and he's developed many fond memories of drinking a soothing beverage from that heirloom. That empathy is how we connect with people, build meaning, and make life richer.
My initial reaction was to disagree, but the man did allegedly take in an abandoned infant. And a woman to care for it[1]. And, our readings[2] of that quote (acceptance vs altruism) aren't in any way incompatible.
[1] You absolutely don't want to be a single woman in 1st century AD.
From just the quote above, I understand it more as something intermediate: don't be pained when your cup is broken, as if it were someone else's cup, but be pained when someone else's wife or child dies, as if they were your own.
There will always be enemies and corrupt people. We need to establish a system of government and culture that doesn't so easily give over the reins of the nation to these bad actors. If we don't actively do this, we will long for the good old days when the corrupt leaders just wanted to steal money for themselves and hurt trans people.
For what is Maduro an enemy of the US? He wasn't willing to sign over the oil reserves to US oil companies. Wanting to keep what is theirs away from rapacious foreign invaders would make most of the planet an enemy of the US.
I doubt Vance is capable of getting the support from Congress and the maga voters that Trump has. Once Trump is gone, the Republican party is going to have a hard time putting itself back together.
Meanwhile, the individual upthread suggesting they’d support a foreign power invading the US and capturing Trump exemplifies the ridiculous, childish, and deeply unserious brand of self-loathing that we are vigorously (and necessarily, if our country is to survive) opposed to.
You, personally, might, but I think it's going to be a clusterfuck. You can't stick a different person in a cult of personality and expect it to act the same.