
padme.jpg

I don't know if my point is valid or not, but...

Stop blaming "AI" - whatever you mean by this. Whether it's an LLM, an LLM-based agent or something else - stop blaming AI and "AI" and LLMs and... you get the point.

It's not the AI that decides to, sorry for being blunt, write worthless code that feels like useless bloated trash. It's not the AI that decides to do something without even understanding the topic - however you define "understanding" in this context. It's not the AI that is responsible for this. Because whatever AI truly is right now - an autocomplete tool, an advanced chatbot or maybe an agent - the decisions are made by humans. AI is not responsible for anything that is happening right now.

Humans and humans only are responsible for what's happening. It's their choice. It's their qualities that are clearly visible now. It's their behaviour.

Stop blaming kitchen knives for murders.


AI has made it exceptionally easy to generate "compiles/runs and looks plausible but is still fundamentally flawed" code at a much greater scale than ever before. Maybe the analogy should be a machine gun rather than a knife.

I was discussing this with a friend just now, and he told me (direct quote):

Well, yeah, stop blaming the knives. Blame the cooks ("vibecoders") who think they can manage a kitchen because the knife cuts everything in half automatically. But also don't forget to blame the knife manufacturer ("AI" companies) who markets automated knives to people who don't know you shouldn't cut toward yourself.

I kind of agree. Some people don't understand how to code because they're lazy or have other issues, while others are trying to make a profit from it. I suppose you can tell who's who. But AI is directed by humans anyway. Instead of copy-pasting, a human could choose to try and write the code themselves, and then ask AI to review it and highlight areas for improvement. A human could choose to ask AI how to do things and then try to do it themselves. But if a human chooses to do things the other way, that's their choice. AI is not to blame here. It's still a human choice, and the person making it is the one who is actually responsible.

Some people smoke. Smoking kills, and not only can smokers die from it, but other people can be harmed by passive smoking as well. It's very easy to start smoking. But blaming cigarettes themselves, as objects/entities/etc., isn't the answer, I guess. It was a certain person's choice to try smoking. It was also another person's choice to advertise smoking in one way or another, however...


Sure, it's not really the AI's fault ultimately. But you can still ask the question of whether a given codebase (or the Python ecosystem, to take the Reddit post example) would be better off if LLMs didn't exist.

This isn't a good analogy though. It's not blaming a kitchen knife, it's blaming a voice-activated auto turret.

Or rather, blaming a car. Yes, a bad driver is way more dangerous than a good driver, but even the best driver can make a mistake. Like cars, it's an inherently flawed piece of technology, and like cars, its benefits are too high for most of us to ignore. Way better analogy than my auto turret one.


> but even the best driver can make a mistake

Well, if you put it this way... even the best programmer in the world, who doesn't use AI at all, can also make a mistake. Of course, their mistakes would probably be less frequent, but I guess they wouldn't blame the IDE for poor syntax highlighting (if it's good enough, of course), or the compiler or interpreter for failing to spot a logical error unrelated to syntax rules. They would say "it was my mistake". The problem with AI-generated code, though, is that those who generate it almost never take responsibility for it. They'll say something like, "AI made a mistake here and there." I have never seen someone who generated flawed code with AI take responsibility for it. And that's the main problem.

It doesn't matter whether you're a bad driver or the best driver. If you cause an accident, you must be held responsible. As simple as that.

> Like cars, it's an inherently flawed piece of technology

Sorry, but what exactly do you mean? I'm just curious what you have in mind when you say that cars are "an inherently flawed piece of technology".


Moving half-ton metal boxes very fast through space shared with ordinary humans will always kill people, and making them not share that space with humans is too impractical and expensive. So cars are a technology that _will_ kill some of us, by design. But their advantage is too great to ignore, so we accept the loss.

And yes, I broadly agree with most of what you said: people show a lack of accountability that also translates into an "I don't need to read the code" attitude. That's why, to me, most people who consistently see better than a 20% increase in their productivity and aren't just writing short scripts are just bad devs who offload the issues in their code either to their seniors or to later.


I may be wrong, but isn't that something that everyone (OpenAI, Google Gemini, Claude, etc.) does nowadays?

Reading on from the same place:

And the Agent saw the response, and it was good. And the Agent separated the helpful from the hallucination.

Well, at least it (whatever it is - I'm not gonna argue about that) recognizes the need to separate the "helpful" information from the "hallucination". Maybe I'm already a bit mad, but this actually looks useful. It reminds me of "Reason", the third story in Isaac Asimov's "I, Robot". I'll just cite the part I remembered looking at this (copy-pasted from the actual book):

He turned to Powell. “What are we going to do now?”

Powell felt tired, but uplifted. “Nothing. He’s just shown he can run the station perfectly. I’ve never seen an electron storm handled so well.”

“But nothing’s solved. You heard what he said of the Master. We can’t—”

“Look, Mike, he follows the instructions of the Master by means of dials, instruments, and graphs. That’s all we ever followed. As a matter of fact, it accounts for his refusal to obey us. Obedience is the Second Law. No harm to humans is the first. How can he keep humans from harm, whether he knows it or not? Why, by keeping the energy beam stable. He knows he can keep it more stable than we can, since he insists he’s the superior being, so he must keep us out of the control room. It’s inevitable if you consider the Laws of Robotics.”

“Sure, but that’s not the point. We can’t let him continue this nitwit stuff about the Master.”

“Why not?”

“Because whoever heard of such a damned thing? How are we going to trust him with the station, if he doesn’t believe in Earth?”

“Can he handle the station?”

“Yes, but—”

“Then what’s the difference what he believes!”


Excellent summary of the implications of LLM agents.

Personally I'd like it if we could all skip to the _end_ of Asimov's universe and bubble along together, but it seems like we're in for the whole ride these days.

> "It's just fancy autocomplete! You just set it up to look like a chat session and it's hallucinating a user to talk to"

> "Can we make the hallucination use excel?"

> "Yes, but --"

> "Then what's the difference between it and any of our other workers?"


const int EIGHT = 8 lol

I really doubt any AI (even some small local models) would actually generate something like this :)


I've run into something akin to `const int EIGHT = 7`.

Courtesy of TCS.


Agreed. This reads more like a very junior dev reading static analysis warnings and extracting the magic number into a constant to satisfy the IDE. An LLM would at least give the constant a slightly abstracted name.
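
For what it's worth, a minimal C sketch of the difference (MAX_LOGIN_ATTEMPTS is a made-up name for illustration, not from the code being discussed): the first constant only restates its value, the second names an intent.

    #include <stdio.h>

    /* The pattern being mocked: the name adds no information, and if the
       value ever changes the name becomes a lie (see EIGHT = 7 above). */
    static const int EIGHT = 8;

    /* What extracting a constant is actually for: the name captures intent,
       so the value can change without the name going stale. */
    static const int MAX_LOGIN_ATTEMPTS = 8;

    int main(void) {
        for (int attempt = 1; attempt <= MAX_LOGIN_ATTEMPTS; attempt++)
            printf("login attempt %d of %d\n", attempt, MAX_LOGIN_ATTEMPTS);
        return 0;
    }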


> const int EIGHT = 8 lol

> “The primary purpose of the DATA statement is to give names to constants; instead of referring to pi as 3.141592653589793 at every appearance, the variable PI can be given that value with a DATA statement and used instead of the longer form of the constant. This also simplifies modifying the program, should the value of pi change.”

> — Early FORTRAN manual for Xerox Computers, attributed to David H. Owens.


Yeah, I don't think this is a case of AI slop. LLMs tend to be verbose with comments but are fine with magic constants, at least in my experience.


I guess (it's just a guess, not an analysis) there's no real chance of the USA winning a war against China. China has a larger population, meaning they will have far greater human resources available to mobilize for war, whether to fight at the front lines or to produce weapons and military equipment.


With the current economic integration and direction of flows, the US can't afford to seriously damage China for the same reason it can't afford to seriously damage the EU.

Even just ceasing trade with the US would be catastrophic (for everyone, but specifically for the US) right now.


Please cut out the spam. Or at the very least, explain the point of these submissions and/or provide a brief description of their content. You could also try choosing better titles so people wouldn't be confused by them.


Oh yeah, land of the free. Tell that to the Roskomnadzor guys, they will laugh in your face :)

EDIT: by the way, what is this doing on Hacker News? What's the point?


Well, maybe this question will sound stupid to someone, but... wait, don't they realize what happens if the USA gets dragged into a "full-on" war - not something like Venezuela, but something full-scale and prolonged (like Vietnam)? Don't they realize what impact it will have on them personally? Why are some humans that blind?

Sometimes I feel like this planet is doomed.


Worse, they fully believe they’ll never be at the receiving end of it.


> don't they realize what happens

They demonstrably do not. The media they consume tells them we will instantly win because we're the best. Their social media tells them what incredible badasses they are. Their friends repeat, over and over, what a bunch of beta-cuck-losers everyone else on the planet is.

These aren’t people who think about anything, let alone realize anything.


So far, the only area where Trump has shown restraint untypical for him is military intervention. I also think that if it comes to a prolonged affair, he would prefer massive casualties on the other side if it reduces the risk to US forces. And it seems that his preferred style is to kill the honor cards of the deck. If he decides to go after Iran, I suspect it will be the oil industry burning and just pummeling the upper echelons of the IRGC.


Your positive mindset impresses me, honestly. In a good way.

