Hacker News | tbrownaw's comments

Sure it uses a few GB just like everything else these days, but some of the comments also mention it being slow?

The GitHub issue is AI generated. In my experience triaging these in other projects, you can’t really trust anything in them without verifying. The users will make claims and then the AI will embellish to make them sound more important and accurate.

> AI will embellish to make them sound more important and accurate.

Did you mean "than accurate" rather than "and accurate"? Having a more accurate issue description only sounds like a good thing to me.


Making them look more accurate is not the same as being more accurate, and LLMs are pretty good at the former.

Imagine a user has a vague idea of something that is broken; the LLM will then interpret their comment as whatever it thinks the most likely underlying problem is, without actually checking anything.


“Seem important and accurate” is correct. It doesn’t imply actual accuracy; the LLM will just use figures that resemble an actual calculation, hiding that they are wild guesses.

I’ve run into this issue trying to use Claude to instrument and analyze some code for performance. It would make claims like “around 500 MB of RAM is being used in this allocation” without evidence.


I read that as "make them sound more important and accurate than they actually are".

To make them sound more accurate.

Hard/intractable is on an axis orthogonal to philosophical stuff like meaning.

> is solid and well-defined, but may feel unintuitive

I'm thinking that the nature of intuition is about training your neurons to approximate stuff without needing to detour through conscious calculation.

And QM is in too high of a complexity class for this to be a thing.


It's not complexity but a lack of training data, right?

> but given the situation, How does that prevent the situation from happening again

You don't. Instead, you make sure your failover or DR setup is regularly tested and works.


> Seems like it should be somewhat easier to nuke 50 datacenters than it would be to hack and disrupt 1000s of different services.

Previous outage news makes it sound like the cloud providers still have quite a few logical single points of failure.


The claim isn't that the LLMs are democratized. The claim is that LLMs are causing software development to be democratized. As in, people who want software are more able to make it themselves rather than having to go ask the elites for some. As in, the elites in IT now have less power to govern what software other people can have.

(Or alternatively, it's getting harder to stamp out "shadow IT" and all the risks and headaches it causes.)


But the LLMs are quite the opposite: people shouldn't bother developing software themselves, but should ask the big LLM providers to do it for them instead.

In every sense of the term, software is getting less democratized. But that is in line with a decades-long trend: computers used to ship with BASIC installed, and now you need a specialized IDE with a learning curve.

It used to be that you could dabble with HTML, but now you need to learn a few JavaScript frameworks just to modify existing code. You used to start a piece of software by running it; modern server software is a fragile jigsaw that is delivered to production in the cloud. The list goes on. The future we are being promised is one where you ask your paid-for development agent to make the necessary changes you require and deliver them to production in the cloud.

Which is fine, in a way, but it shifts power to the professionals. Just as Google, Apple or Microsoft owns your identity and your data, and you pay to use it, they can also decide to deny access for any reason. They are private companies, after all, and it is their data.


If software development were democratized, then decisions that software developers make would be made democratically. On or off the job. On the job, the workplace would be run democratically, instead of as it is now, dictatorially. Or off the job, groups of engineers would be coming together to create governance and make collective decisions about the software they use, like the Debian project or the recent Nix governance. Neither is the case.

Building yourself a table using some new carbon fiber hammer isn't democracy. That's just consumerism.


Hard to claim that LLMs "democratize" software development when LLM companies can ban you from software development for any reason or no reason at all, and without recourse of any kind. The HN frontpage currently showcases an Antigravity ban that applied across Gemini, and there are few companies that provide affordable LLM services.

The actual elites have greatly extended their control over software development; that's the opposite of democracy.


This only remains true so long as open weight models lack significant utility.

Access to compilers was almost as controlled as access to LLMs prior to the GNU toolchain and Linux putting a C compiler and a Unix(ish) machine in the hands of anyone who cared for one.


The problem is compute and memory. I think OpenAI bought up RAM supply mainly to choke off consumer hardware's ability to run open-weight models (which hit the memory bottleneck before other constraints). Now there's a shortage in other components as well. I don't see how local AI can compete in usefulness.

> human-level to run on a single 16GB GPU before the end of the decade.

That's apparently about 6k books' worth of data.


For the weights and temporary state, yes. It doesn't sound like a lot until you remember that your DNA is about 600 books' worth of data by the same metric.

How many humans do you know who can recite 6000 books, word for word, exactly?
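A back-of-envelope sketch of the comparison above; the per-book size and genome encoding are my own assumptions (not figures from the thread), and different choices move the genome figure anywhere from a few hundred to over a thousand books:

```python
# Rough arithmetic only: the average-book size and the 2-bit-per-base
# genome packing are assumptions for illustration, not claims from the thread.

GB = 10**9
MB = 10**6

avg_book_bytes = 2.5 * MB      # assume ~2.5 MB of plain text per book
gpu_bytes = 16 * GB            # the 16 GB GPU from the quoted claim

books_in_16gb = gpu_bytes / avg_book_bytes
print(f"16 GB is roughly {books_in_16gb:,.0f} books")    # about 6,400

# Haploid human genome: ~3.1e9 base pairs, 2 bits per base if packed.
genome_bytes = 3.1e9 * 2 / 8
books_in_genome = genome_bytes / avg_book_bytes
print(f"a genome is roughly {books_in_genome:,.0f} books")  # about 310
```

Under these assumptions the 16 GB figure does land near the comment's "6k books", and the genome lands in the same order of magnitude as "600 books" (a larger per-base or per-book encoding closes the gap).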

Most road damage here appears after events of the form "it rained and then the temperature crossed freezing twice a day for a week".

> committed to never train an AI system unless it could guarantee in advance that the company’s safety measures were adequate

That doesn't even make sense.

What stops one model from spouting wrongthink and suicide HOWTOs might not work for a different model, and fine-tuning things away uses the base model as a starting point.

You don't know the thing's failure modes until you've characterized it, and for LLMs the way you do that is by first training it and then exercising it.


This is something they've been working on "in recent months". The Pentagon thing was today.

This cannot have been caused by that, unless they've also invented time travel.


You heard about the Pentagon thing today. Doesn't mean it wasn't started because of political pressure.

9 days ago: https://www.axios.com/2026/02/15/claude-pentagon-anthropic-c...

And I suspect that was not the first time the topic was discussed.


Definitely not the first time. Wall Street Journal reported it back on Jan 29:

https://www.wsj.com/tech/ai/anthropic-ai-defense-department-...


My theory is that Anthropic has been wanting to make this change, and doing it now, while they’re taking a (leaked to the) public stand in the name of ethics, was a good opportunity.

Honest question: why have an elaborate theory with no evidence when the simple facts support a much simpler conclusion?

Anthropic is free to do what they want. I can’t imagine the board meeting where they planned this triple bank shot of goading the government into threatening the company into doing what they already wanted.


I don't think it's that elaborate. I didn't mean to suggest they intentionally goaded the government into this confrontation. I figure it's a simpler "Oh look, we now have a good opportunity to make that announcement that we were worried about." Considering it's probably the same high-level decision makers on both choices it doesn't need a board meeting. And yes they're absolutely free to do what they want, but they're also not blind to how the public will view their decisions.

> The Pentagon thing was today.

Right, because we are 100% aware of everything the Pentagon does, minute by minute...


It might have been contingency planning: you don't need a weatherman...

The Pentagon issue was reported before today. It only made headlines again because of Hegseth’s comments.
