Hacker News | jodrellblank's comments

You think an artificial intelligence would have less impact on the world than the steam engine?

The AI commentators are not saying that ELIZA will change the world, they’re saying that one of the big companies is moments away from an AGI. Sam Altman called a recent ChatGPT model a “PhD level expert”; wouldn’t infinite PhDs for $20/month or $200/month be transformative?

That is, your objection isn’t the usual “LLMs aren’t going to be AGI”, you’re saying “even if they do, it won’t be a big deal”?


>You think an artificial intelligence would have less impact on the world than the steam engine?

Not op, but yes, 100%. Steam underpins nearly all technological development of the last 150+ years. Where do you think the power comes from to make things? More than half of the world's power *still* runs on steam, as will many of the systems running AI.

If steam power never existed, not only would you not exist but there's a good chance the country you live in wouldn't either. If you don't believe the effect is large, go to the farthest uncontacted place on earth and take out a CO2 meter.


> "If you don't believe the effect is large"

It's not that I "don't believe the effect is large", but the changes from pre-intelligence planet Earth to post-intelligence planet Earth are larger because they include the invention of steam, and literally everything else too: language, writing, irrigation, cities, trade, numbers, currency, mathematics, chemistry, engineering, nations, governments, supply chains, etc.

An AGI that can solve the problems we think are solvable, but we can't solve, would be huge. Any sci-fi idea that isn't ruled out by the laws of physics, but that we haven't got the brains to solve, any breakthrough that we think should be there but we haven't found, any problem that requires too much time to learn, or too many parts to hold in one human mind, any coordination that is too big for one team, any funding problem, any scarcity problem, any disease or illness problem, any long timeframe problem, are all on the table as possibilities.


There's potential there (with the pocket-PhDs), the question is whether it'll actually make a measurable difference in the long run. I mean I'm sure it will make a difference, the question is whether it's what they say it will be, and whether it'll be financially viable. At the current burn rate of the AI companies, it isn't - before long the first ones will have to give up. They won't die, they'll be subsumed into their competitors.

Anyway, the challenge is making a difference. Current-day LLMs can, for example, generate stories and books; one tweet said "this can generate 1000 screenplays a day". Which sounds impressive by the numbers, but books, screenplays, etc were never about volume.

Same with PhDs - is there a shortage of them? Does adding potentially infinite PhDs (whatever they are) to a project make it better, or does it just make... more?

This is the main difference with the industrial revolution - it, for example, introduced machines that turned ten-person jobs into one-person jobs. I don't think LLMs will do something like that, it'll just output 10 people's worth of Stuff that will need some use.

I don't think anyone ever asked for 1000 screenplays a day, or infinite PhDs for $20. But then, nobody asked for a riderless carriage yet here we are.


> "I don't think LLMs will do something like that, it'll just output 10 people's worth of Stuff that will need some use."

This is why I said "isn’t the usual “LLMs aren’t going to be AGI”", but you still went straight for "LLMs aren't AGI", which was not in question.

AGI is what OpenAI says they are going for. That's the goal of all this trillion-dollar investment, not to output 1000 screenplays a day, but to take over the world, basically. What would infinite PhDs discover if they could hold all of arXiv in their 'heads' at once and see patterns in every experiment that's ever been done? What could they engineer and manufacture if they could 'concentrate' on millions of steps of a manufacturing process at once without getting fatigued or bored? What ideas could they test if they could be PhD level in a dozen subjects all at once?


> Same with PhDs - is there a shortage of them? Does adding potentially infinite PhDs (whatever they are) to a project make it better, or does it just make... more?

Yes, there is still a large demand for people with analytical thinking, a deep knowledge base, and good problem-solving skills. This demand shows up broadly across STEM fields, and it's a major reason that these fields pay relatively well.

Even just thinking of R&D, there is an immense amount of work left to be done in basic science. Research is throttled partly by a lack of cheap graduate lab labor. (If that physical + mental labor became much cheaper, the costs of research would shift - what does it take to get reagents? What does it take to build more lab space, and provide water and light? Etc.)

The present issue is that current AI does not really offer the same capabilities as a good grad student or PhD. Not just physically, as in, we don't have good robotics yet, but mentally. LLMs do not exhibit good judgment or problem-solving skills, like a good PhD does. And they don't exhibit continual learning.

No clue on when these will change, but yes, a cheap AI with solid problem-solving skills and good judgment would absolutely upend our economy.


An actual artificial intelligence? Yes, total paradigm shift. Not even a shift, we'd launch the old paradigm into the sun.

LLMs and modern day """AI"""? Don't kid yourself.


When I got to “the initial triage was frustrating; the report was dismissed as "Intended Behavior”” I thought well there’s no need to follow ‘responsible disclosure’ then, eh?

I would have been tempted to blog about it immediately. Companies already get a sweet deal by people finding bugs for free, reporting them for free, and voluntarily keeping quiet about them for free; researchers shouldn’t also have to fight to report problems (for free).


Budget "realities" that somehow don't affect Mexico, only the USA?

You weren't concerned by Tesla recalling 5.1 million cars in 2024, more than Ford, and recalling the Cybertruck 5 times in 18 months[1]?

You weren't concerned by Consumer Reports ranking Tesla in last place out of 26 manufacturers for reliability[2]? Or their quality control so poor that customers are buying their own delivery inspection checklists[2]?

You weren't concerned by Tesla Owners reporting Tesla authorized repair centers keeping their cars for months unable to source parts to repair them, or consumers unable to buy replacement parts[3]?

You weren't concerned by the worker awarded $130M for hostile work environment filled with racial abuse at a Tesla factory, after paying another worker $1M for racist abuse at their factory, and facing a class action about racist discrimination[4][5][6]?

You weren't concerned about Tesla's telemetry tracking all details about every drive and sending it back to HQ to train their FSD?

You weren't concerned by any of Musk's behaviour, such as his misleading statements about FSD delivery dates and abilities for years, about the sportscar that would jump with compressed air, about the Cybertruck and Semi truck abilities, about the Hyperloop, about getting Tesla to buy his cousin's failing SolarCity, about trying to get a trillion dollar paycheck out of Tesla, etc?

[1] https://www.carscoops.com/2025/04/which-tesla-models-have-ha...

[2] https://www.carscoops.com/2025/04/which-tesla-models-have-ha...

[2] https://www.jalopnik.com/teslas-quality-control-is-so-bad-cu...

[3] https://www.teslaownersonline.com/threads/tesla-cant-supply-...

[4] https://www.business-humanrights.org/en/latest-news/usa-form...

[5] https://www.independent.co.uk/news/world/americas/tesla-laws...

[6] https://edition.cnn.com/2025/04/18/business/tesla-black-work...


To be honest I haven't really kept up that closely until recently, when I started looking for a new car, but that is concerning!

China has a crewed space station in orbit right now, a planned human landing on the moon in 2030, and has been deploying lunar-orbit relay satellites and moon rovers and returning moon samples to Earth, ahead of a planned moon base in the 2030s.

That would be compatible with them carrying umbrellas; https://www.etymonline.com/word/umbrella

> "pure HTTPS port 443 -- you literally can't block it without breaking the web."

Sure you can: you do man-in-the-middle certificate inspection and then filter it aggressively as if it were HTTP; that's the product companies like Zscaler offer, and basically any business/enterprise firewall device does the same - internet filtering to protect your company and prevent or detect data exfiltration and malicious activity. Or perhaps you could say that does 'break the web', but companies do it anyway and pay a lot of money to do it. (Zscaler is a $23Bn market cap company.)
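Full MITM inspection needs a corporate CA installed on every client, but a firewall can also block HTTPS by hostname without resigning anything, because the SNI field of the TLS ClientHello is sent in cleartext (absent ECH). A toy sketch of that idea - the hostnames and blocklist are made up, and real ClientHellos carry far more fields than this minimal builder emits:

```python
import struct

def build_client_hello(hostname: str) -> bytes:
    """Build a minimal TLS ClientHello record carrying an SNI extension."""
    name = hostname.encode()
    # server_name list: list_len(2), name_type(1)=0, name_len(2), hostname
    sni = struct.pack("!HBH", len(name) + 3, 0, len(name)) + name
    ext = struct.pack("!HH", 0x0000, len(sni)) + sni  # extension type 0 = SNI
    exts = struct.pack("!H", len(ext)) + ext
    body = (
        b"\x03\x03" + bytes(32)   # client_version + (zeroed) random
        + b"\x00"                 # empty session_id
        + b"\x00\x02\x13\x01"     # one cipher suite
        + b"\x01\x00"             # null compression
        + exts
    )
    hs = b"\x01" + len(body).to_bytes(3, "big") + body   # handshake type 1
    return b"\x16\x03\x01" + struct.pack("!H", len(hs)) + hs  # record type 22

def extract_sni(record: bytes):
    """Return the SNI hostname from a TLS ClientHello, or None if absent."""
    if len(record) < 5 or record[0] != 0x16:
        return None
    hs = record[5:]
    if not hs or hs[0] != 0x01:
        return None
    p = 4 + 2 + 32                              # handshake header, version, random
    p += 1 + hs[p]                              # session_id
    p += 2 + int.from_bytes(hs[p:p+2], "big")   # cipher suites
    p += 1 + hs[p]                              # compression methods
    if p + 2 > len(hs):
        return None
    ext_end = p + 2 + int.from_bytes(hs[p:p+2], "big")
    p += 2
    while p + 4 <= ext_end:
        etype = int.from_bytes(hs[p:p+2], "big")
        elen = int.from_bytes(hs[p+2:p+4], "big")
        if etype == 0:                          # server_name extension
            q = p + 4 + 2 + 1                   # skip list length and name_type
            nlen = int.from_bytes(hs[q:q+2], "big")
            return hs[q+2:q+2+nlen].decode()
        p += 4 + elen
    return None

BLOCKLIST = {"blocked.example.com"}             # hypothetical policy

def allow(record: bytes) -> bool:
    """Firewall decision: pass the flow unless its SNI is blocklisted."""
    return extract_sni(record) not in BLOCKLIST
```

MITM products go further - they inspect the decrypted payload, not just the hostname - but SNI filtering is why "pure 443" traffic is still blockable even on networks that never touch the certificate chain.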


You are setting up to say "I wouldn't tolerate that" for any example given, but if you look at the market and what makes people actually leave, instead of what makes people complain, then basically anything that isn't life-and-death, safety critical, big-money-losing, or data corrupting is tolerable. There's plenty of complaints about Microsoft, Apple, Gmail, Android, and all kinds of 3rd party niche business systems.

[Edit: DanLuu "one week of bugs": https://danluu.com/everything-is-broken/ ]

All the decades people tolerated blue screens on Windows. All the software which regularly segfaulted years ago. The permeation of "have you tried turning it off and on again" into everyday life. The "ship sooner, patch later" culture. The refusal to use garbage-collected or memory-managed languages or formal verification over C/C++/etc. because some bugs are more tolerable than the cost, effort, and performance hit of changing. Display and formatting bugs, e.g. glitches in video games. Error conditions that aren't handled - code that crashes if you enter blank parameters. Bugs in utility code that doesn't run often, like the installer.

One piece of software I installed yesterday told me to disable some Windows services before the install, then the installer tried to start the services at the end of the install and couldn't, so it failed and exited without installing everything. This reminded me that I already knew about it, because that buggy behaviour has been there for years and I've tripped over it before; at least two major versions.

Another one I regularly update tells me to close its running processes before proceeding with the install, but after it's got to that state, it won't let me proceed and it has no way to refresh or rescan to detect that the running process has finished. That's been there for years and several major versions as well.

One more famous example is """I'm not a real programmer. I throw together things until it works then I move on. The real programmers will say "Yeah it works but you're leaking memory everywhere. Perhaps we should fix that." I'll just restart Apache every 10 requests.""" - Rasmus Lerdorf, creator of PHP. I have a feeling something similar was admitted about 37signals and Basecamp, that restarting their Ruby on Rails processes frequently was common, but I can't find a source to back that up.
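Lerdorf's trick survives today as worker recycling; gunicorn, for instance, exposes it as the max_requests setting. A toy sketch of the idea, with a made-up LeakyWorker standing in for a request handler that leaks memory:

```python
class LeakyWorker:
    """Hypothetical worker that 'leaks' a bit of memory on every request."""
    def __init__(self):
        self.leaked_mb = 0

    def handle(self, request):
        self.leaked_mb += 1          # pretend each request leaks 1 MB
        return f"handled {request}"

class RecyclingPool:
    """Replace the worker after max_requests, capping how far the leak grows.

    This is the "restart Apache every 10 requests" move: nobody fixes the
    leak, the supervisor just throws the process away before it matters.
    """
    def __init__(self, max_requests=10):
        self.max_requests = max_requests
        self.worker = LeakyWorker()
        self.served = 0
        self.restarts = 0

    def handle(self, request):
        if self.served >= self.max_requests:
            self.worker = LeakyWorker()   # "restart Apache"
            self.served = 0
            self.restarts += 1
        self.served += 1
        return self.worker.handle(request)

pool = RecyclingPool(max_requests=10)
for i in range(25):
    pool.handle(i)
# after 25 requests the worker has been recycled twice,
# and its leak never exceeds max_requests MB
```

The design trade-off is exactly the one the quote jokes about: recycling costs you warm caches and in-flight state, but it turns an unbounded leak into a bounded, predictable overhead.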


> "Or a crane that will stall and drop its load randomly. It would have been sent to the scrapyard on the first day."

The only reason you have the concept that engines can "stall" is because people have bought engines that can stall by the hundreds of millions, instead of the earliest people refusing to buy them at all and all waiting for the perfect engine.

Container ships can sink with all the containers lost at sea. Still used.

Steam train engines could explode, derailing the train and killing some passengers and employees. Still used.

Buildings can collapse. Still used.

Pneumatic tyres can burst. Still used.

Here[1] is Tom Scott using a recreation walking crane from the 13th century, a technology going back to Roman times, which has no evidence that it ever had brakes on it historically. Look at that and tell me you think the rope never snapped, the wood never broke, the walker never tripped and the thing never unreeled the load back to the ground with the walker severely injured, because if it went wrong builders would refuse to use it? No chance.

Nothing functions like you're claiming; that's where we get the saying "don't let perfect be the enemy of good enough", as soon as stuff is better than not having it, people want to make use of it.

[1] https://www.youtube.com/watch?v=pk9v3m7Slv8


You forgot to address the random aspect of the failure cases.

The real world is chaotic; technology was always first about controlling it, then improving that control. A lot of the risks in the situations you described have been brought down enough that the savings (time, money, …) are orders of magnitude more than the cost of the failures.

I’m not asking for perfection, but something good enough that we can demonstrate the savings outweigh the costs. So far there’s none. In fact, we are increasing it. And fast.


Which part of "lifetime of hard work and dedication" are you misreading as "so easy"?

There is zero, absolutely zero chance of the 50th percentile IQ becoming a world class mathematician. People who say this have no idea exactly how smart these guys are.

It seems like a bit of a pointless and unanswerable argument about semantics; the only irritating bit is the "ohh if it's SOOO EASY" about something that was explicitly framed as not easy.

If your cutoff of "world class mathematician" is a few hundred or thousand people, then no chance. If their cutoff is "earn a comfortable living" and the top 10% of the world is 800,000,000 people most of whom don't study mathematics, then can an average intellect with an obsession for math end up working a job a normal person might call 'mathematician' by working on AutoCAD or 3D rendering game engine or industrial statistics and process control or economics or vehicle aerodynamics and be in the top 10% of the world in mathematical ability? Possibly yes. And you can adjust the numbers and criterion to get a yes or no whichever way you like.


>you can adjust the numbers and criterion to get a yes or no whichever way you like.

Good idea, I'll do that :)

>can an average intellect with an obsession for math end up

>working on AutoCAD or 3D rendering game engine or industrial statistics and process control or economics or vehicle aerodynamics and be in the top 10% of the world in mathematical ability?

I think this does happen quite a bit, and the need for strong math in these difficult areas is so great that there will never be enough people as brilliant as Tao to fill the positions.

That's so far outside the mainstream anyway, most systems are going to screen the rare person like that out without understanding why.

Now what happens when those having top 10% of ability are very excellent themselves, but cases come up that would yield only to a Tao level of "natural-born" problem-solver?

Nobody would ever know :\


A mathematician is someone who creates or advances math. Not someone who uses math. If you don't understand how the word is used, that's your problem, not a problem with the statement.

Let me check that claim using a dictionary:

    mathematician /măth″ə-mə-tĭsh′ən/

        A person skilled or learned in mathematics.
    
        One versed in mathematics.
    
        An expert on mathematics. 
    
    The American Heritage® Dictionary of the English Language, 5th Edition • More at Wordnik
None of the definitions require creating or advancing mathematics.

People in the field would never, ever call someone "versed in mathematics" as being a mathematician.

I don't even buy that regular people call someone who is "versed in mathematics" a "mathematician".

If you met a 30 year old person and asked what they do and they said "I'm an athlete" - do you take the dictionary definition there too? Most people will assume they mean a professional athlete.


> absolutely zero chance of the 50th percentile IQ becoming a world class mathematician.

Good, we don't need billions of them anyway.

I wish modern society would quit focusing on individual intelligence over collective intelligence. We can take something like the microprocessor, for example. The smart group that designed the microprocessor was not the same group that designed the software, nor the group that built the parts, nor the group that assembled the device. However, every group is equally important.


yes, 100%. But nature seems to be wired for competition, so we have the leftover genetic material even if it is to our detriment at this point.
