
Even in a world where an AI exists, why am I supposed to believe it's going to be able to do any of those things? I do believe it's possible to create one with the same attributes as a human person; it's just that anything beyond that is unproven.

Rather, it seems like evidence that singularitarianism is actually a religion (https://en.wikipedia.org/wiki/Millenarianism), which is why its adherents believe things with magic powers will suddenly appear.

In particular, sustained exponential growth doesn't exist in nature; it always turns into an S-curve. Of course, it's still a problem if the curve doesn't level out until it's too late.
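A quick sketch of why that's the scary part (plain Python; the growth rate and ceiling are made-up numbers): a logistic S-curve is essentially indistinguishable from true exponential growth until it's already close to its ceiling.

    # Compare exponential growth with a logistic (S-curve) that has the same
    # early growth rate. r, K and x0 are arbitrary illustrative values.
    import math

    r = 0.5      # growth rate
    K = 1000.0   # carrying capacity (where the S-curve levels out)
    x0 = 1.0     # starting value

    for t in range(0, 21, 2):
        exponential = x0 * math.exp(r * t)
        logistic = K / (1 + ((K - x0) / x0) * math.exp(-r * t))
        print(f"t={t:2d}  exp={exponential:10.1f}  logistic={logistic:8.1f}")

    # The two track each other closely until the logistic nears K, i.e. you
    # only find out where the ceiling is when you're nearly there.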



What is your estimate of the probability that human intelligence is actually anywhere near the upper limit, rather than some point way further down the S-curve where seemingly exponential growth can still go for a long time?

I'd bet a ton that we're nowhere near the top: evolution almost never comes up with the optimal solution to any problem; almost by definition, it stops at "meh, good enough to reproduce". And you don't need a ton of intelligence to reproduce.

Evolution's sub-optimality is actually one of the strongest arguments against intelligent design, so I'm really hesitant to agree that it takes any sort of leap to expect that, with some actual design, blowing way past human intelligence won't be very difficult once we get there.


> What is your estimate of the probability that human intelligence is actually anywhere near the upper limit, rather than some point way further down the S-curve where seemingly exponential growth can still go for a long time?

Well, define "intelligence". People seem to use it in a vague way here - it might be what you'd call a motte and bailey. The motte (the specific, defensible definition) is something like "can do math problems really fast", and the bailey is something like "high executive function, is always right about everything, can predict the future".

For the first one, I don't think humans are near a limit, mostly because the bottleneck of how we get born limits our head sizes. But the brain is pretty good if you consider the costs of being alive - food requirements, heat dissipation, staying bipedal, surviving being hit on the head, the risk of brain cancer, etc. It's done well so far.

Similarly, an AI is going to have maintenance costs - the more RTX 3090s it runs on, the more calculations it might be able to do, but it has to pay for them and their power bill, and they'll eventually fail or give wrong answers. And where is it getting the money anyway?
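To put a rough number on just the power bill (every figure here is an assumption - GPU count, wattage, electricity price):

    # Back-of-envelope electricity cost of running a pile of RTX 3090s 24/7.
    num_gpus = 1000
    watts_per_gpu = 350        # rough full-load draw of one RTX 3090
    price_per_kwh = 0.12       # assumed electricity price in USD

    kw_total = num_gpus * watts_per_gpu / 1000     # 350 kW continuous
    kwh_per_year = kw_total * 24 * 365             # ~3.07 million kWh
    cost_per_year = kwh_per_year * price_per_kwh   # ~$368,000

    print(f"{kw_total:.0f} kW -> ${cost_per_year:,.0f} per year in electricity")

And that's before cooling, replacements, or the capital cost of the cards themselves.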

As for the second kind, I don't think you can be exponentially better at it than a human. Or if you are, it's not through intelligence but through access to more private information, or being rich enough to survive your mistakes. As an example, you can't beat the stock market reliably with smarts, but you can by never being forced to sell.

The real mystery to me is why people say "AI could recursively improve its own hardware and software in short time spans". I mean, that's clearly a made-up concept, since no human, computer, or existing AI does it. The closest thing I can think of is collective intelligence - humans individually haven't improved in the last 10k years, but we got a lot more humans and conquered everyone else that way. But we're also all individuals competing with each other and paying for our own food/maintenance/etc., which makes it different from nodes in an ever-growing AI.


Human intelligence is primarily limited by the 6-10 item limit in short-term memory. If you bumped that up by a factor of 5 we could very easily solve vastly more complex problems, and fully visualize solutions an order of magnitude more subtle and messy than humans can manage today.

That's a relatively easy thing to do architecturally once you have a model that can match human intelligence at all. TBH, if we could rearchitect the brain in code, we could probably figure out how to do it in ourselves within a few years, but our wetware does not support patches or bugfixes.

We can't improve ourselves, but that's only because we're meat, not code. And of course no AI has done it yet, because we haven't actually made intelligent AI yet. The question is what happens when we do, not whether the weak-ass statistical crap that we call AI today is capable of self-improvement. Nuclear reactions under the self-sustaining threshold are not dangerous at all, but that was not a good reason to think that no nuclear reaction could ever go exponential and be devastating.


> We can't improve ourselves, but that's only because we're meat, not code.

Doesn't seem like computers can improve themselves either, mainly because they're made of silicon, not code. "AI can read and write its own code" doesn't exist right now, but even if it did, why would that also imply "AI can read its CPU's Verilog and invent new process nodes at TSMC"?

(Also, humans constantly break things when they change code - the safest way to avoid regressing yourself would be to not try to improve at all.)


Computers aren't as good as humans at coding right now, so it's no surprise that they can't improve code (let alone their own).

If we ever get them there, then it's likely that the usual resourcing considerations will come into play, and refactoring/optimization/redesign will be viable if you throw hours at them. But unlike with human optimization, every hour spent there will increase the effectiveness of future optimizations.
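To make the compounding point concrete, a toy comparison (the 5% per round and the round count are made-up numbers):

    # Optimization effort that feeds back into the optimizer vs effort that doesn't.
    # Assume each round of optimization work yields a 5% improvement.
    rounds = 20
    gain = 0.05

    non_compounding = 1 + rounds * gain   # gains don't build on each other: 2.00x
    compounding = (1 + gain) ** rounds    # each round speeds up the next: ~2.65x

    print(f"non-compounding: {non_compounding:.2f}x  compounding: {compounding:.2f}x")

The gap only widens as the number of rounds grows.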



