
TL;DR:

No amount of fantastical thinking is going to coax AGI out of a box of inanimate binary switches --- aka, a computer as we know it.

Even with billions and billions of microscopic switches operating at extremely high speed consuming an enormous share of the world's energy, a computer will still be nothing more than a binary logic playback device.

Expecting anything more is to defy logic and physics and just assume that "intelligence" is a binary algorithm.





The article doesn't say anything along those lines as far as I can tell - it focuses on scaling laws and diminishing returns ("If you want to get linear improvements, you need exponential resources").

I generally agree with the article's point, though I think "Will Never Happen" is too strong a conclusion. On the other hand, I don't think the idea that simple components ("a box of inanimate binary switches") fundamentally cannot combine to produce complex behaviour is well-founded.
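To make the scaling-law point concrete, here's a toy sketch (my own illustration with made-up numbers, not from the article) assuming benchmark score grows roughly with the log of compute:

    import math

    # Assumed toy scaling law: score = a + b * log10(compute).
    # Under this curve, each fixed gain in score costs ~10x more compute.
    a, b = 20.0, 10.0  # hypothetical fit parameters

    def compute_needed(score):
        """Invert the toy law: compute (arbitrary units) for a target score."""
        return 10 ** ((score - a) / b)

    for score in (50, 60, 70, 80):
        print(f"score {score}: ~{compute_needed(score):.1e} units of compute")
    # Linear improvements (+10 points each) multiply the required compute
    # by 10x every time --- i.e. exponential resources for linear gains.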


> Expecting anything more is to defy logic and physics.

What logic and physics are being defied by the assumption that intelligence doesn't require the specific biological machinery we are accustomed to?

This is a ridiculous comment to make; you do nothing to actually prove the claims you're making, which are even stronger than the claims most people make about the potential of AGI.


> What logic and physics are being defied

The logic and physics that make a computer what it is --- a binary logic playback device.

By design, this is all it is capable of doing.

Assuming a finite, inanimate computer can produce AGI is to assume that "intelligence" is nothing more than a binary logic algorithm. Currently, there is no logical basis for this assumption --- simply because we have yet to produce a logical definition of "intelligence".

Of all people, programmers should understand that you can't program something that is not defined.


> By design, this is all it is capable of doing. Assuming a finite, inanimate computer can produce AGI is [...]

Humans are also made up of a finite number of tiny particles moving around that would, on their own, not be considered living or intelligent.

> [...] we have yet to produce a logical definition of "intelligence". Of all people, programmers should understand that you can't program something that is not defined.

There are multiple definitions of intelligence, some mathematically formalized, usually centered around reasoning and adapting to new challenges.
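For instance (my example, not the parent's), Legg and Hutter's "universal intelligence" measure scores an agent π by its expected performance across all computable environments, weighted toward simpler ones:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where K(μ) is the Kolmogorov complexity of environment μ and V_μ^π is the agent's expected reward in it. Whether that captures everything people mean by "intelligence" is debatable, but it is a precise definition.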

There are also a variety of definitions for what makes an application "accessible", most not super precise, but that doesn't prevent me from improving the application in ways that gradually meet more and more people's definitions of accessible.


Are you a programmer? Are you familiar with Alan Turing [0]?

What do you mean by "finite"? Are you familiar with the halting problem? [1]

What does "inanimate" mean here? Have you seen a robot before?

Imprecise language negates your entire argument. You need to very precisely express your thoughts if you are to make such bold, fundamental claims.

While it's great that you're taking an interest in this subject, you're clearly speaking from a place of great ignorance, and it would serve you better to learn more about the things you're criticizing before making inflammatory, ill-founded claims. Especially when you start trying to tell a field expert that they don't know their own field.

Using handwavy words you don't seem to understand, such as "finite" and "inanimate", while also claiming we don't have a "logical definition" (whatever that means) of intelligence, just results in an incomprehensible argument.

[0] https://en.wikipedia.org/wiki/Turing_machine
[1] https://en.wikipedia.org/wiki/Halting_problem


I'll take the other side of the bet.

Human intelligence seems likely to be a few tricks we just haven't figured out yet. Once we figure it out, we'll probably remark on how simple a model it is.

We don't have the necessary foundation to get there yet. (Background context, software/hardware ecosystem, understanding, clues from other domains, enough people spending time on it, etc.) But one day we will.

At some point people will try to run human-level AGIs on their Raspberry Pi. I'd almost bet that will become a game in the future: running a human-level AGI on as low-spec a machine as possible.

I also wonder whether the AGI / ASI timeline will coincide with our ability to do human brain scans at higher fidelity. If they do line up, we might try replicating actual human thoughts and dreams on our future architectures as we make progress on AGI.

If those timelines have anything to do with one another, then when we crack AGI, we might also be close to "human brain uploads". I wouldn't say it's a necessary precondition, but I'd bet it would help if the timelines aligned.

And I know the limits of detection right now and in the foreseeable future are abysmal. So AGI and even ASI probably come first. But it'd be neat if they were close to parallel.


This is not what the article says at all.

The article is about the constraints of computation, scaling of current inference architecture, and economics.

It is completely unrelated to your claim that cognition is entirely separate from computation.


So is the “binary” nature of today’s switches the core objection? We routinely simulate non-binary, continuous, and probabilistic systems using binary hardware. Neuroscientific models, fluid solvers, analog circuit simulators, etc., all run on the same “binary switches” and produce behavior that cannot meaningfully be described as binary; only the substrate is.
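As a small illustration of that (a sketch of mine with textbook-ish constants, not the commenter's code), here is a continuous, noisy leaky integrate-and-fire neuron simulated on ordinary binary hardware:

    import math, random

    # Euler integration of a leaky integrate-and-fire neuron:
    #   dV/dt = (-(V - V_rest) + R*I) / tau, plus Gaussian noise
    # The dynamics are continuous-valued and stochastic; only the substrate is binary.
    V_rest, R, tau, dt = -65.0, 10.0, 20.0, 0.1   # mV, MOhm, ms, ms
    threshold, I = -50.0, 1.8                      # mV, nA
    V, spikes = V_rest, 0
    for _ in range(10000):
        V += dt * (-(V - V_rest) + R * I) / tau + random.gauss(0.0, 0.5) * math.sqrt(dt)
        if V >= threshold:   # spike and reset
            spikes += 1
            V = V_rest
    print(f"{spikes} spikes in {10000 * dt:.0f} ms of simulated time")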

Binary logic (aka a computer) can be used to model or simulate anything that has a clear mathematical definition.

Currently, "intelligence" is lacking a clear mathematical definition.


> Currently, "intelligence" is lacking a clear mathematical definition.

On the contrary, there are many. You just don't like them. E.g. skill of prediction.
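To sketch what "skill of prediction" means operationally (my illustration, made-up numbers): score a predictor by the average log loss on the probabilities it assigned to what actually happened --- lower is better.

    import math

    def prediction_skill(probs_assigned_to_actual_outcomes):
        """Average log loss: lower means the predictor expected reality more strongly."""
        n = len(probs_assigned_to_actual_outcomes)
        return -sum(math.log(p) for p in probs_assigned_to_actual_outcomes) / n

    good_predictor = [0.9, 0.8, 0.95, 0.7]   # probabilities given to the true outcomes
    poor_predictor = [0.5, 0.4, 0.6, 0.3]
    print(prediction_skill(good_predictor))  # ~0.18
    print(prediction_skill(poor_predictor))  # ~0.83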


Yes, I don't like them --- because they only offer language prediction --- not to be confused with intelligence.

By the sound of that, your criterion is the lack of randomness, i.e. determinism.

What if I had an external source of true randomness? Very easy to add. In fact, current AI algorithms have a temperature parameter that can easily utilise true randomness if you want them to.

Would you suddenly change your mind and say OK, ‘now it can be AGI!’, because I added a nuclear-decay-based random number generator to my AI model?
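For what it's worth, wiring an external entropy source into temperature sampling really is a small change. A rough sketch (illustrative only; the OS entropy pool stands in for a nuclear-decay or thermal-noise RNG):

    import math, secrets

    def sample_with_temperature(logits, temperature=1.0):
        # Softmax over temperature-scaled logits.
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Draw the sampling threshold from a non-deterministic entropy source
        # instead of a pseudo-random generator.
        r = secrets.randbelow(10**9) / 10**9
        cumulative = 0.0
        for i, p in enumerate(probs):
            cumulative += p
            if r < cumulative:
                return i
        return len(probs) - 1

    print(sample_with_temperature([2.0, 1.0, 0.1], temperature=0.8))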


I hope to preprint my paper for your review on arXiv next week, titled:

"A Novel Bridge from Randomness in Stochastic Data to, Like, OMG I'm SO Randomness in Valley Girl Entropy"

We will pay dearly for overloading that word. Good AGI will be capable of saying the most random things! Not, really, no. I mean, they'll still be pronounceable, I'm guessing?


Do you agree with that wild claim?

I tend to wonder whether people who make claims like that are confusing intelligence with consciousness. The claim as stated above could be a summary of a certain aspect of the hard problem of consciousness: that it's not clear how one could "coax consciousness out of a box of inanimate binary switches" - but the connection to "intelligence" is dubious. Unless of course one believes that "true" intelligence requires consciousness.

I believe that "intelligence" requires a clear mathematical definition in order to be simulated using binary logic.

What do you believe?


You're ignoring the point of abstraction.

Intelligence can be expressed in higher-order terms that the binary gates running the underlying software aren't required to account for.

Quarks don't need to account for atomic physics. Atomic physics doesn't need to account for chemistry. Chemistry doesn't need to account for materials science. It goes on and on. It's easy to look at a soup of quarks and go, "there's no way this soup of quarks could support my definition of intelligence!", but you go up the chain of abstraction and suddenly you've got a brain.
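A toy version of that ladder (my own illustration): everything below is defined purely in terms of one binary switch operation, yet two layers of abstraction up it's doing arithmetic, and nothing at the NAND level "accounts for" addition.

    # Layer 0: a single binary switch operation.
    def nand(a, b):
        return 0 if (a and b) else 1

    # Layer 1: other gates, built only from NAND.
    def xor(a, b):  return nand(nand(a, nand(a, b)), nand(b, nand(a, b)))
    def and_(a, b): return nand(nand(a, b), nand(a, b))
    def or_(a, b):  return nand(nand(a, a), nand(b, b))

    # Layer 2: a full adder, built only from layer 1.
    def full_adder(a, b, carry_in):
        s = xor(a, b)
        return xor(s, carry_in), or_(and_(a, b), and_(s, carry_in))

    print(full_adder(1, 1, 1))  # (1, 1): 1 + 1 + 1 = 3, i.e. binary 11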

Scientists don't even understand yet where subjective consciousness comes into the picture. There are so many unanswered questions that it's preposterous to claim you know the answers without proof that extends beyond a handwavy belief.


That's an unreasonably high bar.

We already have irrefutable evidence of what can reasonably be called intelligence, from a functional perspective, from these models. In fact in many, many respects, the models outperform a majority of humans on many kinds of tasks requiring intelligence. Coding-related tasks are an especially good example.

Of course, they're not equivalent to humans in all respects, but there's no reason that should be a requirement for intelligence.

If anything, the onus lies on you to clarify what you think can't be achieved by these models, in principle.



