Hacker News

A very good question, and a very hard question to answer. From the point of view of a horse, a super-horse is just a faster horse. A 2000cc superbike is incomprehensible to a horse, because it exists in a different category of speed.

I think that, similarly, a super AI's intelligence will be a completely different type of intelligence from ours. For lack of a better word, it exists in a different dimension.

For example, things that are black boxes to us will be completely comprehensible to that super AI. Or it could solve problems that we might have considered impossible to solve.



>I think that, similarly, a super AI's intelligence will be a completely different type of intelligence from ours. For lack of a better word, it exists in a different dimension.

This is nonsense, magical thinking. It's possible to model reasoning formally; it's called "logic": a system of deductions built upon axioms. Any logical reasoning, no matter how complex, can be expressed in such a system, and can be understood by anyone else given enough time.
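To make "a system of deductions built upon axioms" concrete, here is a minimal sketch (my own example, not the commenter's) of modus ponens in Lean, with the hypotheses playing the role of axioms:

```lean
-- From the "axioms" p and p → q, derive q (modus ponens).
-- The proof term is just function application: hpq applied to hp.
example (p q : Prop) (hp : p) (hpq : p → q) : q := hpq hp
```

Every step of a proof like this is mechanically checkable, which is the sense in which any such derivation can in principle be followed by anyone.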


Humans emulate (or simulate) logic, but it's not our natural state.

Computers are perfect logicians by default, but AFAIK no logic compact enough to be human-comprehensible has proven sufficient to see, hear, or read. At least, not reliably so.

Logic gates are combined to form binary numbers, which are used to label symbols and approximate the reals, upon which calculus is approximated in toy models of neurons, which are combined and trained until they eventually learn to add numbers, and then put the wrong number of hands onto the third arm of the human they were tasked with drawing.
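The "toy models of neurons ... learn to add numbers" step can be caricatured in a few lines. Below is a minimal sketch (my own example, with arbitrary learning rate and iteration count) of a single linear "neuron" trained by stochastic gradient descent to recover y = a + b:

```python
# A toy linear "neuron": y = w1*a + w2*b + bias, trained against y = a + b.
import random

random.seed(0)
w1, w2, bias = random.random(), random.random(), random.random()
lr = 0.1  # learning rate (arbitrary choice)

for _ in range(5000):
    a, b = random.uniform(-1, 1), random.uniform(-1, 1)
    pred = w1 * a + w2 * b + bias
    err = pred - (a + b)   # gradient of 0.5*err**2 with respect to pred
    w1 -= lr * err * a     # backpropagate the error into each parameter
    w2 -= lr * err * b
    bias -= lr * err

print(w1, w2, bias)  # weights approach 1.0, bias approaches 0.0
```

The target function is exactly representable here, so the error is driven to zero; the failure modes the comment jokes about appear only once the target (vision, drawing) is far outside what any such compact model expresses exactly.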


The phrase 'given enough time' is doing a lot of the heavy lifting in this argument.

Humans approach large, complex proofs with symmetry arguments and case splits.

These techniques are not necessarily universal across all kinds of reasoning.
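For reference, a case split is the kind of move a proof assistant makes explicit. A minimal Lean sketch (my own example): a disjunctive hypothesis is split into branches, each discharged separately.

```lean
-- Case split on the disjunction h: prove r in each branch independently.
example (p q r : Prop) (h : p ∨ q) (hp : p → r) (hq : q → r) : r := by
  cases h with
  | inl x => exact hp x  -- branch where p holds
  | inr x => exact hq x  -- branch where q holds
```

Whether such decompositions scale to every domain of reasoning is exactly the open question the comment raises.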



