
Capable, yes, but human equivalence arrives at different times in different domains, which means AI equivalence to humans in general will be staggered rather than a sudden cliff, as the author claims. In fairness, I don't imagine this is a powerful critique, and I wouldn't be at all shocked if I'm wrong.




The point of the article as I see it is that incremental improvements lead to sudden changes, when equivalence is reached in a specific domain.

It's a bit like rising water in a lake or river. You're fine until you aren't, though the specific "aren't" moment depends on your elevation relative to the body of water, the height of any protections (levees, dams, storm walls), and any mitigating mechanisms (e.g., flood-control pumps).

Up until the point that your defences are overtopped, your feet are dry. Once overtopped, you're wet.

Same with AGI (again, presuming the present LLM/gradient-descent approach continues to provide returns): a gradual increase in capability can subsume more and more human intellectual / cognitive tasks, and for each individual task, the moment of subsumption is fairly likely to be rather sudden.
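A toy sketch of that dynamic (my own illustration, not from the article; the task names and threshold values are made up): capability rises smoothly, but each task flips abruptly the moment its own threshold is overtopped, so the aggregate transition looks staggered rather than cliff-like.

```python
# Hypothetical per-task difficulty thresholds (illustrative values only).
task_difficulty = {"arithmetic": 0.2, "translation": 0.5, "research": 0.9}

def tasks_subsumed(capability):
    """Tasks whose difficulty threshold the current capability has overtopped."""
    return {t for t, d in task_difficulty.items() if capability >= d}

# Capability rises in small, gradual increments, yet each task's
# transition from "not subsumed" to "subsumed" is instantaneous.
for step in range(11):
    capability = step / 10
    print(f"capability={capability:.1f}: {sorted(tasks_subsumed(capability))}")
```

Each print line changes only at the steps where a threshold is crossed: smooth input, staggered sudden outputs.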


Yeah, I think that's fair. Although for more complex tasks done by humans it may be a bit more gradual, in the sense that there are a lot of subtasks, and this is much more probabilistic than "car goes faster than horse for cheaper."


