
Don’t we all know someone who is very widely read, but makes the occasional mistake? Who gets confused about something, or is asked about a discipline they don’t understand or haven’t researched very deeply? Maybe they misremember a date here or there, but are otherwise fairly intelligent? Maybe they work a data entry job making a middle-class salary.

This is basically where ChatGPT is. It’s a very widely read person with an excellent memory and a quick mind. It’s probably smarter than the average human (certainly in breadth, often in depth), to say nothing of how it compares to the average human globally.

ChatGPT is already smarter than the average human. It doesn’t need agency or a soul for that. We already have AGI; we just keep moving the goalposts.



We’ve had AGI since 2017. I find it very hard to take AI predictions seriously unless this fact is acknowledged.


Which AGI are you referring to?


Just a guess as I'm not OP, but 2017 is when the "Attention Is All You Need" paper was published.


Attention-based transformer architectures of the same type as GPT were first published in 2017.
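
For reference, here's a minimal sketch of the scaled dot-product attention at the core of that paper; the shapes and random data are purely illustrative, not from any production model:

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Core operation from "Attention Is All You Need" (2017).
        # Q, K: (seq_len, d_k); V: (seq_len, d_v).
        d_k = Q.shape[-1]
        # Similarity of every query against every key, scaled so the
        # softmax stays well-behaved as d_k grows.
        scores = Q @ K.T / np.sqrt(d_k)
        # Row-wise softmax turns scores into attention weights.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        # Each output position is a weighted average of the values.
        return weights @ V

    # Example: 4 tokens with 8-dimensional embeddings, self-attention.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 8))
    print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)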


The architecture is not AGI. Whether AGI can be achieved within it is perhaps an open question, but it's pretty clear that the architecture itself does not constitute AGI.


Transformer architectures like GPT are:

1. Artificial, AKA man-made;

2. General, meaning they are able to transfer knowledge to solve arbitrary problems outside of their training domain; and

3. Intelligent, in the sense of being able to efficiently find solutions to problems by exploiting the structure of the problem domain.

Artificial General Intelligence: A.G.I.

If you think AGI should mean something else, that's because the goalposts have moved since the term was defined by GOFAI researchers two or three decades ago. Some people in the '90s and '00s thought that merely having an AGI system (like ChatGPT) would result in runaway self-improvement leading to god-like singular powers that take over the world. Now some people have taken "AGI" to mean this fictional (and impossible) thing. That's a confusion on their part.


Well, the people working on the state of the art in the field don't seem to think so.


In general I agree with you, but I think we need to figure out how to do really long context lengths before that's really true. A sci-fi AI with a real identity would be able to relate to everything that happened in its "life", not just the last 4k tokens. (A toy sketch of the limitation follows.)
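
Something like this, where count_tokens and the 4096-token window are illustrative stand-ins for a real tokenizer and a real model's limit; anything that scrolls out of the window is simply gone:

    def count_tokens(message: str) -> int:
        # Crude stand-in for a real tokenizer.
        return len(message.split())

    def build_context(history: list[str], max_tokens: int = 4096) -> list[str]:
        # Keep only the most recent messages that fit in the window.
        context, used = [], 0
        for message in reversed(history):
            used += count_tokens(message)
            if used > max_tokens:
                break  # everything older is forgotten
            context.append(message)
        return list(reversed(context))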


LoRAs can kind of do that. Roughly: instead of retraining the full weight matrices, a LoRA learns a small low-rank correction, so knowledge (or "memories") can be baked into a handful of parameters. A minimal sketch, with illustrative shapes and no training loop:
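
    import numpy as np

    d, r = 512, 8                    # model dim, LoRA rank (r << d)
    rng = np.random.default_rng(0)

    W = rng.standard_normal((d, d))          # frozen pretrained weights
    A = rng.standard_normal((d, r)) * 0.01   # trainable down-projection
    B = np.zeros((r, d))                     # trainable up-projection (zero init)

    def forward(x):
        # Effective weight is W + A @ B; only A and B receive gradient
        # updates, leaving the pretrained W untouched.
        return x @ (W + A @ B)

    x = rng.standard_normal((1, d))
    print(forward(x).shape)  # (1, 512)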



