
And now wait till you realize it's all built on stolen code written by people like you and me.

GOFAI failed because paying intelligent/competent/capable people enough for their time to implement intelligence by writing all the necessary rules and algorithms was uneconomical.

GenAI solved it by repurposing already performed work, deriving the rules ("weights") from it automatically, thus massively increasing the value of that work without giving any extra compensation to the workers. Same with art, translations, and anything else that can be fed into training.



It's not that it was uneconomical, it's that 1) we literally don't know all the rules, a lot of it is learned intuition that humans acquire by doing, and 2) as task complexity rises, the number of rules rises faster, so it doesn't scale. The real advantage that genAI brings to the table is that it "learns" in a way that can replicate this intuition and that it keeps scaling so long as you can shovel more compute and more data at it.
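A rough sketch of point 2, under a worst-case assumption: if a hand-written rule system has to decide every combination of n binary features, exhaustive coverage needs one rule per combination, i.e. 2**n rules, so rule count outruns feature count immediately. (The function name and the stubbed decisions are illustrative, not from any real system.)

```python
from itertools import product

# Worst-case hand-written rule system: one explicit rule per
# combination of n binary features, so 2**n rules in total.
def exhaustive_rule_table(n_features: int) -> dict:
    # Each "rule" maps a feature combination to a decision
    # (stubbed as 0 here; the point is the table size).
    return {combo: 0 for combo in product((0, 1), repeat=n_features)}

for n in (4, 8, 16):
    print(n, "features ->", len(exhaustive_rule_table(n)), "rules")
```

Compare that with a learned model, where adding a feature adds parameters roughly linearly rather than doubling the rule table.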


In a way, yes, you'd be paying the people not just to write down the rules but to discover them first. And there's the accuracy/correctness/interpretability tradeoff.

But also, have there been any attempts on the scale of the Manhattan Project to create a GOFAI?

Because one idea I ran into is that we might be able to use genAI to create a GOFAI soon. And it would be as hard as using genAI for any kind of large project. But I also can't convincingly claim that it's somehow provably impossible.


You can’t “write down the rules” for intelligence. Not for any reasonable definition of “writing”. The medium of writing is not rich enough to express what is needed.

This is why GOFAI failed.


Do you believe intelligence can be achieved using ANNs? If so, ANNs can be serialized, therefore writing is rich enough.

It might not be an easy format to work with, though. If you believe the broad LLM architecture is capable of reaching true intelligence, then writing is still enough, because all an LLM is is the written training data plus the written training algorithm. It's just that it was impossible to pay people to write enough training data, and to provide enough compute to process it, before.
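A minimal sketch of the serialization point: a toy "ANN" (one tanh hidden layer with hardcoded weights, no real library) written down as plain text and restored from it, behaving identically. The network shape and weights here are made up for illustration.

```python
import json
import math

def forward(net, x):
    # net: {"w1": hidden-layer weights, "w2": output weights}.
    # tanh hidden layer, linear output.
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in net["w1"]]
    return [sum(w * hi for w, hi in zip(row, h)) for row in net["w2"]]

net = {"w1": [[0.5, -0.2], [0.1, 0.9]], "w2": [[1.0, -1.0]]}

text = json.dumps(net)        # the whole network, literally written down
restored = json.loads(text)   # read back from text

x = [0.3, 0.7]
assert forward(restored, x) == forward(net, x)  # identical behavior
```

Whether such a dump counts as "writing down the rules" in the GOFAI sense is exactly the disagreement here: it is writing, but not human-interpretable rules.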




