
I am working on creating basic artificial general intelligence. My research is still early, and I don't want to give many details for now. For example, in my opinion it is wrong to feed a giant neural network tons of text and expect it to understand human language. That will never work. Language understanding first requires a grasp of the world the agent exists in: a general understanding of things. On top of that, it can build basic communication, and one can hope that this communication becomes sophisticated enough to reach the level of human language.

Another example: feeding a neural network millions of product reviews and expecting it to be capable of understanding and writing product reviews is hopelessly laughable. Not even with petabytes of data.

I started working on this because I think not enough focus is being put on AGI, or the research is not creative enough. At least not the research that is being published. I am optimistic about my work, and soon I will reveal more. But even if I don't reach my goals, I think it is just a matter of persistence. Sooner or later, someone will solve the problem of AGI.



>I am working on creating basic artificial general intelligence. My research is still early, and I don't want to give many details for now.

Do you have peer-reviewed publications? A GitHub link? References to such? Talking big on the internet is easy.


I agree. I think the next big advancements will come from a different model, not from fine-tuning current ML setups.

I'm more interested in a general AI that can learn any game or environment it encounters to optimize a return. Not quite general AI, but a different path from what is going on right now.
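"Learning any environment to optimize a return" is essentially the reinforcement-learning framing. As a minimal illustration (not anyone's actual research code), here is a sketch of tabular Q-learning on a toy 5-state chain environment invented for this example; the agent sees only states, actions, and rewards, and learns to maximize the discounted return.

```python
import random

N_STATES = 5          # states 0..4; reaching state 4 ends the episode
ACTIONS = [0, 1]      # 0 = move left, 1 = move right

def step(state, action):
    """Return (next_state, reward, done) for the toy chain environment."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    # Q-table: expected discounted return for each (state, action) pair
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: explore occasionally, otherwise act greedily
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r, done = step(s, a)
            # Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
            best_next = max(q[(s2, a2)] for a2 in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

if __name__ == "__main__":
    q = train()
    policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)]
    print(policy)  # learned greedy action per state
```

The point of the sketch: nothing in `train` knows the rules of the chain, so the same loop works for any environment exposing the same `step` interface, which is the sense in which such an agent is "general" within that interface.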



