
I thought adding "logical" constraints to the existing training loop, using KGs and logical validation, would help reduce wrong semantic formation during training itself. But your point stands: what if the whole knowledge graph is hallucinated during training?
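To make the idea concrete, here's a rough sketch of what "logical constraints in the training loop" could look like: a penalty term for predicted triples the KG rejects, added to the task loss. All names and the weighting scheme here are hypothetical illustrations, not any established method.

```python
# Hedged sketch: folding KG-based logical validation into a training
# loss. Everything here is a toy illustration, not a real framework.

def kg_violation_penalty(predicted_triples, kg):
    """Fraction of predicted (head, relation, tail) triples absent from the KG."""
    if not predicted_triples:
        return 0.0
    bad = sum(1 for t in predicted_triples if t not in kg)
    return bad / len(predicted_triples)

def combined_loss(task_loss, predicted_triples, kg, lam=0.1):
    """Task loss plus a weighted penalty for logically invalid triples."""
    return task_loss + lam * kg_violation_penalty(predicted_triples, kg)

# Toy example: one of two predicted triples contradicts the KG.
kg = {("cat", "is_a", "animal"), ("water", "state", "liquid")}
preds = [("cat", "is_a", "animal"), ("cat", "is_a", "liquid")]
print(combined_loss(2.0, preds, kg, lam=0.2))  # 2.0 + 0.2 * 0.5 = 2.1
```

Of course, this just pushes the problem back a level, as you note: the penalty is only as trustworthy as the KG itself.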

I don't have an answer to that. My feeling is that far fewer KG representations would fit a logical world than fit into the vast vector space of a network's weights and biases. But that's just an idea. This whole thing stems from an internal intuition that language is secondary to my thought process: internally, I feel I can just play with concepts without language. What kind of "Large X" models will reach that kind of capability, I don't know!


