Unlike language models, children (eventually) learn from their mistakes. Language models will happily step in the same bucket countless times.


Children are also not frozen in time, kind of a leg up I'd say.


Children prefer warmth and empathy for many reasons. Not always to their advantage. Of course a system that can deceive a human into believing it is as intelligent as they are would respond with similar feelings.


Pretty sure they respond in whatever way they were trained and prompted to, not with any kind of sophisticated intent to deceive.


The Turing Test does not require that a machine show any “sophisticated intent”, only effective deception:

“Both the computer and the human try to convince the judge that they are the human. If the judge cannot consistently tell which is which, then the computer wins the game.”

https://en.m.wikipedia.org/wiki/Computing_Machinery_and_Inte...



