
> A well-trained LLM that lacks any malevolent data, may well be better than a human psychopath who happens to have therapy credentials.

Interesting that in this scenario the LLM is presented in its assumed general-case condition while the human is presented in the pathological one. Furthermore, there already exists an example of an LLM intentionally made (retrained?) to exhibit pathological behavior:

  "Grok praises Hitler, gives credit to Musk for removing 'woke filters'"[0]
> And it may also be better than nothing at all for someone who is unable to reach a human therapist for one reason or another.

Here is a counterargument to "anything is better than nothing" that the article itself posits:

  The New York Times, Futurism, and 404 Media reported cases 
  of users developing delusions after ChatGPT validated 
  conspiracy theories, including one man who was told he 
  should increase his ketamine intake to "escape" a 
  simulation.

> Where for years I heard many people making the same mistake you're making, of saying that silicon could never demonstrate the flair and creativity of human chess players; that turned out to be false.

Chess is a game with specific rules, complex enough to make exhaustive search for an optimal strategy infeasible due to its exponential cost, yet it exists in a provably correct mathematical domain.
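
For scale, here is a rough back-of-the-envelope sketch of the game-tree size, using the commonly cited figures of about 35 legal moves per position and 80 plies per game (both are assumptions in the spirit of Shannon's classic estimate, not exact values):

  # Rough chess game-tree size estimate.
  # Assumed: ~35 legal moves per position, ~80 plies per game.
  branching, plies = 35, 80
  print(f"{branching ** plies:.2e}")  # ~3.35e+123 nodes

Even at a trillion positions per second, enumerating a tree that size would take vastly longer than the age of the universe, which is why engines prune rather than search exhaustively.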

Therapy shares nothing with this other than the time it might take a person to become an expert.

0 - https://arstechnica.com/tech-policy/2025/07/grok-praises-hit...
> Interesting that in this scenario, the LLM is presented in its assumed general case condition and the human is presented in the pathological one.

They were replying to a comment comparing a general-case human and a pathological LLM. So yeah, they flipped it around as part of making their point.
