
>> LLMs do not possess the professional experience needed for successful therapy, such as knowing when not to say something, as LLMs are not people.

> Most people do not either. That an LLM is not a person doesn't seem particularly notable or relevant here.

Of relevance, I think: LLMs by their nature will often keep talking. They are functions that cannot return null. They have a hard time not using up tokens. Humans, however, can sit and listen and partake in reflection without using so many words. To use the words of the parent comment: trained humans have the pronounced ability to _not_ say something.



All it takes is a modulator that controls whether to let the LLM text through the proverbial mouth or not.

(Of course, finding the right time/occasion to modulate it is the real challenge).
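As a rough illustration of the modulator idea, here is a toy sketch in Python. Everything in it is hypothetical: the gate is a trivial heuristic standing in for whatever real policy would decide when silence is the better response, and none of the names refer to any actual API.

```python
# Hypothetical "modulator": a gate between the LLM's draft reply and
# the proverbial mouth. The heuristic here is deliberately simplistic;
# the whole point of the thread is that the real gate is the hard part.

def should_speak(user_message: str, draft_reply: str) -> bool:
    """Toy gate: speak only if the user seems to be asking something,
    and not merely venting."""
    asking = user_message.rstrip().endswith("?")
    venting_markers = ("i just feel", "i needed to say", "no need to reply")
    venting = any(m in user_message.lower() for m in venting_markers)
    return asking and not venting

def respond(user_message: str, draft_reply: str) -> str:
    # Let the draft through only if the gate opens; otherwise "listen"
    # by returning the empty string.
    return draft_reply if should_speak(user_message, draft_reply) else ""

print(respond("What should I do next?", "Maybe try X."))    # gate open
print(respond("I just feel tired today.", "Maybe try X."))  # gate closed
```

The difficulty the parent comment points at lives entirely inside `should_speak`: deciding when not to respond is the part that requires something like professional judgment.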


> All it takes is a modulator that controls whether to let the LLM text through the proverbial mouth or not.

> (Of course, finding the right time/occasion to modulate it is the real challenge).

This is tantamount to saying:

  All you have to do to solve an NP-hard[0] problem is to
  find a polynomial-time solution.

  (Of course, proving P = NP is the real challenge).
0 - https://en.wikipedia.org/wiki/NP-hardness


Your analogy would be better if it were about constructing a heuristic.

GP seems to have a legitimate point, though. The absence of a workable solution at present does not imply that one cannot exist in the not-so-distant future.



