> The frustrating thing about your and several other arguments in this submission is that there is no rationale or data. All you are saying is "LLMs are not/cannot be good at therapy". The only (fake) rationale is "They are not humans." The whole comment comes across as tautological.
My comment to which you replied was a clarification of a specific point I made earlier; it was not intended to detail why LLMs are not a viable substitute for human therapists.
As I briefly enumerated here[0], LLMs do not "understand" in any sense relevant to therapeutic contribution, they do not possess a shared human experience enabling them to relate to a person, and they do not possess acquired professional experience specific to therapy on which to draw. All of these are key to "being good at therapy", along with other attributes as well, I'm sure.

People have the potential to satisfy all of the above. LLM algorithms simply do not.
0 - https://news.ycombinator.com/item?id=44589319