The problem is that the question is circular: it presupposes some definition of "a fancy autocomplete". Just how fancy is fancy?
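
For concreteness, the bare "autocomplete" mechanism being argued over is just a next-token-prediction loop. A minimal sketch of that loop (this assumes the Hugging Face transformers library and GPT-2 as a stand-in model, neither of which anyone in the thread specified):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Illustrative model choice; any causal LM works the same way.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

    for _ in range(10):
        logits = model(ids).logits          # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()    # greedy: pick the single most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))

Greedy decoding is used here only for simplicity; production systems sample from the distribution instead, but the loop structure is the same, and everything layered on top of it is where the "fancy" part (and the definitional argument) lives.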

At the end of the day, an LLM has no semantic world model, by its very design cannot infer causality, and cannot deal well with uncertainty and ambiguity. While the casual reader would be quick to throw humans under the bus and say many stupid people lack these skills too... they would be wrong. Even a dog or a cat is able to do these things routinely.

Casual folks seem convinced LLMs can be improved to handle these issues... but the reality is that these shortcomings are inherent to the very approach that LLMs take.

I think we're finally starting to see that maybe they're not so great for search after all.


