
I am so confused too. I hold these beliefs at the same time, and I don't feel they contradict each other, but apparently for many people some of them do:

- LLMs are a miraculous technology, capable of tasks far beyond what we believed would be achievable with AI/ML in the near future. Playing with them constantly makes me feel like "this is like sci-fi, this shouldn't be possible with 2025's technology".

- LLMs are fairly clueless at many tasks that are easy enough for humans, and they are nowhere near AGI. It's also unclear whether they scale up towards that goal. They are also worse programmers than people make them out to be. (At least I'm not happy with their results.)

- Achieving AGI doesn't seem impossibly unlikely any more, and doing so is likely to be an existentially disastrous event for humanity, and the worst fodder for my nightmares. (Also in the sense of an existential doomsday scenario, but even just the thought of becoming... irrelevant is depressing.)

Having one of these beliefs makes me the "AI hyper" stereotype, another makes me the "AI naysayer" stereotype and yet another makes me the "AI doomer" stereotype. So I guess I'm all of those!



> but even just the thought of becoming... irrelevant is depressing

In my opinion, there can exist no AI, person, tool, ultra-sentient omniscient being, etc. that would ever render you irrelevant. Your existence, experiences, and perception of reality are all literally irreplaceable, and (again, just my opinion) inherently meaningful. I don't think anyone's value comes from their ability to perform any particular feat to any particular degree of skill. I only say this because I had similar feelings of anxiety when considering the idea of becoming "irrelevant", and I've seen many others say similar things, but I think that fear is largely a product of misunderstanding what makes our lives meaningful.


Thank you, you really are a sweetheart. And correct. But it's not easy to combat the anxiety.


I guess Sabine's beef with LLMs is that they are hyped by the business people as a legit "human-level assistant" kind of thing, which they clearly aren't yet. Maybe I've just managed to... manage my expectations?


That's on her then for fully believing what marketing and business execs are 'telling her' about LLMs. Does she get upset when she buys a Coke around Christmas and her life doesn't become all warm and fuzzy with friendliness and cheer all around?

Seems like she's been given a drill with a flathead bit, and just complains for months on end that it often fails (she didn't charge the drill) or gives her useless results (she's driving Phillips-head screws). How about figuring out what works and what doesn't, and adjusting your use of the tool accordingly? If she's a painter, she shouldn't blame the drill for messing up her painting.


I kinda agree. But she seems smart and knowledgeable. It's kinda disappointing, like... She should know better. I guess it's the Gell-Mann amnesia effect once again.



