
>That's why LLMs aren't able to "solve" problems that don't have existing writeups somewhere (beyond certain problems more or less equivalent to combinatorial search).

Neither can humans. We create by analogy; "thinking outside the box" just means making less obvious (but still valid) analogies.

>Or produce text useful beyond the mild amusement level (or for spam and/or research fraud -- at which it of course excels quite brilliantly).

People are using LLMs very effectively for writing news reports, opinion articles, legal summaries, and computer code. It's already gone well beyond "mild amusement." There's a reason many companies have had to adopt policies on LLM use: people are already using them at work.

And sure, LLMs still make lots of mistakes. But they are already better writers than the vast majority of humans, and as I noted, the algorithms being applied are still pretty simple.



>Neither can humans. We create by analogy;

This is, at (very, very) best, unsubstantiated; more likely it's an insufficient explanation of how humans create.

>People are using LLMs very effectively for writing news reports, opinion articles, ...

Well, we certainly differ in our assessments on that. Literally everything I've seen in the first two categories is complete garbage.

In the sense of being either patently unreliable (news), or simply having nothing interesting to say (opinion).


>Literally everything I've seen in the first two categories is complete garbage.

You are assuming that you are aware of the origin of everything you read. I suppose that's possible, but unlikely. At this point, the use of generative AI is widespread enough that you've likely read material you were not aware was AI generated, at least in draft form.


I think it depends a lot on the sources one pulls from, actually.

It would be a major scandal indeed if a stalwart source (like the New Yorker, say) were caught trolling its readers with generative content. It would also be very difficult to keep secret for long (if done at scale), given the way the literary world works. That's assuming it could even happen with the current state of the art, which, judging from the samples I've seen (even those touted as "mind-blowing"), seems highly doubtful.

Meanwhile, if one's daily bread is intrinsically spammy sources like BuzzFeed et al - then I agree, one will scarcely notice the difference.



