IMO current-generation models are capable of creating content significantly better than "slop" quality. You need only look at NotebookLM output. As models continue to improve, this will only get better. Look at the rate of improvement in video generation models over the last 12-24 months. It's obvious to me we're rapidly approaching acceptable, or even excellent, on-demand generated content.
I feel like you're conflating quality with fidelity. Video generation models have better fidelity than they did a year ago, but they are no closer to producing any kind of compelling content without a human directing them, and the latter is what you would actually need to make the "infinite entertainment machine" happen.
The fidelity of a video generation model is comparable to an LLM's ability to nail spelling and grammar - it's a start, but there's more to being an author than that.
I feel like text models are already at sufficiently entertaining and useful quality as you define it. It's definitely possible we never get there for video or 3D modalities, but I think the economic incentives are strong enough that big tech will dump tens of billions of dollars into achieving it.
I don't know why you think that's the case regarding text models. If that were the case, there would be articles on here created entirely by generative AI and nobody would know the difference. It's pretty obvious that's not happening yet, not least because I know what kind of slop state-of-the-art generative models still produce when you give them open-ended prompts.
Ironic how this comment exemplifies the issue - broad claims about "slop" output but no specific examples or engagement with current architectures. Real discussions here usually reference benchmarks or implementation details.
You're sort of ignoring the issue? If the generated content were good and interesting enough on its own, we would already have AI publishing houses pushing out entire trilogies, and each of those would be a top seller.
Generative content right now is OK. OK isn't really the goal, or what anyone wants.
First it was AI articles; raising the bar to entire successful book trilogies seems like a much bigger leap. Even the largest context windows couldn't hold a trilogy directly, and there is far less long-form fiction to train on at that context length than there is for essays and articles.
I don't think it is there yet for articles either.
My point with the Claude-generated comment was that maybe it could get pretty close to something like an HN comment.
I feel like this is missing the point of GenAI. I read fewer books than I did a year ago, primarily because Claude will provide dynamic content that is exactly tailored for me. I don't read many instructional books any more, because I can tell Claude what I already know about a topic and what I'd like to know and it'll create a personalised learning plan. If I don't understand something, it can re-phrase things or find different metaphors until I do. I don't read self-help books written for a general audience, because I can get personalised advice based on my specific circumstances and preferences.
The idea of a "book" is really just an artifact of a particular means of production and distribution. LLM-generated text is a categorically different thing from a book, in the same way as a bardic poem or hypertext.
NotebookLM is still slop. I recommend feeding it your resume and any other online information about you. It's kind of fun to hear the hosts butter you up, but since you know the subject well, you will quickly notice that it is not faithful to the source material. It's just plausibly misleading.