Hacker News | jacob019's comments

"The global North's carbon problem subsidizes the global South's energy access." This is problematic. The subsidized economy will grow inefficiently, the wealth transfer will inevitably result in a corrupt class of bureaucrats who seek to maintain the status quo even when it doesn't make sense. Time will pass and it will get worse until there is political will for change, and that change will result in the suffering of those whom the initial intent was to help.


Is it just me, or have all mainstream news agencies suffered a significant loss in quality in recent years? It all seems lazy and opinionated, more like social media and less like old-school journalism, and less trustworthy... and now they're cutting book reviews?! Maybe I'm just getting older.


Journalism was always bad; it just seemed better in the past because people had less to compare it to, less ability to check things for themselves, and so on. As for "old-school journalism", was that the sort that helped George Bush start the Iraq War? Or the sort that started the Spanish-American War? If there was ever a golden age when journalists spat straight facts without injecting their bias, I genuinely have no idea when it was.

You can find an archive of thousands of PBS News Hour episodes online; I've watched dozens of episodes from the 80s and 90s. The show has a tone and air of respectability: a thoughtful program for highbrow people who like to consider the facts. But that's really just the surface aesthetic. Aside from modern news shows being flagrantly tacky, the meat of what they do is the same: repeat some basic 'facts' about the story, many of which will be proven wrong in later years, then have some people, selected through mysterious processes, come on to talk about how the viewer should feel. In retrospect very little of it was ever accurate, and stories that seemed important then don't look important in hindsight.


Well, not to be too obvious, but people don't pay for news anymore; they expect it to be free on Google or social media. Hence the firing of journalists and the loss of quality everywhere. Less money, less quality.


You're right. It's the collapse of web advertising, I think. News websites can't make money from ads adjacent to articles, so now the article is the ad.


I've been seeing "AI slop" used as an ad hominem. If I'm writing a couple of paragraphs, I'll often run them through a model and ask it to make minimal edits for spelling and grammar. It makes the text more readable and saves me time editing. If someone doesn't like my thoughts and they see an em dash, they can call it AI slop instead of responding, which is really annoying because the model otherwise does a good job of editing. In some cases I've been accused of AI slop for original, unedited content.
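For the curious, the kind of pass I mean is a single call with a strict instruction. Here's a minimal sketch using the OpenAI Python SDK; the model choice and the wording of the system prompt are just my own illustrative choices, not a recommendation:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def copyedit(text: str, model: str = "gpt-4.1") -> str:
        # Ask for minimal spelling/grammar fixes only; no rewriting.
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": "Make minimal edits for spelling and grammar only. "
                            "Do not change tone, wording, or structure. "
                            "Return only the edited text."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content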


Some of us are older—and use the en dash all the time.


I actually associate it with younger writers; perhaps it skipped a generation. I can imagine that someone who grew up in a world where typing an em dash meant looking up an alt code would develop a style that avoids them—but it's only a long press of the hyphen away on the device where I do most of my writing by volume, so of course I'm going to use them.


Back when the English Internet was Latin-1, one would often see expedients like

    this -- or this--or even, most minimally, just this-

which held over well enough that I believe Word by default probably still replaces one of them if you use it.


My iPhone changes -- to —


I’ve stopped using em and en dashes for this specific reason.


To hell with that. It will be a bad day when someone accuses me of using AI to forge original work—a bad day for them, not for me.


I just use commas or semi-colons where I would have previously used em dashes. It's annoying to have to adapt to avoid triggering people's faulty AI slop pattern matching, but the alternatives are perfectly fine.


And some of us are the sort of nerds¹ who use Unicode numeric superscript characters for footnotes.

¹ An unstylish or socially awkward person generally devoted to intellectual, academic, or technical pursuits or interests.


Some of us even know where to find them on the iOS—and the Android—keyboard.


I fell in love with them a long time ago when reading Nietzsche's aphorisms.


"AI slop" is following the same path as "Dunning-Kruger effect," "enshittification," and so many other terms. Someone introduces a term that's useful to describe an actual phenomenon, it rapidly spreads to dominate the discourse because it's topical and punchy, and pretty soon using it is such a strong signal of being one of the "cool people who hates all the correct bad stuff" that people use it to describe stuff they merely don't like or disagree with. Once everyone's using it, it becomes useless for both its original descriptive purpose and as a social signal, so all the trendy discourse addicts move onto the next linguistic innovation and you only see random people on Facebook or Reddit who are behind the times using it, usually inaccurately as they're just following the misuse they learned it from.

It's particularly scary watching "AI slop" follow that path because of the extreme moral polarization around using LLMs or generative art. There are people who will see some casual, unevidenced mention on social media of a game or film or app "using AI" and immediately blast off into a witch hunt to make sure the whole world knows that everyone involved with that thing is a Bad Person who needs to be shunned and punished. It has almost immediately become the go-to way to slam someone online because it carries such strong implications, requires little or no evidence, and is almost impossible to fully refute. I think there's a lot to learn from observing this, and it does not bode well for the next few years of discourse.


Love it! It's going on my toolbar. I face the same problem, constantly trying to hunt down the latest pricing, which changes often. I think it's great that you want to add more models and features, but maybe keep the landing page simple, with a default filter that just shows the current content.


Yeah, I want to keep it really simple. Appreciate it!


Grok, please drive me to synagogue. Doors lock. I'm sorry, Dave, I'm afraid I can't do that.


Destroy all humans.


Do we know the parameter counts? The reasoning models have typically been cheaper per token but use more tokens. The latency is annoying. I'll keep using gpt-4.1 for day-to-day work.


I break out Gemini 2.5 Pro when Claude gets stuck; it's just so slow and verbose. Claude follows instructions better and seems to better understand its role in agentic workflows. Gemini does something different with the context: it has a deeper understanding of the control flow and can uncover edge-case bugs that Claude misses. o3 seems better at high-level thinking and planning, questioning whether the thing should be done at all and whether the challenge actually matches the need. They're kind of like colleagues with unique strengths. o3 does well with a lot of things; I just haven't used it as much because of the cost. Will probably use it more now.


Wild. What is the rationale?


Apparently to "keep the peace" and to "protect the children" but I couldn't find any good source on this.

Intuitively it seems to me this is the most counterproductive law ever as living with this doubt is the best way to destroy a family.


Right, but I think it's mainly about saving taxpayer money on child support by shifting the burden to men.


Protecting the kids, I think: if the dad isn't known, the mother has to pay for the child alone (subsidized by the government). In France, around 3% of kids are raised by dads who don't know they are not the biological father. Personally I think this law is completely unfair, but in practice I think judges will not believe the party opposing the test.


You just can't order a test for someone else (your child) without their consent (so both parents, and a judge because parents don't have absolute rights over their children).

Courts order paternity tests just fine though when there is a reasonable doubt.

The people concerned can always refuse to be tested though.


I had this thought as well and find it a bit surprising. For my own agentic applications, I have found it necessary to carefully curate the context. Instead of including an instruction about files we "may automatically attach", include the instruction only when something actually is attached. Instead of "may or may not be relevant to the coding task, it is up to you to decide", give explicit instructions: how to judge relevance, what to do when the material is relevant, and what to do when it is not. When the context is short it doesn't matter as much, but on a difficult problem with a long context, finely tuned instructions make all the difference. Cursor may be keeping the instructions generic to take advantage of cached-token pricing, but the phrasing does seem rather sloppy. This is all still relatively new; I'm sure both the models and the prompts will see a lot more change before things settle down.
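As a rough illustration of what I mean by curating the context (a hypothetical sketch, not Cursor's actual prompt; the function and field names are mine):

    def build_prompt(task: str, attachments: list[str] | None = None) -> str:
        # Hypothetical helper: conditional context assembly.
        # The attachment instruction exists only when files are actually
        # attached, so the model never sees "may or may not be relevant".
        parts = [f"Task:\n{task}"]
        if attachments:
            parts.append(
                "The following files are attached because they are relevant "
                "to this task. Consult them when making changes:\n"
                + "\n---\n".join(attachments)
            )
        return "\n\n".join(parts)

When nothing is attached, no attachment instruction appears at all, so the model never has to decide whether an empty section matters.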

