
I hate to be the one to say this, but this article reads as though it was written by an LLM. The shallowness is one reason. Another is the lack of any individual voice that would suggest a human author.

And there are the unsupported citations and references:

The sentence “The World Economic Forum’s 2023 Future of Jobs report estimates 83 million jobs may be displaced globally, disproportionately affecting low- and mid-skill workers” is followed by a citation to a book published in 1989.

Footnote 7 follows a paragraph about Nietzsche’s philosophy. That footnote leads to a 2016 paper titled “The ethics of algorithms: Mapping the debate” [1], which makes no reference to Nietzsche, nihilism, or the will to power.

Footnote 2 follows the sentence "Ironically, as people grow more reliant on AI-driven systems in everyday life, many report heightened feelings of loneliness, alienation, and disconnection." It links to the WEF's "Future of Jobs Report 2023" [2]. While I haven't read the full report, the words "loneliness," "alienation," and "disconnection" yield no hits in a search of the report PDF.

[1] https://journals.sagepub.com/doi/10.1177/2053951716679679

[2] https://www.weforum.org/publications/the-future-of-jobs-repo...
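
For anyone who wants to reproduce that "no hits" check, here's a rough sketch in Python (assuming pypdf is installed; the filename below is just a placeholder for wherever you saved the report):

    # Search a locally saved copy of the report PDF for the three terms.
    # "future_of_jobs_2023.pdf" is a placeholder path, not the real filename.
    from pypdf import PdfReader

    reader = PdfReader("future_of_jobs_2023.pdf")
    text = " ".join(page.extract_text() or "" for page in reader.pages).lower()

    for term in ("loneliness", "alienation", "disconnection"):
        print(term, "->", "found" if term in text else "no hits")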



A positive outcome of LLMs: regardless of whether this specific article is AI-generated or not, we are becoming increasingly intolerant of shallowness. Where in the past we would engage with a source's token effort, we now draw our conclusions and skip the engagement much faster. I expect the quality of real articles to improve to get past these more sensitive reader filters.


I now notice myself cringe internally whenever I say anything that has become a ChatGPT-ism, even if it's something I always used to say.


I used to write very formally and neutrally, and now I don't, because it comes across as LLM-ish. My sentences used to lack "humanity", so to speak. :(


I'm a member of the ACM, so I would report this article.

However, I think the author may just have made some mistakes and shifted their references off by one, since the 2023 report is actually #2:

2. Di Battista, A., Grayling, S., Hasselaar, E., Leopold, T., Li, R., Rayner, M. and Zahidi, S., 2023, November. Future of jobs report 2023. In World Economic Forum (pp. 978-2).

Similarly, Footnote 7 should probably point to #8:

8. Nietzsche, F. and Hollingdale, R.J., 2020. Thus spoke zarathustra. In The Routledge Circus Studies Reader (pp. 461-466). Routledge.


The Communications of the ACM no longer has an editor?


Suppose you've managed to get a job as an editor at Communications of the ACM. As "Editor, Communications of the ACM" what do you think your job is?


Possibly displaced by an LLM.


Nope, I'm still the Editor-in-Chief, and the last time I checked, I'm not an LLM. Nor are the other 100+ associate editors of the magazine.

I want to point out that this is a blog post appearing on the CACM website. It was not reviewed or edited by CACM, beyond a few cursory checks.


Now that makes more sense.

I guess it doesn't help that the post is formatted as a typical article, complete with the bio blurb. It would be worth distinguishing blog entries more clearly, and perhaps posting a disclaimer. After all, when people think of CACM, they don't generally have blogs in mind.


In addition to this, another telltale sign of LLM authorship is the repeated forced attempts to draw connections or parallels where they're nonsensical, trying to fulfill an essay prompt that doesn't, in those instances, have much meat to it.

    > As AI systems increasingly mediate decisions [...], decisions once 
    > grounded in social norms and public deliberation now unfold within 
    > technical infrastructure, beyond the reach of democratic oversight.

    > This condition parallels the cultural dislocation Nietzsche observed in 
    > modern Europe, where the decline of metaphysical and religious authorities 
    > undermined society’s ability to sustain shared ethical meaning. In both 
    > cases, individuals are left navigating fragmented norms without clear 
    > foundations or frameworks for trust and responsibility. Algorithmic 
    > systems now make value-laden choices, about risk, fairness, and worth, 
    > without mechanisms for public deliberation, reinforcing privatized, 
    > reactive ethics.
Note how "algorithmic systems" making "value-laden choices" that reinforce "privatized, reactive ethics" has absolutely nothing to do with the spiritual collapse of values that Nietzsche saw in 19th-century Europe; Nietzsche was uninterested in, or even opposed to, critiques of power structures, and wasn't much impressed by the whole idea of democracy. While the criticism that AI systems are beyond the reach of democratic oversight is a common and perfectly valid one, it simply doesn't touch on Nietzsche's philosophy; yet the LLM piece uses language and turns of phrase ("parallels", "in both cases") to make it sound as if there were a connection.

If I am strongly opposed to anti-democratic, opaque AI surveillance machines, then I am not an individual "left navigating fragmented norms without clear foundations"; on the contrary, my foundations are quite clear indeed. And on the other hand, increased automation eroding "frameworks for trust and responsibility" seems more likely to have been welcomed than opposed by Nietzsche, who had little patience for moral affectations like responsibility.


At this point I regularly see front-page HN articles that are LLM-written (amusingly, sometimes accompanied by comments praising how much of a breath of fresh air the article is compared to the usual "LLM slop").

I worry about when I no longer see such articles (as that means I can no longer detect them), which likely will be soon enough.


Love the optimism couched as pessimism. LLM training data doesn't do that.


Beyond the cringe of posting AI slop that 'argues' about eroding social norms and declining trust due to AI, there's also this:

"The prestige and unmatched reputation of Communications of the ACM is built upon a 60-year commitment to high quality editorial content"

Hmmm. Ok whatever you say folks


It was written by an LLM because it’s another hype piece for AI.


Thanks for pointing this out. The concepts in the article are important to me, but yeah, that's weird.



