Very eloquently put :) I agree with your proposition that bad-faith actors often mask their true intentions behind polite or sophisticated formatting.
However, I think a tool like this could still have huge potential, but less for tone and more for structure.
E.g.:
- Atomicity: Ensuring a comment presents a clear, self-contained core argument that can be debated in sub-comments, rather than a tautology or an accumulation of loosely connected arguments.
- Logical consistency: (Though whether an LLM can reliably parse logic is another question entirely!)
- Citations: Checking if the commenter provided credible sources for their claims.
- Civility of discussion: keeping threads from devolving into yet another mud fight.
- Misinformation: Flagging known, debunked conspiracy theories. Instead of modifying the original comment, the tool could simply append a contextual banner at the top with a Snopes link when a known false claim is made.
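The "contextual banner" idea in the last bullet could be sketched roughly like this. Everything here is illustrative: the claim database, the placeholder URL, and the naive substring matching are stand-ins for what would have to be a far more robust claim-detection system.

```python
# Hypothetical sketch: prepend a fact-check banner to a comment that matches
# a known debunked claim, without ever editing the comment text itself.

# Placeholder database; a real tool would use a proper fact-check source.
KNOWN_FALSE_CLAIMS = {
    "the moon landing was faked": "https://example.com/fact-check/moon-landing",
}

def annotate(comment: str) -> str:
    """Return the comment unchanged, or with a context banner prepended."""
    lowered = comment.lower()
    for claim, link in KNOWN_FALSE_CLAIMS.items():
        if claim in lowered:  # naive matching, for illustration only
            banner = f"[Context: this comment repeats a debunked claim. See {link}]"
            return banner + "\n\n" + comment
    return comment  # original text is never modified, only wrapped
```

The key design point is the last line: the moderation layer annotates rather than rewrites, so the author's words are preserved verbatim.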
Indeed, "interoperability" is what would hurt the social media giants the most. Cory Doctorow recently gave an excellent talk in which he noted that back in the early 00s, Facebook (and others) used interoperability to offer services that let users interact with, push to, and pull from MySpace (the big dog back then) in order to siphon off its users and content. But once Facebook became the dominant player, it moved to make the exact tactics it had used (interoperability and automation) illegal. Talk about regulatory capture ...
Sometimes it feels like what we are seeing is code becoming just like any other "asset" in the globalised economy: cheap, but not quality. Just like what happened earlier with clothing (disintegrating after a few washes), consumer electronics (cheap materials), and furniture (Instagram-able but utterly impractical): all made for quick turnover to rake in more profit and generate more waste, and none of it made to last.
Spot on. And beyond the 'double-dipping' business model of "academic publishers" like Elsevier and Springer, there's a massive systemic issue: taxpayers fund >90% of foundational research, only for private pharma/bio/tech firms to add a thin layer of additional research (or design) on top and then lock it behind patents for decades. Another example of private interests offloading the risks and costs to taxpayers while privatizing all the rewards.
"only for private pharma/bio/tech firms to add a thin layer of additional research (or design) on top"
Citation needed.
Going to market costs billions and takes a decade. That doesn't sound like a thin layer. I'm not disputing that fundamental research in academia is essential fuel to keep the innovation engines running, but the contributions of biotech are not "thin".
It can be. See GLP-1. Yes, whoever first came up with that approach is brilliant. But then the lemmings followed, and now a half dozen or so companies are peddling more or less the same product. And it comes at the cost of what isn't getting investment at scale instead.
Pharma research means billions of dollars, years of effort, a high chance of failure, and very specific domain knowledge of the market. If it were so easy to get money this way, more people would try.
Any taxpayer-subsidized industry or subject is a massive magnet for this sort of thing: a "complex business" that you can't dumb down or ELI5 without making it look like a racket, because it fundamentally is a racket, with responsibility diffused to obfuscate it. Taxpayer money comes with the most distant principal-agent problem of all, and the government optimizes for "cog in the machine with blinders" employees and siloed organizations that only care about covering their own asses. So nobody ever takes a step back and says "hey, the taxpayer is getting ripped off" until the ripoff is so obvious that taxpayers lean on the politicians.