Go look at the graphs again. The “split by age” graph shows an increase in diagnosis of ~60%, but an increase in mortality of only ~10%. That’s not a small difference, and we aren’t that good at curing colon cancer.
GP’s hypothesis is one of the leading explanations for this trend, but of course gets rejected by advocates for colonoscopy. Taking into account error bars on these numbers (which the author doesn’t show, because they are inconvenient to the argument being made), it seems at least somewhat likely that the rise in younger cases is due to increased screening, with the “increased” mortality being either statistical noise or misattribution of deaths that would also have occurred in earlier periods.
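To make the error-bar point concrete, here's a rough sketch (with entirely hypothetical counts -- the real figures aren't in the article) of a 95% confidence interval for a mortality rate ratio, using the standard normal approximation on the log scale:

```python
import math

def rate_ratio_ci(d1, py1, d2, py2, z=1.96):
    """95% CI for the ratio of two event rates (deaths d over person-years py),
    using the normal approximation on the log scale: SE = sqrt(1/d1 + 1/d2)."""
    rr = (d2 / py2) / (d1 / py1)
    se = math.sqrt(1 / d1 + 1 / d2)
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

# Hypothetical: 400 deaths in the earlier period, 440 in the later one,
# over equal person-years -- a nominal "10% increase" in mortality.
rr, lo, hi = rate_ratio_ci(400, 1e6, 440, 1e6)
print(f"rate ratio {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

With counts in that ballpark, the interval straddles 1.0, i.e. a "10% increase" of this size is indistinguishable from no change -- which is exactly why omitting the error bars matters.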
Another, related issue is that the takedown mechanism becomes a de facto censorship mechanism, as anyone who has dealt with DMCA takedowns and automated detectors can tell you.
Someone reports something for Special Pleading X, and you (the operator) have to ~instantly take down the thing, by law. There is never an equally efficient mechanism to push back against abuses -- there can't be, because it exposes the operator to legal risk in doing so. So you effectively have a one-sided mechanism for removal of unwanted content.
Maybe this is fine for "revenge porn", but even ignoring the slippery slope argument (which is real -- we already have these kinds of rules for copyrighted content!) it's not so easy to cleanly define "revenge porn".
DMCA itself isn't that bad. DMCA is under penalty of perjury, so false takedowns are rare.
The problem is that most takedowns are not actually DMCA; they go through some other, non-legal process that isn't under any legal penalty. If it ever happens to you, I suspect you have a good case against whoever did it -- but the lawyer costs will far exceed your total gain (as in: spend $30 million or more to collect $100). Either we need enough people affected by a false non-DMCA takedown that a class action can work (you get $0.50, but at least they pay something), or we need legal reform so that all takedowns against a third party are ???
> DMCA is under penalty of perjury, so false takedowns are rare.
Maybe true with the platonic ideal "DMCA takedown letter" (though these are rarely litigated, so who really knows), but as you note, they're incredibly common with things like the automated systems that scan for music in videos (and which actually are related to DMCA takedowns), "bad words" and the like.
> The problem is that most takedowns are not actually DMCA; they go through some other, non-legal process that isn't under any legal penalty.
It's true that most takedowns in the US aren't under DMCA, but even that once-limited process has metastasized into large, fully automated content scanning systems that proactively take down huge amounts of content without much recourse. Companies do this to avoid liability as part of safe harbor laws, or just to curry favor with powerful interests.
We're talking about US laws here, but in general, these kinds of instant-takedown laws become huge loopholes in whatever free speech provisions a country might have. The asymmetric exercise of rights essentially guarantees abuse.
I believe Google issues legitimate DMCA takedowns for copyright strikes even when there is no infringement. They put the work of defending against the strike on the accused party, often with little to no detail.
While genuinely false takedowns may be rare, using DMCA as a mechanism to inflict pain where no copyright infringement has taken place is common enough that it happens to small-time YouTubers like myself and others I have talked to.
> Sometimes I feel like this is taken to the extreme for non-scientists that say that lack of evidence is in itself evidence.
But of course, the lack of evidence is itself evidence, if you have a sufficiently large data sample and haven't seen the thing you're looking for. Keep pursuing the increasingly unlikely outcome, and you're just engaged in science-flavored religious catechism.
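To see why a large enough null sample really is evidence, here's a toy Bayesian sketch (all numbers made up, and assuming for simplicity that studies never false-positive): if a real effect would be detected with probability p_detect in each independent study, repeated null results steadily shrink the posterior probability of the effect.

```python
def posterior_effect(prior, p_detect, n_null):
    """Posterior P(effect) after n_null independent null results.
    P(all null | effect) = (1 - p_detect)^n_null; P(all null | no effect) = 1."""
    like_effect = (1 - p_detect) ** n_null
    num = prior * like_effect
    return num / (num + (1 - prior) * 1.0)

# Starting from a 50/50 prior, with a 50% chance each study would find
# a real effect, the posterior drops fast as null results accumulate.
for n in (0, 1, 5, 10):
    print(n, round(posterior_effect(0.5, 0.5, n), 4))
```

After ten null studies under these (made-up) assumptions, the posterior is under 0.1%: absence of evidence has become strong evidence of absence.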
I see the fallacy routinely misapplied by all sides of most hot-button science-meets-politics issues. A great many scientists will regularly substitute their own pet theories for conclusions, and strenuously ignore the lack of supporting evidence, citing the old "absence of evidence is not evidence of absence" saw. Then they turn around and mock "non-scientists" for doing the same thing. Neither side is right, of course, but dressing in a lab coat doesn't make it better.
Just to circle it back to the topic of science education, I'd love to see a science curriculum at the middle- and high-school level that equipped people to reason through this kind of thing by focusing on tearing apart pop research. A "science fair" is actually hard to do well just because most science fails, but a "scientific bullshit fair" would have almost infinite fertile ground from nutrition studies alone.
It's also really damned hard to come up with an interesting, novel question, that is testable, with resources available to the average school child, in a reasonable amount of time.
Allowing engineering opens up the workable space by quite a bit.
My brother and I, for example, did an experiment where we tested the pH of various water bodies around us. The hypothesis was based on local drainage patterns.
LLMs are the eternal September for software, in that the sort of people who couldn’t make it through a bootcamp can now be “programming thought leaders”. There’s no longer a reliable way to filter signal from noise.
Those 3000 early adopters who are bookmarking a trivial markdown file largely overlap with the sort of people who breathlessly announce that “the last six months of model development have changed everything!”, while simultaneously exhibiting little understanding of what has actually changed.
There’s utility in these tools, but 99% of the content creators in AI are one intellectual step above banging rocks together, and their judgement of progress is not to be trusted.
> Sometimes I just bookmark things because I think to myself “Maybe I’ll try this out, when I have time” which then likely never happens.
For me that’s 100% of the time. I only bookmark or star things I don’t use (but could be interesting). The things I do use, I just remember. If they used to be a bookmark or star, I remove it at that point.
I'm sure I'll piss off a lot of people with this one, but I don't care any more. I'm calling it what it is.
LLMs empower those without the domain knowledge or experience to identify whether the output actually solves the problem. I have seen multiple colleagues deliver a lot of stuff that looks fancy but doesn't actually solve the prescribed problem at all. It's mostly just furniture around the problem. And the retort when I have to evaluate what they have done is "but it's so powerful". I stopped listening. It's a pure faith argument without any critical reasoning. It's the new "but it's got electrolytes!".
The second major problem is that it corrupts reasoning outright. I see people approach LLMs as an exploratory process and let the LLM guide the reasoning. That doesn't really work. Even if you have a defined problem, it is very difficult to keep an LLM inside the rails. I believe that a lot of "success" with LLMs comes from users who have little interest in rigor or in the problem they are supposed to be solving, and who are quite happy to deliver anything as long as it is demonstrable to someone else. That would suggest they are doing it to be conspicuous.
So we have a unique combination of self-imposed intellectual dishonesty, mixed with irrational faith which is ultimately self-aggrandizing. Just what society needs in difficult times: more of that! :(
> LLMs are the eternal September for software, in that the sort of people who couldn’t make it through a bootcamp can now be “programming thought leaders”
Do you ever filter signal from noise by the quality of the code? The code written by the Google founders was eventually rewritten by others, and it was likely worse than what a fresh grad produces today. Still, that initial search engine is the most influential thing they ever built, and it's something the modern Bay Area will probably never create again.
The Markdown file looks like it’s written for people who either haven’t discovered Plan mode, or who can’t be bothered to read a generated plan before running with it.
Too early to tell, so let's wait and see before we brush that off.
>> in that the sort of people who couldn’t make it through a bootcamp can now be “programming thought leaders”
>Snobbery.
Reality, and actually a selling point of AI tools. I often see ads for making apps without any knowledge of programming.
>> the sort of people who breathlessly announce
> Snobbery / Cliche.
Reality
>> There’s no longer a reliable way to filter signal from noise.
> Cliche.
Reality. Or can you distinguish a well-programmed app from unaudited BS?
>> There’s utility in these tools, but 99% of the content creators in AI are one intellectual step above banging rocks together
>Cliche / Snobbery.
99% is too high; maybe 50%.
>> their judgement of progress is not to be trusted
> Tell me, timr, how much judgement is there in snotty gatekeeping and strings of cliches?
We have many security issues in software written by people who have experience coding. How much do you trust software ordered by people who can't judge whether the program they get is secure or full of security flaws?
Don't forget these LLMs are trained on pre-existing faulty code.
> It's hard to communicate the difference the last 6 months has seen.
No, it isn't. The hypebeast discovered Claude code, but hasn't yet realized that the "let the model burn tokens with access to a shell" part is the key innovation, not the model itself.
I can (and do) use GH Copilot's "agent" mode with older generation models, and it's fine. There's no step function of improvement from one model to another, though there are always specific situations where one outperforms. My current go-to model for "sit and spin" mode is actually Grok, and I will splurge for tokens when that doesn't work. Tools and skills and blahblahblah are nice to have (and in fact, part of GH Copilot now), but not at all core to the process.
> No, there is Github Copilot, the AI agent tool that also has autocomplete, and a chat UI.
When it came out, Github Copilot was an autocomplete tool. That's it. That may be what the OP was originally using; it's what I used... 2 years ago. That they change the capabilities but don't change the name, yet change names on services whose capabilities don't change, further illustrates the OP's point, I would say.
To be fair, Github Copilot (itself a horrible name) has followed the same arc as Cursor, from AI-enhanced editor with smart autocomplete, to more of an IDE that now supports agentic "vibe coding" and "vibe editing" as well.
I do agree that conceptually there is a big difference between an editor, even with smart autocomplete, and an agentic coding tool, as typified by Claude Code and other CLI tools, where there is not necessarily any editor involved at all.
Thanks... 2 years felt a bit too recent. I think I was trialing copilot in late 2022, and then got turned on to ... codeium/windsurf in late 2023. The years are merging together now. :/
That's silly. Gmail is a wildly different product than it was when it launched, but I guess it doesn't count since the name is the same?
Microsoft may or may not have a "problem" with naming, but if you're going to criticize a product, it's always a good starting place to know what you're criticizing.
The confusion is when I say “I have a terrible time using Copilot, I don’t recommend using it” and someone chimes in with how great their experience with Github Copilot (a completely different product) is, and how I must be “holding it wrong”, when that is not the same Copilot. Microsoft has like 5 different products all using Copilot in the name, and even people in this very comment section are only saying “Copilot”, so it is hard to know which product they are talking about!
I mean, sure. But aside from the fact that everything in AI gets reduced to a single word ("Gemini", "ChatGPT", "Claude") [1], it's clearly not an excuse for misrepresenting the functionality of the product when you're writing a post broadly claiming that their AI products don't work.
Github Copilot is actually a pretty good tool.
[1] Not just AI. This is true for any major software product line, and why subordinate branding exists.
I specifically mention that my experience is with Office 365 Copilot and how terrible that is, and when I bring this up in online discussions, people jump out of the woodwork to talk about how great Github Copilot is. So thank you for demonstrating the exact experience I have every time I mention Copilot :)
GitHub Copilot is available from website https://github.com/copilot together with services like Spark (not available from other places), Spaces, Agents etc.
This absolutely sucks, especially since tool calling sometimes burns tokens really, really fast. Feels like a not-so-gentle nudge toward using their 'official' tooling (read: vscode), even though there was a recent announcement about how GHCP works with opencode: https://github.blog/changelog/2026-01-16-github-copilot-now-...
No mention of it being severely gimped by the context limit in that press release, of course (tbf, why would they lol).
However, if you go back to aider, 128K tokens is a lot, same with web chat... not a total killer, but I wouldn't spend my money on that particular service with there being better options!
I mean, it's definitely not perfect, but consider that Claude is a 200k window (unless you're a beta user with access to more), and it's not the tragedy that you're making it out to be.
My experience is that the models all lose focus long before they fill their context window, so I'm not crying over the lower limit.
If it makes you feel any better, the problem you’re describing is as old as peer review. The authors of a paper only have to get accepted once, and they have a lot more incentive to do so than you do to reject their work as an editor or reviewer.
This is one of the reasons you should never accept a single publication at face value. But this isn’t a bug — it’s part of the algorithm. It’s just that most muggles don’t know how science actually works. Once you read enough papers in an area, you have a good sense of what’s in the norm of the distribution of knowledge, and if some flashy new result comes over the transom, you might be curious, but you’re not going to accept it without a lot more evidence.
This situation is different, because it’s a case where an extremely popular bit of accepted wisdom is both wrong, and the system itself appears to be unwilling to acknowledge the error.
Back when I listened to NPR, I shook my fist at the radio every time Shankar Vedantam came on to explain the latest scientific paper. Whatever was being celebrated, it was surely brand new. Its presentation on Morning Edition gave it the imprimatur of "Proven Science", and I imagined it getting repeated at every office lunch and cocktail party. I never heard a retraction.
Please don't lazily conclude that he's gone crazy because it doesn't align with your prior beliefs. His work on Covid was just as rigorous as anything else he's done, but it's been unfairly villainized by the political left in the USA. If you disagree with his conclusions on a topic, you'd do well to have better reasoning than "the experts said the opposite".
Ioannidis' work during Covid raised him in my esteem. It's rare to see someone in academics who is willing to set their own reputation on fire in search of truth.
Vinay Prasad is an oncologist who made his career calling out many of the FDA’s brazenly stupid oncology drug approvals, so I am shocked - shocked! - that the creator of that process would be unhappy with his leadership.
“Foxes agree that henhouse security changes will lead to hungry animals.” News at 11.
Go watch or listen to Plenary Session, and you'll have direct access to his thoughts. My ability to rehash Prasad's arguments doesn't have any bearing on what I wrote.