They had official training on how to use Copilot/ChatGPT and some other tools, plus security and safety training and so on. This is not a case of people deciding to use whatever feature was there from MS by default.
> This pattern has already played out in chess and Go. For a few years, a skilled Go player working in collaboration with a Go AI could outcompete both computers and humans at Go. But that era didn't last. Now computers can play Go at superhuman levels. Our skills are no longer required. I predict programming will follow the same trajectory.
Both of those are fixed, unchanging, closed, full-information games. The real world is very much not that.
Though geeks absolutely love raving about Go and especially chess.
This is like timing the stock market. Sure, share prices seem to go up over time, but we don't really know when they'll go up or down, or how long they'll stay at a given level.
I don't buy the whole "LLMs will be magic in 6 months, look at how much they've progressed in the past 6 months" argument. Maybe they'll keep progressing that fast, maybe they won't.
I’m not claiming I know the exact timing. I’m just seeing a trend line: GPT-3 to 3.5 to 4 to 5, Codex and now Claude. The models are getting better at programming much faster than I am, and their skill at programming doesn’t seem to be levelling out yet - at least not as far as I can see.
If this trend continues, the models will be better than me in less than a decade. Unless progress stops - but I don’t see any reason to think that would happen.
That would require accurate validation of said documents, which is extremely hard even now. Pointing a million PDF-spewing LLM machine guns at current validation pipelines will not end well, especially since LLMs are inherently unreliable.
This is lost on people. A 98%-accurate automation is useful if you can programmatically identify the 2% of cases that need human review. If you can’t, and it matters, then every case needs human review.
So you lose a lot of the benefit to the time sink, and since people’s eyes tend to glaze over when the correction rate is low, you may still miss the 2% anyway.
This is going to put a stop to a lot of ideas that sound reasonable on paper.
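To make that concrete, here's a minimal sketch of the "programmatically identify the 2%" idea as confidence-threshold triage. All the names, the 0.98 cutoff, and the existence of a usable per-case confidence score are illustrative assumptions, not anyone's actual pipeline:

    from dataclasses import dataclass

    @dataclass
    class Prediction:
        case_id: int
        label: str
        confidence: float  # model's self-reported confidence in [0, 1]

    def triage(predictions, threshold=0.98):
        """Split predictions into an auto-accept queue and a human-review queue."""
        auto, review = [], []
        for p in predictions:
            # Route anything below the cutoff to a human; everything else ships.
            (auto if p.confidence >= threshold else review).append(p)
        return auto, review

    preds = [
        Prediction(1, "invoice", 0.99),
        Prediction(2, "invoice", 0.42),   # clearly needs review
        Prediction(3, "receipt", 0.995),
    ]
    auto, review = triage(preds)
    print(f"auto-accepted: {len(auto)}, flagged for review: {len(review)}")

And that's the catch: this only buys you anything if the confidence score actually correlates with the real errors. If it doesn't, either everything lands in the review queue or, worse, the bad 2% sails through the auto-accept path.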