Sounds interesting and challenging. There's something similar (not the build part, just the modular aspect of it), inspired by CPAN, called CCAN: https://ccodearchive.net/. Very few people know about it, I believe, and it goes way back. I'm not involved with that project, though. Good luck!
Agree. The VFS is a delight to read. It's a good intro to the kernel pattern of using function pointers to provide a generic API which other functionality can plug into, simply by implementing the appropriate functions. In this case you'll see all the filesystem drivers implement the VFS operations.
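A toy sketch of that pattern, for anyone who hasn't seen it (this is simplified illustration, not the kernel's actual definitions; the real tables such as file_operations and inode_operations are much larger):

    #include <stdio.h>

    /* Generic operations table: the "API" is just a struct of function
     * pointers that each filesystem driver fills in. */
    struct fs_ops {
        int  (*open)(const char *path);
        long (*read)(int fd, void *buf, unsigned long count);
    };

    /* A hypothetical driver implements the operations... */
    static int myfs_open(const char *path) {
        printf("myfs: open %s\n", path);
        return 3; /* pretend file descriptor */
    }

    static long myfs_read(int fd, void *buf, unsigned long count) {
        printf("myfs: read %lu bytes from fd %d\n", count, fd);
        return 0;
    }

    /* ...and exposes them in one table; generic code only ever calls
     * through the table, never the driver functions directly. */
    static const struct fs_ops myfs_ops = { myfs_open, myfs_read };

    int main(void) {
        char buf[64];
        int fd = myfs_ops.open("/tmp/example");
        myfs_ops.read(fd, buf, sizeof buf);
        return 0;
    }

Generic code stays identical no matter how many drivers plug in; adding a filesystem is just filling in another ops table.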
Agree. Being enough for these people isn't enough; they also think we should care what they have to say.
I would not include simonw with the other 3 at all. His opinions are informed by actual use rather than just abstract takes. Simon also releases a lot of utility tools.
These are sales pitches. They are overhyping it. AI has its niche, but they are trying to plug it in everywhere.
Maybe they have a point. Maybe getting people hooked on software tools, so that they're incompetent without them, will pay off.
Hinton's talks are hilarious. He goes from explaining perceptrons to huge claims like "this is what understanding is", "that's how our brain works", "linguists are cranks if they don't agree", and "this proves there is no god", though the last one comes as some weird little anecdote involving a religious migrant worker. He also heavily implies that disagreeing with him is equivalent to having religious beliefs.
In general, the idea with technology is it's cheaper to do things.
It's easier to start the next widget company because building widgets with the technology is cheaper.
It's easier to consume other things because goods are cheaper to make with the tech.
A third option is that the tech enables something altogether new, e.g. television, that starts a new industry.
As far as direct job creation, the third way is the most obvious but probably not the case at the moment. So I guess we're stuck waiting for goods and services made with AI to get cheaper.
Seriously: carpenters, plumbers, and maybe more electricians. All those copywriters, graphic designers, and junior coders are going to have to do something else.
I'm joking, but not entirely. I think a lot of fat is going to get trimmed off the bone in most industries. That may include me, which is a worry.
With this much shoddy code out there, I'm expecting a LOT more testers are going to be needed. This is job creation but not the good kind.
Think of a Tesla Optimus robot grabbing popcorn. Instead of one cashier filling bags of popcorn, you now have a dedicated teleoperator + a sweeper to pick up all the popcorn that's spilled. One job turned into two jobs.
In the macro sense, jobs are jobs; we're just going to see labor move to different places. And AI will hit a limit of usefulness. We're still early in this adventure. Guardrails are coming, but we're going to see some big crashes yet.
It's just basic economics: when something makes the economy more efficient, it doesn't destroy jobs forever; people get new, better jobs in a more efficient economy.
Perhaps AI will finally raise the bar so high that lower-IQ individuals won't be able to find any meaningful employment, but this has never happened before and I doubt it'll happen now. I don't think I've seen a respected economist go on a news broadcast and say AI will lead to mass unemployment.
How does "ai" make "the economy" more efficient exactly?
> people get new, better jobs
Which law dictates that these jobs are better?
I don't see people getting better jobs personally. I see a shitload of people pushed into more and more precarious jobs, with fewer and fewer worker rights and less job security, with stagnating wages despite rising productivity, &c.
Yes, if you look up any video of an economist talking about AI, they'll bring up the industrial revolution and point out how people thought it would bring the end of employment, and how it instead created more and better jobs. The generally accepted viewpoint is that AI will make things more efficient, and thus people will be freed to do other, more important work.
How big is the pool of legit economists who post videos AND know enough about "ai" to talk about it with any kind of authority?
99% of economists couldn't see any of the big downturns coming up to an hour before shit hit the fan. Yet they can all retrospectively upload 12-hour-long videos on YouTube explaining to you how the dot-com or 2008 crises were so obvious you'd have to have been blind not to see them coming...
> and thus people will be freed to do other, more important work.
"freed" is a weird word to employ, given the century+ long race to automation shouldn't we all have been freed by now ? Why do so many people work min wage or other type of precarious jobs ? Why will I retire later than my parents and grand parents ?
And what do you mean by "more important"? Most jobs are bullshit and workers know it; they don't want important jobs, they want stable jobs that pay enough to live on, and "ai" is going to do anything but improve that.
> and thus people will be freed to do other, more important work
In a post-labor-scarcity world, there is no longer any "other, more important work."
Which means there's no basis for paying labor currency, which means there's no way for labor to buy things, which means we need a new method to ensure people can get things.
Seeing as there's no precedent for a post-scarcity world, and no reason to think such a thing is possible or desirable, I'm not sure how to discuss the topic in a serious fashion. As long as people want to keep improving society, it will require resources, which will produce scarcity.
If you commoditize both thinking and action, even at an average human capability level, then I'd argue that would be a post-scarcity world. At least as far as working humans are concerned.
Yeah, lower-IQ people can use AI too... it might even close the gap with high-IQ people, depending on their talents, what they learn, what they apply it to, and so on.
So you're saying that members of Congress can finally catch up to the rest of America with technological help? Although that might be true, they're scared of computers. They only recently mastered the fax machine, but can't figure out how to stop the blinking time on their microwave.
> high lower-IQ individuals won't be able to find any meaningful employment, but this has never happened in before and I doubt it'll happen again
"Meaningful" is a bit weasely here. If a skilled factory worker had their job offshored in the past and wound up employed at Walmart after, they did not find "meaningful" work
> it doesn't destroy jobs forever, people get new, better jobs in a more efficient economy.
That still doesn't explain how new jobs are created. A more efficient economy doesn't mean more jobs. You could displace a thousand workers and create a single new, better job, and that means nothing.
For a simple model, let's say you hire programmers for three reasons:
1. Because you have (X) work to get done to run your business. Once that work is done, there is no more work to do.
2. Because they get work done that makes more money than you pay them, but with diminishing marginal returns. So the first programmer is worth 20x their salary, the 20th is worth 1.01x their salary.
3. Because you have some new idea to build, and have enough capital to gamble on it. If it succeeds before you run out of money, you'll revert to (1) or (2).
Let's assume AI comes along that means a programmer can do 4x the work. If most programmers are in the first bucket, then you only need 1/4 as many programmers and most will be fired.
If most programmers are in the second bucket, then suddenly there's _much_ more stuff that can be built (and money made) per-programmer. So businesses will be incentivized to hire many more programmers.
For programmers in the third bucket, our AI makes the idea more likely to get built in time, thus upping the odds of success a little.
How you think the market is structured determines how you think AI will affect job creation and destruction.
> So the first programmer is worth 20x their salary, the 20th is worth 1.01x their salary.
> If most programmers are in the second bucket, then suddenly there's _much_ more stuff that can be built (and money made) per-programmer. So businesses will be incentivized to hire many more programmers.
How do these two reconcile? If hiring more programmers has diminishing returns, why would a business hire more?
Because you've moved out the frontier of who's making you money. Now the 20th programmer is still worth 5x their salary, and you hire up to the 30th programmer to hit 1.01.
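Roughly, in code (the 20x/1.01x endpoints and the 4x multiplier come from the comments above; the geometric decay shape is just an assumption for illustration, so the exact break-even point lands a bit differently than the rough 5x/30th-hire figures):

    #include <math.h>
    #include <stdio.h>

    /* Marginal value of the n-th programmer, as a multiple of salary.
     * Assumed shape: geometric decay calibrated so hire 1 is worth 20x
     * and hire 20 is worth ~1.01x (the figures from the comment). */
    static double marginal_value(int n, double productivity) {
        double decay = pow(1.01 / 20.0, 1.0 / 19.0); /* per-hire decay */
        return productivity * 20.0 * pow(decay, n - 1);
    }

    int main(void) {
        /* Without AI, hiring stops around the 20th programmer. */
        printf("no AI, 20th hire: %.2fx salary\n", marginal_value(20, 1.0));

        /* With a 4x productivity boost, the 20th hire is still well
         * above break-even, so the frontier moves out. */
        printf("4x AI, 20th hire: %.2fx salary\n", marginal_value(20, 4.0));

        /* Find where the 4x curve drops back to ~1x salary. */
        int n = 1;
        while (marginal_value(n, 4.0) > 1.01) n++;
        printf("4x AI: hires stay worthwhile through #%d\n", n - 1);
        return 0;
    }

Under that assumed curve, the same decay that cut you off at 20 hires now keeps hiring profitable well past 20, which is the whole reconciliation.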
I'm not sure if you're being serious, but it was just poor phrasing. It really should have read "AI may raise the bar higher, such that low-IQ individuals ...".
That said, I'm sort of assuming simple tasks will get done with AI, which isn't a given and is something I could easily be wrong about. AI could easily just benefit everyone, as has been the case for every prior technological advancement in human history.
Probably, if the capital owners who have automated away the need for intellectual labor want to take advantage of their increased income and the increasingly cheap, desperate labor, they might hire back some of the fired white-collar, AI-replaced folks to give them the Roman villa treatment (hand-feeding them grapes, performing plays and theater at their homes, ever more elaborate massage/grooming/pampering, or the more illicit variety).
One team to redo the work to double-check the right answer.
The second team to reconcile the right answer with the AI result.
I'm not even joking... I've been extremely embarrassed professionally once by an AI result. Now I check its output so often that I might as well just do the work I asked it to do.
No doubt, for quick questions and rough ideas an LLM is the bomb.
Anything that contains drudgery that nobody wants to do. AI is automating away all the cool creative jobs, leaving only the garbage ones. Once robotics catches up, those garbage jobs will be gone as well. Then humanity implodes within two generations.
I wouldn't worry about it. Within two generations, the remaining humans will be mainly concerned with surviving in a wrecked ecosystem as best they can.
I turned to Academic Torrents, a platform widely used by researchers to share datasets. There, I downloaded a large NSFW dataset often cited in AI research. I unzipped the file in my Google Drive to begin preprocessing. A few days later, my entire Google account was banned — without warning, without explanation, and without any clear path to appeal.
Because in research circles, it is a good idea — or at least standard practice. Academic Torrents is a public data-sharing platform run for researchers, and the dataset is cited in peer-reviewed AI papers and used in university projects.
I wasn’t browsing porn — I was benchmarking an on-device NSFW detection model. I never reviewed every file; my workflow was automated preprocessing. The only “mistake” was unzipping it in Google Drive, not realizing Google scans those files automatically and can flag them without context.
If Google truly cares about stopping harmful content, why not give researchers the exact file names so they can verify and have dataset maintainers remove it? That’s how you improve the data and prevent future issues — assuming it was a violation at all and not a false positive.
The real issue is that whether it’s for research, unintentional, or a false positive, there’s no fair process outside of Google to resolve it. Instead of an immediate, irreversible account ban, they could quarantine the suspected files while preserving access to the rest of the account. That would protect both children and innocent users.
I'd like to share other historic projects that I had a hard time searching for; they seem to be gone from most search engines (at the time I searched for them, anyway):
https://archives.darenet.org/ (not available)
https://arsiv.behroozwolf.net/index.php (it seems to be working)
Those are software repositories for a lot of IRC-related projects, from clients to servers, bots, etc.
https://web.archive.org/web/*/https://www.kegel.com/c10k.htm...