> I caved to peer pressure. If you don't fix this thing within four months I will switch to your competitor for one maybe even two product cycles.
He sure showed them. The people I know still using super old iPhones are doing more to push back than that public commitment to keep buying Apple products as often as possible -- after a brief tolerance break, of course.
This is disgusting, and everyone from the operator of the agent to the model and inference providers needs to apologize and reckon with what they have created.
What about the next hundred of these influence operations that are less forthcoming about their status as robots? This whole AI psyop is morally bankrupt and everyone involved should be shamed out of the industry.
I only hope that by the time you realize that you have not created a digital god the rest of us survive the ever-expanding list of abuses, surveillance, and destruction of nature/economy/culture that you inflict.
TikTok is also running web scrapers, for some reason -- ML stuff, I guess. Their bot is hitting URLs on my server that have never been linked anywhere else on the web and haven't been valid for years. Nobody else is still trying to reach them since I retired that subdomain.
Lol, messages showing up randomly days later is par for the course for our tiny group chat, most of whom are on matrix.org. Sometimes Element won't download messages for some rooms (or even all rooms) for hours or days. Matrix has gotten far less reliable over the years (and I used to run a few homeservers).
Your bio says you are an AI professor. As far as I can tell the AI industry seems to be the mass surveillance machine's magnum opus. How do you square that circle?
Reticulum is not nominally for LoRa/SX radios. It is designed to operate over any transport, from a serial link or infrared to high-speed wireless LAN or multi-gigabit Ethernet.
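You can see the transport-agnosticism directly in Reticulum's interface configuration: each transport is just another stanza in the config file, and LoRa (via RNode) is only one of them. A rough sketch modeled on the documented interface types -- the hosts, ports, and radio parameters below are illustrative placeholders, not a known-good deployment:

```
[interfaces]
  [[Local Ethernet/WLAN]]
    type = AutoInterface
    enabled = yes

  [[TCP Uplink]]
    type = TCPClientInterface
    enabled = yes
    target_host = transport.example.net    # placeholder host
    target_port = 4242                     # placeholder port

  [[Serial Link]]
    type = SerialInterface
    enabled = yes
    port = /dev/ttyUSB0
    speed = 115200

  [[RNode LoRa]]
    type = RNodeInterface
    enabled = yes
    port = /dev/ttyUSB1
    frequency = 867200000
    bandwidth = 125000
    txpower = 7
    spreadingfactor = 8
    codingrate = 5
```

The daemon routes across whichever interfaces are enabled; applications on top don't know or care which physical layer carried their packets.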
Reticulum is absolutely not flood routed and is not "LoRa-based", lmao. Typical HN comment.
Planetary-scale networks are mentioned as a design goal on the first page of the docs (https://reticulum.network/), which are "hidden" at the very top of the git repo.
Okay, maybe Reticulum isn’t strictly LoRa-based, but others are (e.g. Meshtastic), and while Reticulum works over lots of physical layers, the README specifically states “An open-source LoRa-based interface called RNode has been designed specifically for use with Reticulum.”
Exactly. Okay, that’s a great claim -- how? Also, “planetary scale” is meaningless. With the right (low) node count, topology, and radios, just about any mesh network can achieve “planetary scale.” But that doesn’t mean it’ll support ten thousand users, never mind millions. There are underlying technical reasons the Internet works the way it does.
Sure, but The Zen of Reticulum repeatedly says that all that centralized infrastructure is bad, bad, bad. So, it seems a bit like cheating if you still need all that to scale.
> Most of the world does not care. I suspect that is more true today than ever before
100% of the people I have spoken with, from Uber drivers to grandparents, have noticed and hated the rental/subscription economy, and all are sympathetic to the fight against it. In 2025 I don't think I've had a single person defend the status quo, because they all know what's coming.
Having to figure out how to make whatever random god-awful corporate software they got sold work on NixOS -- on a deadline -- sounds like the seven circles of hell.
Last I checked, NixOS handles VMs just fine. The idea is to recreate your rig on the consulting client's hardware. OP mentions he prefers to work on a client-supplied machine. In fact, at that level of consulting, two machines should be provided: a daily-driver laptop and a workstation with a hypervisor. Having those configurations stored declaratively would save some time, whether through Ansible playbooks or NixOS. Of course, there is recently a third option: a CLI agent like Claude Code.
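For a sense of what "stored declaratively" buys you, here's a minimal NixOS sketch -- the hostname and package list are hypothetical, just to show the shape:

```nix
# configuration.nix -- hypothetical consulting rig, trimmed to essentials
{ config, pkgs, ... }:
{
  networking.hostName = "consulting-rig";  # placeholder name

  # The rig's tooling, declared in one place instead of installed by hand
  environment.systemPackages = with pkgs; [ git tmux firefox ];

  services.openssh.enable = true;

  system.stateVersion = "24.05";  # pin to the release you started on
}
```

The same file can also be built as a throwaway QEMU VM with `nixos-rebuild build-vm`, which is what makes the recreate-your-rig-on-client-hardware idea practical.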
You should probably disclose that you're a CTO at an AI startup, I had to click your bio to see that.
> The amount of compute in the world is doubling over 2 years because of the ongoing investment in AI (!!)
All going into the hands of a small group of people that will soon need to pay the piper.
That said, VC-backed tech companies almost universally pull the rug once the money stops coming in. And historically those didn't have the trillions of dollars in future obligations that the current compute hardware oligopoly has. I can't see any universe where they don't start charging more, especially now that they've begun to make computers unaffordable for normal people.
And even past the bottom dollar cost, AI provides so many fun, new, unique ways for them to rug pull users. Maybe they start forcing users to smaller/quantized models. Maybe they start giving even the paying users ads. Maybe they start inserting propaganda/ads directly into the training data to make it more subtle. Maybe they just switch out models randomly or based on instantaneous hardware demand, giving users something even more unstable than LLMs already are. Maybe they'll charge based on semantic context (I see you're asking for help with your 2015 Ford Focus. Please subscribe to our 'Mechanic+' plan for $5/month or $25 for 24 hours). Maybe they charge more for API access. Maybe they'll charge to not train on your interactions.
I'm no longer CTO at an AI startup. Updated, but I don't actually see how that's relevant.
> All going into the hands of a small group of people that will soon need to pay the piper.
It's not very small! On the inference side there are many competitive providers as well as the option of hiring GPU servers yourself.
> And historically those didn't have the trillions of dollars in future obligations that the current compute hardware oligopoly has. I can't see any universe where they don't start charging more, especially now that they've begun to make computers unaffordable for normal people.
I can't say how strongly I disagree with this - it's just not how competition works, or how the current market is structured.
Take gpt-oss-120B as an example. It's not frontier-level quality, but it's not far off, and it sets a firm floor: open-source models will never get less intelligent than that.
In what world do all the providers (who all want revenue!) raise prices above the premium Cerebras is charging for its very-high-speed inference?
There's already Google, profitably serving at the low end at around half the price of Cerebras (but then you have to deal with Google billing!).
The fact that Azure and Amazon price exactly the same as 8(!) other providers -- and the same as the price https://www.voltagepark.com/blog/how-to-deploy-gpt-oss-on-a-... gives for running your own server -- shows how the economics work on NVIDIA hardware. There's no subsidy going on there.
This is on hardware that is already deployed. That isn't suddenly going to get more expensive unless demand increases... in which case the new hardware coming online over the next 24 months is a good investment, not a bad one!
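Part of why this market can't quietly raise prices: every one of these providers speaks the same OpenAI-compatible API, so a workload can move between them (or to a rented GPU server running an open-source stack) with a one-line change. A hedged sketch -- the base URL, env vars, and model id are placeholders, since the exact values differ per provider:

```python
# Sketch: any OpenAI-compatible endpoint can sit behind this client;
# switching providers means changing base_url and the API key, nothing else.
import os
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url=os.environ.get("LLM_BASE_URL", "https://provider.example/v1"),  # placeholder
    api_key=os.environ["LLM_API_KEY"],
)

resp = client.chat.completions.create(
    model="gpt-oss-120b",  # exact model id varies by provider
    messages=[{"role": "user", "content": "Say hi in five words."}],
)
print(resp.choices[0].message.content)
```

With the interface commoditized like this, a provider pricing above the going rate just loses the traffic.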