Hacker News | past | comments | ask | show | jobs | submit | zerocool86's comments

Oh, I agree on Microsoft, but Google is shoving it everywhere as well, and it will only get more intrusive.


The framing assumes cloud-first AI agents as the default caller. But there's another path: local-first AI where the human remains the orchestrator and the model never phones home.

The "humans as tools" model only works if the AI layer is centralized and owned by platforms. If inference runs on hardware you control, you're not callable; you're the one calling.

Been thinking about this a lot: https://www.localghost.ai/reckoning


Hey, interesting project!


The "local models got good, but cloud models got even better" section nails the current paradox. Simon's observation that coding agents need reliable tool calling that local models can't yet deliver is accurate - but it frames the problem purely as a capability gap.

There's a philosophical angle being missed: do we actually want our coding agents making hundreds of tool calls through someone else's infrastructure? The more capable these systems become, the more intimate access they have to our codebases, credentials, and workflows. Every token of context we send to a frontier model is data we've permanently given up control of.

I've been working on something addressing this directly - LocalGhost.ai (https://www.localghost.ai/manifesto) - hardware designed around the premise that "sovereign AI" isn't just about capability parity but about the principle that your AI should be yours. The manifesto articulates why I think this matters beyond the technical arguments.

Simon mentions his next laptop will have 128GB RAM hoping 2026 models close the gap. I'm betting we'll need purpose-built local inference hardware that treats privacy as a first-class constraint, not an afterthought. The YOLO mode section and "normalization of deviance" concerns only strengthen this case - running agents in insecure ways becomes less terrifying when "insecure" means "my local machine" rather than "the cloud plus whoever's listening."

The capability gap will close. The trust gap won't unless we build for it.


Interesting that the industry's answer to "we've made you addicted to screens" is "let us listen to you 24/7 instead." The form factor changes but the extraction model stays the same. Local inference is almost good enough that this tradeoff isn't necessary anymore - been working on that opposite thesis at localghost.ai


The thermostat-as-control metaphor works. Nilu can't fix anything that matters, so she hyperfocuses on the one lock she might pick.

The Tantric Firewalk scene landed: two guys arguing about whose hand crossed the threshold first to avoid winning a prize is genuinely funny.

"To tell you the truth" as her lying tell is a nice touch.

The sticky-note hook got me. I'll read chapter 2.


The article mentions IndieWeb/POSSE but discoverability remains unsolved. I'm working on a pledge system for local-first projects - a /.well-known/freehold.json that crawlers can verify. Projects that break the pledge get delisted publicly. More at localghost.ai/manifesto
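To make the pledge idea concrete, here is a minimal Python sketch of the crawler-side check. Note the `freehold.json` schema is hypothetical: the field names (`name`, `pledge_version`, `principles`, `contact`) are illustrative, not from a published spec.

```python
import json

# Hypothetical required fields for a /.well-known/freehold.json pledge file.
# The real schema (if one is published) may differ; these names are illustrative.
REQUIRED_FIELDS = {"name", "pledge_version", "principles", "contact"}

def validate_freehold(raw: str) -> list[str]:
    """Parse a candidate freehold.json and return a list of problems (empty = valid)."""
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    if not isinstance(doc, dict):
        return ["top-level value must be a JSON object"]
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - doc.keys())]
    if "principles" in doc and not isinstance(doc["principles"], list):
        problems.append("'principles' must be a list of declared principles")
    return problems

sample = json.dumps({
    "name": "example-project",
    "pledge_version": 1,
    "principles": ["local-first", "no-telemetry"],
    "contact": "maintainer@example.org",
})
print(validate_freehold(sample))  # -> []
```

A crawler would fetch `https://<project-domain>/.well-known/freehold.json`, run a check like this, and delist on failure.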


I wrote this over the holidays because I couldn't shake the feeling that we have a narrow window before Apple/Google ship "local" AI that's local in marketing only.

The manifesto covers the philosophical case (cypherpunk roots, enshittification, architectural immunity), but the real urgency is in the companion piece: https://www.localghost.ai/inflection

The TL;DR: personal AI is the final extraction layer — not just what you search for, but how you think. If that data flows to central servers by default, the capture is complete.

Hardware is commoditizing (sub-$200 NPU boards by mid-2026), but the software defaults are being set now.

Full honesty: LocalGhost is a vision, not a product. No working software yet, just architecture and this site. The Freehold Directory is real though — a /.well-known/freehold.json standard for local-first projects to declare their principles and get discovered.

If someone builds this faster and better, great. The goal is that the alternative exists.


Built this over Christmas. LocalGhost is a vision, not a product — there's a repo and this website, nothing else yet.

Thesis: hardware commoditization is nearly done, but Apple/Google will entrench "local" AI with cloud-mandatory features. Once convenient defaults exist, alternatives become irrelevant for most users.

The piece covers extraction economics and why the cognitive layer (the questions you ask an AI) is the most intimate data surface yet.

Also buried some games in the homepage terminal if you're bored.

Looking for a reality check — am I way off?


Your thesis is likely correct on consolidation and cloud tie-in. Your pipeline model is also intriguing. I like the direction and would like to chat more about the registry or how the ghost gets interconnected.


Nice, thanks! I'll keep updating from time to time. Feel free to reach out at info@localghost.ai; happy to have somebody to discuss this with.


You can also get all the data you want for free from: https://min-api.cryptocompare.com/documentation
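For anyone who wants a starting point, a short Python sketch against that documentation. The `/data/price` endpoint with `fsym`/`tsyms` parameters is the documented single-symbol price query, but double-check the docs before depending on it:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Documented single-symbol price endpoint (see min-api.cryptocompare.com/documentation).
BASE = "https://min-api.cryptocompare.com/data/price"

def price_url(fsym: str, tsyms: list[str]) -> str:
    """Build the price query URL: one 'from' symbol, comma-separated 'to' symbols."""
    return f"{BASE}?{urlencode({'fsym': fsym, 'tsyms': ','.join(tsyms)})}"

def fetch_price(fsym: str, tsyms: list[str]) -> dict:
    """Fetch current prices; the response maps each 'to' symbol to a float."""
    with urlopen(price_url(fsym, tsyms)) as resp:
        return json.load(resp)

print(price_url("BTC", ["USD", "EUR"]))
# -> https://min-api.cryptocompare.com/data/price?fsym=BTC&tsyms=USD%2CEUR
```

Calling `fetch_price("BTC", ["USD", "EUR"])` would return something like `{"USD": ..., "EUR": ...}` (values vary, obviously).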


Sorry, but not exactly: free as in free beer, not free as in free speech.

From https://min-api.cryptocompare.com/pricing : "You must credit us with a link if you use our data on your website or app"

Nobody has to credit us in any case: if you use the free API, no attribution needed.

We will soon release a new batch of features to help websites that want to use our data but are less technical and don't know how to work with JSON.

