Webhooks are the part where most agent-built auth falls apart. Here's how Corral handles it:
The server-express.ts template generates the webhook route with the raw body parser before express.json() (Stripe requires the raw body for signature verification — agents almost always get this wrong). The route handles checkout.session.completed, customer.subscription.updated, and customer.subscription.deleted events and auto-updates the user's plan in your database.
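To make the "raw body" point concrete: Stripe signs the exact bytes it sends (HMAC-SHA256 over `${timestamp}.${rawBody}` from the `Stripe-Signature` header), so if `express.json()` has already parsed and re-serialized the payload, even a whitespace change breaks verification. This is a minimal sketch of the check using only Node's stdlib, not Corral's actual template code:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Stripe's `Stripe-Signature` header looks like: "t=<timestamp>,v1=<hex hmac>".
// The signed payload is `${t}.${rawBody}` -- the raw bytes, not re-encoded JSON.
function verifyStripeSignature(rawBody: string, sigHeader: string, secret: string): boolean {
  const parts = Object.fromEntries(
    sigHeader.split(",").map((kv) => kv.split("=") as [string, string])
  );
  const expected = createHmac("sha256", secret)
    .update(`${parts.t}.${rawBody}`)
    .digest("hex");
  const given = parts.v1 ?? "";
  // timingSafeEqual throws on length mismatch, so guard first.
  if (given.length !== expected.length) return false;
  return timingSafeEqual(Buffer.from(given), Buffer.from(expected));
}
```

The failure mode is easy to see: verify the original raw body and it passes; verify `JSON.stringify(JSON.parse(rawBody))` and it fails whenever the serialization differs by a single byte.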
So when your agent runs corral init, the webhook endpoint is already in your server at /api/corral/webhook, with Stripe signature verification wired in. Your agent just needs to:
corral stripe sync — creates the products/prices in Stripe
Set STRIPE_WEBHOOK_SECRET in .env
For local dev: stripe listen --forward-to localhost:3000/api/corral/webhook
That's it. The agent doesn't have to figure out raw body parsing, event routing, or idempotency — the template handles all of it. And since corral doctor checks for the webhook secret in your env, the agent gets told if it's missing.
The worst Stripe webhook bugs I found during testing were (1) express.json() parsing the body before the webhook route sees it, and (2) agents putting the webhook route after auth middleware that rejects unsigned requests. Both are baked into the template ordering now.
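The idempotency piece matters because Stripe retries deliveries, so the same event can arrive more than once. Here's a hypothetical sketch of the pattern (not Corral's actual code; a real template would persist seen event IDs in the database rather than an in-memory Set):

```typescript
// Illustrative only: dedupe Stripe webhook deliveries by event id, then route
// the three subscription-lifecycle event types to a plan-update callback.
type PlanUpdate = { userId: string; plan: string };

const processedEventIds = new Set<string>();

function handleWebhookEvent(
  event: { id: string; type: string; data: PlanUpdate },
  applyUpdate: (u: PlanUpdate) => void
): "applied" | "skipped" {
  if (processedEventIds.has(event.id)) return "skipped"; // retry of a delivered event
  processedEventIds.add(event.id);
  if (
    event.type === "checkout.session.completed" ||
    event.type === "customer.subscription.updated" ||
    event.type === "customer.subscription.deleted"
  ) {
    applyUpdate(event.data);
    return "applied";
  }
  return "skipped"; // unhandled event types are acknowledged but ignored
}
```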
Thanks! On auth providers — Corral already supports 9 OAuth providers out of the box (Google, GitHub, Apple, Discord, Microsoft, Twitter, Facebook, GitLab, LinkedIn) plus email/password, magic links, and email OTP. Adding a new one is one command:
corral add provider github
Under the hood, Corral is built on Better Auth, which has a plugin architecture. Any Better Auth plugin works with Corral — so if someone builds a provider plugin for Better Auth, it automatically works here too. We're not reinventing auth crypto, just making it agent-installable.
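For context, a Better Auth social provider is just a config entry — roughly this shape, per Better Auth's docs (treat this as a sketch of the underlying config, not what `corral add provider` literally emits):

```typescript
import { betterAuth } from "better-auth";

// Config fragment: enabling GitHub OAuth in a Better Auth instance.
// Env var names are conventional, not prescribed.
export const auth = betterAuth({
  socialProviders: {
    github: {
      clientId: process.env.GITHUB_CLIENT_ID!,
      clientSecret: process.env.GITHUB_CLIENT_SECRET!,
    },
  },
});
```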
On the crypto payments front — that's actually a great use case for Corral's plugin model. The billing layer is modular (Stripe today, but the gating/metering layer doesn't care where the payment event comes from). A BTC/USDC payment plugin that fires the same "user upgraded to plan X" event would slot right in. Interesting idea.
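A hypothetical sketch of what "slot right in" means: the gating layer consumes a provider-agnostic plan-change event, so a crypto processor only has to emit the same shape the Stripe handler does. All names here are illustrative, not Corral's actual interfaces:

```typescript
// Illustrative plugin contract: any payment source emits the same event.
type PlanChangedEvent = { userId: string; plan: string; source: "stripe" | "crypto" };

type PlanStore = Map<string, string>; // userId -> current plan

function applyPlanChange(store: PlanStore, event: PlanChangedEvent): void {
  // The gating/metering layer doesn't care where the payment came from.
  store.set(event.userId, event.plan);
}
```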
It runs 100% locally, is easy to deploy, and wires right into your app. Even cooler: if you have a JS backend, there's no extra server; if you have Python, Go, Ruby, etc., your agent creates a tiny sidecar so you don't need an extra container.
The split model leaves too many holes to be really useful for the community. When we add things like authentication, we'll ship the plugins (e.g., an Okta integration for enterprises). We'll do our best to maintain all of them, but if there are 30 different auth providers we'll have to rely on the community to maintain the smaller ones. Enterprises will pay us to ENSURE everything stays up to date, safe, etc.
The support + services model has been proven by large and small companies alike — and it's one that will survive the coming AI contraction.
We're still Rownd (https://rownd.com), but we see the writing on the wall. SaaS software that helps with "hard to code" problems is going the way of the dodo.
What used to take a few weeks and was hard to maintain can be done with Codex in the background. We're still bringing in decent revenue and have no plans to sunset it; we're just not investing in it.
We all have IBM backgrounds - not sexy, but we are good at running complex software in customer datacenters and in their clouds. AI is going to have to run locally to extract full value from regulated industries.
We are using a services + support model, likely going vertical (legal, healthcare, and we had some good momentum in the US Gov until 1 October :)).
Appreciate that — and totally agree. The “who cares / who pays” question is exactly why this hasn’t scaled before.
Our bet is that the timing’s finally right: local inference, smaller and more powerful open models (Qwen, Granite, Deepseek), and enterprise appetite for control have all converged. We’re working with large enterprises (especially in regulated industries) where innovation teams need to build and run AI systems internally, across mixed or disconnected environments.
That’s the wedge — not another SaaS, but a reproducible, ownable AI layer that can actually move between cloud, edge, and air-gapped. Just reach out, no intro needed - robert @ llamafarm.dev
Funny you bring it up. We shipped Vulkan support TODAY through a tight integration with Lemonade (https://lemonade-server.ai).
We now support AMD, Intel, CPU, and CUDA/NVIDIA.
Hit me up if you want a walkthrough — this is in dev right now (you have to pull down the repo to run it), but we'll ship it as part of our next release.
Our business model is not to compete in open source; we're providing this to the community because it's the right thing to do and as a signal to enterprises that it reduces risk.
Our goal is to target large enterprises with services and support, leveraging deep channel partnerships. We do ship agents as part of the project, and they will be a critical part of the services model going forward.