Images are on ghcr.io. I code on ARM myself and the runner build image is multi-platform, so it should work. Haven't tested in a while though, let me know how it goes!
Wrangler file format: not planned. We're taking a different approach for config, but we intend to be compatible with Cloudflare adapters (SvelteKit, Astro, etc.). The Assets binding already has the same API; we just need to support _routes.json and add static file routing on top of workers, and the data model is ready for it.
For D1: our DB binding is Postgres-based, so the API differs slightly. Same idea, different backend.
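For context, a minimal sketch of the difference in call shape. The D1 side is Cloudflare's documented API; the query() form is a hypothetical illustration of a Postgres-style binding, not our actual API:

```ts
// Sketch contrasting the two call shapes. D1 uses prepared statements with
// "?" binds; a Postgres-backed binding tends toward parameterized SQL with
// $1-style params. The query() shape below is hypothetical.
interface Env {
  DB: any; // injected binding
}

export default {
  async fetch(_req: Request, env: Env): Promise<Response> {
    // D1 (Cloudflare): prepared statement with positional "?" binds
    const row = await env.DB.prepare("SELECT * FROM users WHERE id = ?")
      .bind(1)
      .first();

    // Postgres-style binding (hypothetical shape, not the actual OpenWorkers API):
    // const { rows } = await env.DB.query("SELECT * FROM users WHERE id = $1", [1]);

    return Response.json(row);
  },
};
```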
Hono should just work; it needs a manual build step and a copy-paste for now. We will soon host the OpenWorkers dashboard and API (Hono) directly on the runner (just some plumbing left at this point).
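For anyone trying it today, the manual path is roughly: write a normal Hono app, bundle it to a single ESM file, and paste the output into the worker editor. The esbuild flags below are just one way to do it:

```ts
// src/index.ts - a plain Hono app; Workers-style runtimes call the default
// export's fetch handler, which Hono provides out of the box.
import { Hono } from "hono";

const app = new Hono();

app.get("/", (c) => c.text("hello from the runner"));
app.get("/health", (c) => c.json({ ok: true }));

export default app;
```

Then something like `esbuild src/index.ts --bundle --format=esm --outfile=dist/worker.js` produces a single file you can paste in.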
I think it would be worth keeping D1 compatibility, since SQLite and Postgres have different SQL dialects. Cloudflare has Hyperdrive to keep connections alive to Postgres/other DBs; what D1/libSQL/Turso bring to the table is the ability to run a read/write replica on the machine itself, which can dramatically reduce latency.
This is exactly where we see things heading. The trust model is shifting - code isn't written by humans you trust anymore, it's generated by models that can be poisoned, confused, or just pick the wrong library.
We're thinking about OpenWorkers less as "self-hosted Cloudflare Workers" and more as a containment layer for code you don't fully control. V8 isolates, CPU/memory limits, no filesystem access, network via controlled bindings only.
We're also exploring execution recording - capture all I/O so you can replay and audit exactly what the code did.
Production bug -> replay -> AI fix -> verified -> deployed.
Workers that hit limits (CPU, memory, wall-clock) get terminated cleanly with a clear reason. Exceptions are caught with stack traces (at least they should be lol), and logs stream in real time.
What's next: execution recording. Every invocation captures a trace: request, binding calls, timing. Replay locally or hand it to an AI debugger. No more "works on my machine".
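To make that concrete, here's roughly the shape we have in mind for a trace record (field names below are illustrative, not a final format):

```ts
// Hypothetical shape of one recorded invocation - field names are
// illustrative, not the actual OpenWorkers trace format.
interface BindingCall {
  binding: string;   // e.g. "KV", "DB", "STORAGE"
  method: string;    // e.g. "get", "put", "query"
  args: unknown[];   // arguments as captured at call time
  result: unknown;   // value returned to the worker
  startedAt: number; // ms since invocation start
  durationMs: number;
}

interface InvocationTrace {
  request: { method: string; url: string; headers: Record<string, string>; body?: string };
  bindingCalls: BindingCall[];
  response: { status: number; durationMs: number };
  error?: { message: string; stack?: string };
}
```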
This makes a lot of sense. Recording execution + replay is exactly what’s missing once you move past simple logging.
One thing I've found tricky in similar setups is making sure the trace is captured before side effects happen; otherwise replay can lie to you. If you get that boundary right, the prod → replay → fix → verify loop becomes much more reliable.
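Concretely, the trick that's worked for me is writing the entry (the intent) before the binding call executes, then attaching the result afterwards. A rough sketch, with a made-up recorder interface:

```ts
type Recorder = { push(entry: Record<string, unknown>): void };

// Wrap a binding so every method call is recorded *before* its side effect
// runs; the result is attached afterwards. Names here are hypothetical.
function recordBinding<T extends object>(name: string, binding: T, rec: Recorder): T {
  return new Proxy(binding, {
    get(target, prop, receiver) {
      const value = Reflect.get(target, prop, receiver);
      if (typeof value !== "function") return value;
      return async (...args: unknown[]) => {
        const startedAt = Date.now();
        // write the intent first, so a crash mid-call still leaves evidence
        const entry: Record<string, unknown> = { binding: name, method: String(prop), args, startedAt };
        rec.push(entry);
        const result = await (value as Function).apply(target, args);
        entry.result = result;
        entry.durationMs = Date.now() - startedAt;
        return result;
      };
    },
  });
}
```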
Good use case. For state between invocations, we have KV (key-value with TTL), Storage (S3) and DB bindings (Postgres). Durable Objects not yet, but they're on the roadmap.
Wall-clock timeout is configurable (default 30s), and so are CPU limits. We haven't prioritized long-running tasks or WebSockets yet, but they shouldn't be hard to add.
nice, KV + Postgres covers most of our use cases. the TTL on KV is useful for caching auth tokens between invocations without worrying about cleanup.
for long-running tasks we've been using a queue pattern anyway - worker picks up task, does a chunk, writes state to KV, exits. next invocation picks up where it left off. works around timeout limits and handles retries gracefully. websockets would be nice for real-time feedback but polling works fine for now.
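roughly what one invocation looks like for us (KV method names and the TTL option are my assumptions about the binding, modeled on the Workers-style API):

```ts
// sketch of the chunked-task pattern: load checkpoint from KV, do one bounded
// chunk of work, write the checkpoint back, exit before the wall-clock limit.
// KV.get/put with expirationTtl is assumed to match the Workers-style API.
interface Env {
  KV: {
    get(key: string): Promise<string | null>;
    put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
  };
}

interface TaskState {
  cursor: number;
  done: boolean;
}

const TOTAL_ITEMS = 10_000;

// placeholder for the real work; processes one slice and returns the new cursor
async function processChunk(cursor: number): Promise<number> {
  return Math.min(cursor + 500, TOTAL_ITEMS);
}

export default {
  async fetch(_req: Request, env: Env): Promise<Response> {
    const key = "task:backfill";
    const state: TaskState = JSON.parse(
      (await env.KV.get(key)) ?? '{"cursor":0,"done":false}',
    );
    if (state.done) return Response.json(state);

    state.cursor = await processChunk(state.cursor);
    state.done = state.cursor >= TOTAL_ITEMS;

    // checkpoint with a TTL so an abandoned task cleans itself up
    await env.KV.put(key, JSON.stringify(state), { expirationTtl: 60 * 60 });
    return Response.json(state);
  },
};
```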
will keep an eye on the durable objects progress. that's the main thing missing for stateful agent workflows where you need guaranteed delivery.
The DX is great: simple deployment, no containers, no infra to manage. I build a lot of small weekend projects that I don't want to maintain once shipped. OpenWorkers gives you the same model when you need compliance or data residency.
Thanks for the clarification on CF's V8 patching strategy, that 24h turnaround is impressive and exactly why I point people to Cloudflare when they need production-grade multi-tenant security.
OpenWorkers is really aimed at a different use case: running your own code on your own infra, where the threat model is simpler. Think internal tools, compliance-constrained environments, or developers who just want the Workers DX without the vendor dependency.
Appreciate the work you and the team have done on Workers, it's been the inspiration for this project for years.
Fun fact: I tried K8s early on but found it overkill for my setup, so I stayed on Compose. Will revisit it properly now.