I will not use WhatsApp any more than I can help it. And it already has Meta AI. I might try it once you support other clients.
Are you going to develop all your own integrations or can users contribute? What are some things you envision users doing asynchronously? I haven't felt the need for an asynchronous assistant, probably because I do not find existing models reliable enough. The most async I go is with coding agents, and they need hand-holding too.
Fully understand the WhatsApp part. Do you use any other communicators? Discord, Signal or something else? We are looking for more "interfaces".
When it comes to async, that's exactly what we are trying to "solve". Right now, models are built to expect tool results immediately after the tool calls.
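Roughly, the constraint looks like this. A minimal sketch with the OpenAI-style chat completions API, not our actual code, and `run_tool_somehow` is just a hypothetical stand-in for dispatching the tool:

```python
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "What's on my calendar tomorrow?"}]
tools = [{
    "type": "function",
    "function": {"name": "get_calendar", "parameters": {"type": "object", "properties": {}}},
}]

def run_tool_somehow(call) -> str:
    # Stub: a real setup would dispatch to the named tool and return its output.
    return "[]"

resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
msg = resp.choices[0].message

if msg.tool_calls:
    messages.append(msg)
    for call in msg.tool_calls:
        # The result has to be produced *now* -- there is no built-in way to say
        # "this tool finishes in an hour, keep going and check back later".
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": run_tool_somehow(call),
        })
    resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
```

Every tool call has to be answered with a matching tool message before the model continues, which is why long-running background work doesn't fit the loop as-is.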
For the tooling attached to it, right now we are using Pipedream integrations and are planning to move to an open-source, user-configurable solution so you can set up whatever you want.
So imagine you have a single chat interface that steers a fleet of other agents in the background for code, not by handing off memory but by navigating their tasks in an async way.
For me this cancel-and-restart thing, e.g. in ChatGPT, is so annoying. Plus most chat tools don't even get your second message if you send it before they finish, so you're locked into waiting. In this coyote approach I can just add more context if I forgot something, and it picks it up without cancelling what's already running. It's more like talking to a human.
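The general idea, very roughly: incoming messages land in a queue and get folded into the running agent's context between steps instead of forcing a restart. This is an illustrative sketch, not our real implementation, and `agent_step` is a stub standing in for one model/tool step:

```python
import asyncio

async def agent_step(context: list[dict]) -> tuple[bool, str]:
    # Stub for one model/tool step; real work would happen here.
    await asyncio.sleep(1)
    return len(context) >= 4, f"step done with {len(context)} messages of context"

async def agent_loop(task: str, inbox: asyncio.Queue) -> str:
    context = [{"role": "user", "content": task}]
    while True:
        # Fold in anything the user sent while the previous step was running,
        # instead of cancelling and restarting the whole run.
        while not inbox.empty():
            context.append({"role": "user", "content": inbox.get_nowait()})
        done, output = await agent_step(context)
        context.append({"role": "assistant", "content": output})
        if done:
            return output

async def main() -> None:
    inbox: asyncio.Queue = asyncio.Queue()
    run = asyncio.create_task(agent_loop("refactor the billing module", inbox))
    await asyncio.sleep(0.5)
    await inbox.put("oh, and keep the public API unchanged")  # remembered mid-run
    print(await run)

asyncio.run(main())
```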