
Thanks for the reply. I lost you in this part: "Great questions on scale: the whole way we designed our engine is that in the happy path, we actually use very few LLMs. The agent runs deterministically, only checking at various critical spots whether anomalies occurred (if they do, we fall back to computer use to take it home)."

I assume you send a screenshot to Claude for the next action to take. How are you able to avoid that step by working deterministically? What is the deterministic part, and how do you figure it out?



So what I meant is this: the first time you run our Cyberdesk agent, it runs with the computer-use agent. But once that's complete, we cache every exact step it took to successfully complete the task (every click, type, and scroll) and then simply replay those steps the next time.

But during that replay, we bring in smaller LLMs just to keep things in check and see if anything unexpected happened (like a popup). If so, we fall back to computer use to take it home.

Does that make sense? At the end of the day, our agent compiles down to PyAutoGUI, with a smart fallback to the full agent if needed.
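For anyone curious what "compiles down to PyAutoGUI" might look like, here's a minimal sketch of the cache-and-replay loop under stated assumptions: the step schema, `check_anomaly`, and `computer_use_fallback` are hypothetical names for illustration, not Cyberdesk's actual API; only pyautogui is a real dependency.

    # Illustrative sketch only: step format, anomaly check, and fallback
    # hook are hypothetical; pyautogui is the only real library used.
    import json
    import pyautogui


    def replay_cached_steps(cache_path, check_anomaly, computer_use_fallback):
        """Replay recorded steps; hand off to the full agent if something looks off."""
        with open(cache_path) as f:
            steps = json.load(f)  # e.g. [{"action": "click", "x": 120, "y": 340}, ...]

        for i, step in enumerate(steps):
            if step["action"] == "click":
                pyautogui.click(step["x"], step["y"])
            elif step["action"] == "type":
                pyautogui.typewrite(step["text"], interval=0.02)
            elif step["action"] == "scroll":
                pyautogui.scroll(step["amount"])

            # At critical checkpoints, ask a smaller model whether the screen
            # still matches expectations (e.g. an unexpected popup appeared).
            if step.get("checkpoint") and check_anomaly(pyautogui.screenshot()):
                # Fall back to the full computer-use agent to finish the task.
                return computer_use_fallback(remaining_steps=steps[i + 1:])

        return "completed via deterministic replay"

The point of the design is that the happy path is pure deterministic replay (cheap and fast), and the LLM is only consulted at checkpoints or when something drifts from the recorded trace.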


Hi, yes, it makes sense now: cache the steps and replay them. That's a much more efficient strategy than running every step through an LLM each time.




