Balance Buckets helps you quickly answer the question: “How much is safe to spend right now?”
You set up a few buckets, then drop in your current balance. It immediately shows you what’s left. It’s local-first (just localStorage), no bank login, no account linking, and no transaction-import rabbit hole. The goal is clarity in under a minute, not another finance app that demands setup overhead.
I built Balance Buckets because most small business bank accounts don't have a buckets or envelope-savings feature. I wanted a dead-simple tool that helps me see where my money is. Define buckets (fixed dollars or percents), track what’s funded vs underfunded.
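The bucket math described above can be sketched roughly like this: each bucket is either a fixed-dollar target or a percent of the balance, buckets are funded in order until the money runs out, and whatever remains is "safe to spend". The function name, bucket shape, and in-order funding rule are my assumptions for illustration, not the app's actual code.

```python
def allocate(balance, buckets):
    """Fill each bucket from `balance`; return (allocations, safe_to_spend).

    Each bucket is a dict like {"name": ..., "kind": "fixed", "amount": 3000}
    or {"name": ..., "kind": "percent", "amount": 25} (percent of balance).
    """
    remaining = balance
    allocations = []
    for b in buckets:
        # Percent buckets are computed against the full starting balance.
        target = b["amount"] if b["kind"] == "fixed" else balance * b["amount"] / 100
        funded = min(target, max(remaining, 0))  # partial funding when money runs short
        allocations.append({"name": b["name"], "target": target,
                            "funded": funded, "underfunded": target - funded})
        remaining -= funded
    return allocations, max(remaining, 0)

buckets = [
    {"name": "Taxes", "kind": "percent", "amount": 25},
    {"name": "Payroll", "kind": "fixed", "amount": 3000},
]
allocs, safe = allocate(10_000, buckets)
print(safe)  # 2500 to taxes + 3000 to payroll leaves 4500.0 safe to spend
```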
I recently had an entire meal at Chili’s comped by the manager, because I waited an hour for food. I guess their system flagged it, or they just noticed, because I didn’t complain. I was hanging with my grandson.
I tipped on the full amount but we had to get the manager again to figure out how. I was going to Venmo her but the manager just sent the $0.00 bill to the table.
I also cut my own hair, but sometimes I’m lazy and just hit up the barber shop.
She charges me $15! I tip $25 on top and it’s still a cheap haircut.
My haircut has to be one of the simplest around, but 9 out of 10 stylists will leave me fixing it myself later. Once I paid $50+tip for the same cut at a swanky joint and STILL went home and fixed it. She doesn’t know what she’s worth.
I've been working on Peen, a CLI that lets local Ollama models call tools effectively. It’s quite amateur, but I’ve been surprised how a few hours spent on prompting, plus code to handle responses, can improve the outputs of small local models.
Current LLMs use special tokens for tool calls and are thoroughly trained for that, nearing 100% correctness these days, and allowing multiple tool calls per response. That's hard to beat with custom tool calls. Even older 80B models struggle with custom tools.
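For context, "custom tool calls" here means what you do when a model has no native tool-call tokens: prompt it to emit a tagged JSON block and parse that back out of free text, tolerating the malformed JSON small models often produce. The `<tool>` tag convention below is my invention for illustration, not Peen's actual format.

```python
import json
import re

# Match a JSON object wrapped in a <tool>...</tool> block anywhere in the reply.
TOOL_RE = re.compile(r"<tool>\s*(\{.*?\})\s*</tool>", re.DOTALL)

def extract_tool_calls(response_text):
    """Return a list of {"name": ..., ...} dicts; silently skip bad JSON."""
    calls = []
    for match in TOOL_RE.finditer(response_text):
        try:
            call = json.loads(match.group(1))
        except json.JSONDecodeError:
            continue  # small models often emit broken JSON; drop and move on
        if isinstance(call, dict) and "name" in call:
            calls.append(call)
    return calls

reply = 'Let me check the directory.\n<tool>{"name": "ls", "args": {"path": "."}}</tool>'
print(extract_tool_calls(reply))  # [{'name': 'ls', 'args': {'path': '.'}}]
```

This is exactly the fragile part the parent comment is pointing at: a trained special-token format never needs a regex-and-retry layer like this.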
Peen is an experimental “Claude Code”-style CLI that talks to Ollama over HTTP and can run shell commands, so tiny/local/free models can inspect and modify projects.
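The shell-command side of a CLI like that can be sketched as a single tool function: run the command, capture output, and hand a truncated result back to the model so its context doesn't blow up. The timeout, truncation limit, and return shape are my assumptions, not Peen's implementation.

```python
import subprocess

def run_shell(command, timeout=30, max_bytes=4000):
    """Run `command` in a shell and return a small result dict for the model."""
    try:
        proc = subprocess.run(command, shell=True, capture_output=True,
                              text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return {"exit_code": -1, "output": f"timed out after {timeout}s"}
    # Truncate so a chatty command can't flood a small model's context window.
    output = (proc.stdout + proc.stderr)[:max_bytes]
    return {"exit_code": proc.returncode, "output": output}

print(run_shell("echo hello"))  # {'exit_code': 0, 'output': 'hello\n'}
```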
If you're talking about the async agent described in the post (already regretting calling it that; let's call it an orchestrator agent instead), it looks like https://code.claude.com/docs/en/agent-teams is trying to achieve that.
Opus[1m] on the API with teams is very expensive but very interesting to play with. If you're willing to burn $100 playing with what "state of the art" looks like, I suspect this is it.
That is my experience from a year ago but I no longer feel that way. I write a few instructions, guide an agent to create a plan, and rarely touch the code myself. If I don’t like something, I ask the agent to fix it.
Agreed, there was a huge step change with Claude Code + Opus 4.5 (maybe 4.6 is even better?). Anyone whose impressions are based on earlier models should probably try the newest stuff and see if it changes their mind.