Hardware improvements are easier to quantify and progress naturally comes in incremental steps.
Software, however, especially from a UX point of view, is more likely to reach a point where it's more or less done. Beyond that, any improvements are marginal and subjective. What are the large UX teams at Apple going to do if not redesigns for the sake of redesigning? I wish it would happen, but it's hard to imagine Apple shipping an annual OS release without noticeable visual changes.
I agree. To be clearer, that $60 is an estimate for a small configuration: serverless infrastructure to handle 500,000 requests per month, plus storage, including a 20 GB SQL database and 100 GB of object storage to serve video and images. It's better suited to a full application than a simple site. You run the app in a container and only get charged for the requests; the SQL database is persistent, so that costs about $20/month, and object storage with egress is about $10/month.
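Back-of-the-envelope, assuming the remainder of the estimate is the serverless compute itself: roughly $30/month for request processing + $20/month for the persistent SQL database + $10/month for object storage and egress comes to about $60/month.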
Let me describe my setup so you can compare. I use a Contabo VPS for around 5 USD/month to host my Wagtail (Django-based) site. The DB runs on the same box, and since it's SQLite I can back it up externally.
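The backup itself can be as simple as a small script using Python's stdlib online-backup API, with the resulting file synced off the VPS however you like. A rough sketch, with placeholder paths:

```python
# Copy a live SQLite database to a backup file using Python's stdlib
# online-backup API, so the copy stays consistent while the site keeps serving.
# "db.sqlite3" and "backup-db.sqlite3" are placeholder paths.
import sqlite3

src = sqlite3.connect("db.sqlite3")
dst = sqlite3.connect("backup-db.sqlite3")
with dst:
    src.backup(dst)  # page-by-page copy of the source database
src.close()
dst.close()
```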
The VPS probably couldn't handle 0.5M requests a month, but I'm nowhere near that. If I start approaching such numbers I'll consider an upgrade.
Check out Wagtail if you'd like even more batteries included for your site; it was a delight building mine with it.
Thank you for sharing your setup; I will certainly examine it and compare a bit later. I know mine is a bit over the top, but it's the easiest for me to learn, since I live in GCP every day. I certainly don't expect the 0.5M traffic, but that is one of the lower tiers for Cloud Run, GCP's serverless execution service. This is just a PoC to get my hands dirty with the MVT pattern.
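For context, the MVT split in Django looks roughly like this; all names here (Article, article_list, the template path) are made up for illustration:

```python
# models.py -- the Model: describes the data and maps to a database table
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()
    published = models.DateTimeField(auto_now_add=True)

# views.py -- the View: pulls data out of the model and hands it to a template
from django.shortcuts import render

def article_list(request):
    articles = Article.objects.order_by("-published")
    return render(request, "articles/list.html", {"articles": articles})

# urls.py -- routes a URL to the view; the Template is the HTML file
# at templates/articles/list.html that renders the context
from django.urls import path

urlpatterns = [
    path("articles/", article_list, name="article_list"),
]
```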
My similar workflow in Claude Code, when it gets stuck, is to have it consult Gemini, either through the Gemini CLI or the API. It's a surprisingly powerful pattern, because I've found that Gemini is still ahead of Opus in architectural reasoning and in figuring out difficult bugs. https://github.com/raine/consult-llm-mcp
You don't want to run tests after every file change, because that will distract Claude from finishing whatever it's doing and add noise to the context window. Of course the tests will be broken if Claude hasn't finished the full change yet.
Running tests makes the most sense on the Stop hook event, but personally I've found a CLAUDE.md instruction of "Run `just check` after changes" to be effective enough. The Stop hook has the issue that it runs the checks every time Claude stops responding, even when nothing has changed.
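For reference, a Stop hook wired to the same check would look roughly like this in .claude/settings.json (assuming the current hooks schema; `just check` stands in for whatever your project's check command is):

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "just check" }
        ]
      }
    ]
  }
}
```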
Won't the LSP distract Claude too? I am trying to think of ways to make Claude faster at iterating by reducing tool calls. That always seems to be the bottleneck when it's doing tons of back-and-forth.
Exactly. This is why the workflow of consulting Gemini/Codex for the architecture and overall plan, and then having Claude implement the changes, is so powerful.
I've used Alfred for 10+ years at this point. Some colleagues are hyped about Raycast, but to me the pricing model is a joke. Pay (monthly) for AI - how about I bring my own API key? Pay (again, monthly) for unlimited clipboard history - lol. The free plan is "Free, forever". Yeah, until it isn't.
Alfred isn't the shiniest thing anymore, but it has stood the test of time remarkably well, something I value very highly in tools as central to my workflow as Alfred.