> These work well with local LLMs that are powerful enough to run a coding agent environment with a decent amount of context over longer loops.
That's actually super interesting; maybe I'll try to investigate and find the minimum requirements, because as cool as they seem, personalized 'skills' might be a more useful application of AI overall.
Nice article, and thanks for answering.
Edit: My thinking is that consumer-grade hardware could be good enough to run this soon.
The OpenAI GPT OSS models can drive Codex CLI, so they should be able to do this.
I have high hopes for Mistral's Devstral 2, but I've not run it locally yet.
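If you want to sanity-check a local model before pointing a coding agent at it, here's a rough sketch of hitting a locally served model through an OpenAI-compatible endpoint. It assumes Ollama is serving on its default port and that you've pulled a gpt-oss model; the model tag and prompt are just placeholders, so adjust for your setup.

```python
# Rough sketch: confirm a locally served model responds over an
# OpenAI-compatible endpoint before wiring it into a coding agent.
# Assumes Ollama's OpenAI-compatible API at http://localhost:11434/v1
# and a pulled gpt-oss model (tag is an assumption; use whatever you have).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    api_key="ollama",  # ignored by Ollama, but the client requires a value
)

response = client.chat.completions.create(
    model="gpt-oss:20b",  # assumption: the 20B gpt-oss tag on Ollama
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
)

print(response.choices[0].message.content)
```

If that round-trip works with a decent context window, the same endpoint is roughly what agent tools expect, so it's a cheap way to probe the "minimum requirements" question on consumer hardware.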