I've been using Pi day to day recently for simple, smaller tasks. It's a great harness for smaller-parameter models, since its system prompt is quite a bit shorter than Claude's or Codex's (and it uses a nice, small set of tools by default).
That's interesting; I've found Pi really shines for rapid prototyping. Balancing minimalism and functionality is tricky, but it sounds like they're nailing it with these constraints.
For local models I've been trying it with GLM-4.7-Flash and the new LFM2 24B model. I'm excited to try it with the new Qwen3.5 models that came out today as well.
It's available (with tool parsing, etc.) at https://ollama.com/library/glm-4.7-flash, but it requires Ollama 0.14.3, which is still in pre-release (available from Ollama's GitHub repo).
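If it helps, here's roughly what exercising the tool parsing looks like from the Python client. This is just a sketch: it assumes 0.14.3 is running and the model has been pulled, and the get_weather tool is a made-up example.

    # Minimal sketch: exercising tool parsing via the ollama Python client.
    # Assumes Ollama >= 0.14.3 is running locally and the model has been
    # pulled with `ollama pull glm-4.7-flash`. get_weather is hypothetical.
    import ollama

    def get_weather(city: str) -> str:
        """Toy tool the model can choose to call."""
        return f"Sunny in {city}"

    response = ollama.chat(
        model="glm-4.7-flash",
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
        tools=[get_weather],  # client derives a JSON schema from the signature
    )

    # If the model emitted a tool call, Ollama's parser surfaces it here.
    for call in response.message.tool_calls or []:
        print(call.function.name, call.function.arguments)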
The gpt-oss weights on Ollama are native mxfp4 (the same weights provided by OpenAI). No additional quantization is applied, so let me know if you're seeing any strange results with Ollama.
Most gpt-oss GGUF files online have parts of their weights quantized to q8_0, and we've seen folks get some strange results from these models. If you import one of these into Ollama to run, output quality may decrease.
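If you want to check what you're actually running, you can inspect a GGUF's per-tensor quantization with the gguf package from the llama.cpp project. Quick sketch; the filename is a placeholder, and the exact type names depend on your gguf version.

    # Rough sketch: count the quantization type of each tensor in a GGUF file,
    # using the `gguf` package from llama.cpp (pip install gguf). Requantized
    # uploads will typically report Q8_0 for some tensors instead of mxfp4.
    from collections import Counter
    from gguf import GGUFReader

    reader = GGUFReader("gpt-oss.gguf")
    counts = Counter(t.tensor_type.name for t in reader.tensors)
    for qtype, n in counts.most_common():
        print(f"{qtype}: {n} tensors")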
We did consider building functionality into Ollama that would fetch search results and website contents itself, using a headless browser or similar. However, we had a lot of worries about result quality, and about IP blocking from Ollama generating crawler-like behavior. A hosted API felt like the fastest path to getting results into users' context windows, but we are still exploring the local option. Ideally you'd be able to stay fully local if you want to, even when using capabilities like search.
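For a sense of what the local option means, a bare-bones fetch path could look something like this. Purely illustrative, standard library only; a real version would need robots.txt handling, rate limiting, and much better content extraction.

    # Illustrative only: a fully local page fetch, no hosted API involved.
    # Real-world use would need politeness (robots.txt, rate limits) to avoid
    # the crawler-like behavior mentioned above.
    import urllib.request
    from html.parser import HTMLParser

    class TextExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.parts = []
            self._skip = 0  # depth inside <script>/<style> tags

        def handle_starttag(self, tag, attrs):
            if tag in ("script", "style"):
                self._skip += 1

        def handle_endtag(self, tag):
            if tag in ("script", "style") and self._skip:
                self._skip -= 1

        def handle_data(self, data):
            if not self._skip and data.strip():
                self.parts.append(data.strip())

    def fetch_text(url: str, limit: int = 4000) -> str:
        """Fetch a page and return plain text trimmed to fit a context window."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        parser = TextExtractor()
        parser.feed(html)
        return " ".join(parser.parts)[:limit]

    print(fetch_text("https://example.com"))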
Amazing work. This model feels really good at one-off tasks like summarization and autocomplete. I really love that you released a quantization-aware training (QAT) version on launch day as well, making it even smaller!
Working on adding tool-calling support for Magistral in Ollama. It requires a tokenizer change and also uses a new tool-calling format. Excited to see the results of combining thinking + tool calling!
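Once it lands, I'd expect the flow to look roughly like the existing thinking interface in the Python client. A sketch under that assumption; the model tag and the add() tool are placeholders.

    # Hedged sketch of the combined thinking + tool-calling flow, based on the
    # existing ollama-python interface. Model name and add() are placeholders.
    import ollama

    def add(a: int, b: int) -> int:
        """Toy tool for the model to call."""
        return a + b

    response = ollama.chat(
        model="magistral",
        messages=[{"role": "user", "content": "What is 17 + 25? Use the tool."}],
        tools=[add],
        think=True,  # surfaces the model's reasoning separately
    )

    print("thinking:", response.message.thinking)
    for call in response.message.tool_calls or []:
        print("tool call:", call.function.name, call.function.arguments)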