That is fantastic. I'm building a small macOS SwiftUI client with llama.cpp built in (no server-client model), and it's already so useful and fast with models like OpenHermes chat 7B.
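For anyone curious what "built in" can look like: one minimal way to expose llama.cpp's C API to Swift is a Clang module map over `llama.h`, then calling the C functions directly from Swift. This is just a sketch under the assumption of a recent llama.cpp checkout; the exact header path, link flags, and function signatures (e.g. `llama_backend_init` has changed arity across versions) vary between releases, so check the `llama.h` you actually build against.

```
// module.modulemap — assumed to sit next to llama.h in your Xcode project
module llama {
    header "llama.h"
    link "llama"
    export *
}
```

```swift
// Swift side: no bridging header needed once the module map is visible
// to the compiler (e.g. via SWIFT_INCLUDE_PATHS).
import llama

// Initialize the backend; note this call's signature has varied
// across llama.cpp versions (older releases took a NUMA flag).
llama_backend_init()

// Print what the build was compiled with (AVX, Metal, etc.) —
// a cheap smoke test that the bridge works before loading a model.
print(String(cString: llama_print_system_info()))
```

From there, model loading and token generation go through the same C API; alternatively, llama.cpp ships Swift Package Manager support upstream, which avoids maintaining the module map by hand.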
If this opens it to smaller laptops, wow!
We truly live in crazy times. The rate of improvement in this field is off the charts.
Not sure if this is where your head is, but I think there's a lot of value in integrating LLMs directly into complex software. Jira, Salesforce, maybe K8s - they should all have an integrated LLM that can walk you through how to perform a nuanced task in the software.
IMO, for many real business use cases, the hallucinations are still a big deal. Once we have models that are more reliable, I think it makes sense to go down that path - the AI is the interface to the software.
But until we're there, a system that just provides guidance that the user can validate is a good stepping stone - and one I suspect is immediately feasible!
A beginner tutorial is also not used frequently by users, but that doesn't make it a bad investment. If an LLM can help a lot with getting familiar with the tool, it could be pretty valuable, especially after a UI rework, etc.
That sounds awesome! Can you share any details about how you're working with llama.cpp? Is it just via the Swift <> C bridge? I've toyed with the idea of doing this, and wonder if you have any pointers before I get started.