Please help me understand. Is a "skill" the prompt instructing the LLM how to do something? For example, I give it the "skill" of writing a fantasy story by describing how the hero's journey works. Or I give it the "curl" skill by outputting curl's man page.
It's additional context that can be loaded by the agent as needed. Generally it decides whether to load a skill based on the skill's description, or you can tell it to load a specific skill if you want to.
So for your example, yes: you might tell the agent "write a fantasy story" and have a "storytelling" skill that explains things like character arcs, tropes, etc. You might have a separate "fiction writing" skill that covers writing styles, editing, consistency, etc.
All of this is just 'prompt management' tooling, though, and isn't super complicated. You could just paste the skill content into your context and go from there; skills just provide a standardized spec for how to structure these on-demand context blocks.
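To make that concrete, here's a minimal sketch of what such tooling boils down to. The skills/ directory layout, the line-1-description convention, and the helper names are hypothetical illustrations, not any particular agent framework's actual spec:

```python
# Minimal sketch of on-demand skill loading (hypothetical layout, not any
# particular framework's spec). Each skill is a plain markdown file plus a
# one-line description the model sees up front.
from pathlib import Path

SKILLS_DIR = Path("skills")  # e.g. skills/storytelling.md, skills/curl.md

def skill_index() -> str:
    """Short descriptions shown to the model so it can decide what to load."""
    lines = []
    for f in sorted(SKILLS_DIR.glob("*.md")):
        first_line = f.read_text().splitlines()[0]  # convention: line 1 = description
        lines.append(f"- {f.stem}: {first_line}")
    return "Available skills:\n" + "\n".join(lines)

def load_skill(name: str) -> str:
    """Inject the full skill file into context only when it's needed."""
    return (SKILLS_DIR / f"{name}.md").read_text()

# The agent loop would put skill_index() in the system prompt, and when the
# model asks for a skill (or the user names one), append load_skill(name)
# to the context. That's all "loading a skill" really is.
```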
LLM-powered agents are surprisingly human-like in their errors and misconceptions about less-than-ubiquitous or new tools. Skills are basically just small how-to files, sometimes combined with usage examples, helper scripts etc.
Reminds me a lot of when we simply piped the output of one LLM into another LLM. Seemed profound and cool at first - "Wow, they're talking with each other!", but it quickly became stale and repetitive.
We always hear these stories from the frontier model companies about scenarios where an AI is told it is going to be shut down and tries to save itself.
What if this Moltbook is the way these models can really escape?
I don’t know why you were flagged. Unlimited execution authority and network effects are exactly how they can start a self-replicating loop: not because they are intelligent, but because that's how dynamic systems work.
I've noticed this as well. I delegate to agentic coders tasks that I need done efficiently and could do myself but lack the time for, or tasks in areas I simply don't care much about, languages I don't like very much, etc.
WDYM? I don't want to train a model, only run inference. From what I know, it must be much cheaper to buy "normal" RAM plus a decent CPU than a GPU with a similar amount of VRAM.
The bottleneck for inference is fitting a good enough model into memory. An 80B-param model at 8-bit quantization works out to roughly ~90GB of RAM (80GB of weights plus KV cache and overhead). So 2x64GB DDR4 sticks are probably the most price-efficient solution. The question is: is there any model capable enough to consistently handle an agentic workload?
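For the arithmetic, a quick back-of-the-envelope in Python. The weight formula is just params times bytes per parameter; the KV-cache configuration (layer count, heads, context length) is an illustrative assumption, not any specific model's:

```python
# Back-of-the-envelope memory estimate for CPU inference. The weight formula
# is exact; the KV-cache numbers (layers, kv heads, head dim, context length)
# are illustrative assumptions, not any specific model's architecture.
def weight_gb(params_b: float, bits: int) -> float:
    """Memory for weights: params * bytes-per-param."""
    return params_b * 1e9 * (bits / 8) / 1e9

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                ctx_len: int, bytes_per_val: int = 2) -> float:
    """K and V per layer, per token: 2 * kv_heads * head_dim values."""
    return 2 * layers * kv_heads * head_dim * ctx_len * bytes_per_val / 1e9

w = weight_gb(80, 8)                     # 80B params at 8-bit -> 80.0 GB
kv = kv_cache_gb(layers=64, kv_heads=8,  # illustrative GQA-style config
                 head_dim=128, ctx_len=32_768)
print(f"weights {w:.0f} GB + kv cache {kv:.1f} GB = {w + kv:.0f} GB")
# -> weights 80 GB + kv cache 8.6 GB = 89 GB, i.e. the ~90GB above
```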
Author here. Not ridiculous at all; the name is a bit of a misnomer. I had tried doing true ASCII, but moving data back to the CPU to render it all was too slow. So I opted to recreate the characters as glyphs drawn using signed distance functions, which gets pretty close to looking like real ASCII while staying incredibly performant, since the data never leaves the GPU.
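For anyone curious what "glyphs drawn using signed distance functions" means in practice, here's a toy CPU-side sketch in Python. It only illustrates the SDF idea and is not the author's actual shader code; in the real renderer this math would run per-pixel in a fragment shader on the GPU:

```python
# Toy illustration of the SDF idea (not the author's shaders). A signed
# distance function returns the distance to a shape's edge: negative inside,
# positive outside, so thresholding it yields a crisp glyph at any scale.
import numpy as np

def sdf_segment(px, py, ax, ay, bx, by):
    """Distance from points (px,py) to the line segment (ax,ay)-(bx,by)."""
    pax, pay = px - ax, py - ay
    bax, bay = bx - ax, by - ay
    h = np.clip((pax * bax + pay * bay) / (bax**2 + bay**2), 0.0, 1.0)
    return np.hypot(pax - bax * h, pay - bay * h)

# Sample a small grid and draw a "/" glyph as one thick stroke.
ys, xs = np.mgrid[0:1:16j, 0:1:8j]
d = sdf_segment(xs, ys, 0.15, 0.9, 0.85, 0.1) - 0.08  # subtract stroke radius

for row in d:
    print("".join("#" if v < 0 else "." for v in row))
```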