
The solution that worked great for me: do not use JSON for GPT-to-agent communication. Use comma-separated key=value pairs, or something to that effect.

Then have another pure code layer to parse that into structured JSON.

I think it’s the JSON syntax (with curly braces) that does it in. So YAML or TOML might work just as well, but I haven’t tried that.
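Something like this for the parsing layer (the comma delimiter and the type-coercion rules here are my own assumptions, nothing the model guarantees):

    // Parse "name=Alice, age=30, active=true" into a plain object,
    // which you can then JSON.stringify or validate downstream.
    // Assumes no commas or equals signs inside values.
    function parseKeyValue(text: string): Record<string, string | number | boolean> {
      const result: Record<string, string | number | boolean> = {};
      for (const pair of text.split(",")) {
        const idx = pair.indexOf("=");
        if (idx === -1) continue; // skip malformed fragments
        const key = pair.slice(0, idx).trim();
        const raw = pair.slice(idx + 1).trim();
        // Best-effort coercion; everything else stays a string.
        if (raw === "true" || raw === "false") result[key] = raw === "true";
        else if (raw !== "" && !Number.isNaN(Number(raw))) result[key] = Number(raw);
        else result[key] = raw;
      }
      return result;
    }

    // parseKeyValue("name=Alice, age=30, active=true")
    // => { name: "Alice", age: 30, active: true }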



Coincidentally, I just published this JS library[1] over the weekend that helps prompt LLMs to return typed JSON data and validates it for you. Would love feedback on it if this is something people here are interested in. Haven’t played around with the new API yet but I think this is super exciting stuff!

[1] https://github.com/jacobsimon/prompting


Looks promising! Do you retry when the returned JSON is invalid? Personally, I used io-ts for parsing, and GPT seems able to correct itself easily when confronted with a well-formed error message.
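Roughly what my loop looks like; complete() is a stand-in for whatever LLM call you're making, and the retry wording is just an example:

    import * as t from "io-ts";
    import { isLeft } from "fp-ts/Either";
    import { PathReporter } from "io-ts/PathReporter";

    // Example codec; swap in whatever shape you expect back.
    const Person = t.type({ name: t.string, age: t.number });

    // Hypothetical LLM call: prompt in, raw text out.
    declare function complete(prompt: string): Promise<string>;

    async function askTyped(prompt: string, retries = 2): Promise<t.TypeOf<typeof Person>> {
      let lastError = "";
      for (let attempt = 0; attempt <= retries; attempt++) {
        const feedback = lastError
          ? `\nYour previous answer was invalid: ${lastError}\nPlease return corrected JSON only.`
          : "";
        const raw = await complete(prompt + feedback);
        try {
          const decoded = Person.decode(JSON.parse(raw));
          if (!isLeft(decoded)) return decoded.right;
          // io-ts error reports are specific enough for the model to fix itself.
          lastError = PathReporter.report(decoded).join("; ");
        } catch (e) {
          lastError = `JSON.parse failed: ${String(e)}`;
        }
      }
      throw new Error(`Still invalid after ${retries + 1} attempts: ${lastError}`);
    }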


Great idea, I was going to add basic retries but didn’t think to include the error message.

Any other features you’d expect in a prompt builder like this? I’m tempted to add lots of other utility methods like classify(), summarize(), language(), etc.


It's harder to form a tree with key=value pairs. I also tried the relational route, but it would always mess up the cardinality (one person should have 0 to n friends, but a person has a single birth date).
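In TypeScript terms, the distinction it kept losing is roughly this (hypothetical Person shape):

    // The cardinality the flat encoding kept mangling:
    interface Person {
      name: string;
      birthDate: string;  // exactly one
      friends: Person[];  // zero or more
    }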


You could flatten it using namespaced keys. E.g.

    {
      parent1: { child1: value }
    }
Becomes one of the following:

    parent1/child1=value
    parent1_child1=value
    parent1.child1=value
...you get the idea.
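A rough sketch of that flattening step, using the dot separator (arrays are treated as leaves here; you'd need an index scheme like parent1.0.child1 to handle them properly):

    // Flatten nested objects into dot-namespaced keys:
    // flatten({ parent1: { child1: "value" } }) => { "parent1.child1": "value" }
    function flatten(obj: Record<string, unknown>, prefix = ""): Record<string, string> {
      const out: Record<string, string> = {};
      for (const [key, value] of Object.entries(obj)) {
        const path = prefix ? `${prefix}.${key}` : key;
        if (value !== null && typeof value === "object" && !Array.isArray(value)) {
          Object.assign(out, flatten(value as Record<string, unknown>, path));
        } else {
          out[path] = String(value); // arrays and primitives become leaf strings
        }
      }
      return out;
    }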


Isn't JSON also harder to stream? Maybe I'm overthinking this.



