But in brief, the short-term evolution of LLMs is going to involve something like letting it `eval()` some code to take an action as part of a response to a prompt.
A recent paper, Toolformer (https://pub.towardsai.net/exploring-toolformer-meta-ai-new-t...), trains on a small set of hand-chosen tools rather than `eval(<arbitrary code>)`, but hopefully it's clear that it's a very small step from the former to the latter.
I’ve been getting very good results from eval on JS written by GPT. It is surprisingly apt at learning when to query a source like Wolfram Alpha or Wikipedia and when to write an inline function.
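A minimal sketch of that eval loop, purely for illustration: the model output string and both tool functions here are hypothetical stand-ins, since a real setup would call an LLM API and live services like Wolfram Alpha or Wikipedia.

```javascript
// Hypothetical tools the generated code is allowed to call.
const tools = {
  // Stand-in for a Wolfram Alpha query; a real version would hit the API.
  wolframalpha: (query) => (query === "2+2" ? "4" : "unknown"),
  // Stand-in for a Wikipedia summary lookup.
  wikipedia: (topic) => `Summary of ${topic}`,
};

// Pretend this string came back from the model in response to a prompt.
const modelOutput = `tools.wolframalpha("2+2")`;

// Evaluate the generated code with the tools in scope. Using the Function
// constructor instead of bare eval() keeps the accessible scope explicit.
function runModelCode(code, tools) {
  return new Function("tools", `return (${code});`)(tools);
}

const result = runModelCode(modelOutput, tools);
console.log(result); // "4"
```

The same dispatch works whether the model emits a tool call or an inline function body, which is what makes the eval approach so flexible (and so dangerous without sandboxing).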
You can stop it from being recursive by passing the output through a model that is not trained to write JavaScript but is trained to output JSON.
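One way to sketch that guard (the function name and shape are my own, not from the comment): because `JSON.parse` only yields data values, never functions or code, rejecting anything that fails to parse as JSON means nothing executable can flow back into another `eval()` round.

```javascript
// Gate the second model's output: accept only well-formed JSON.
function parseModelJson(output) {
  try {
    // JSON.parse can only produce objects, arrays, strings, numbers,
    // booleans, and null -- never code -- so the result is inert data.
    return { ok: true, value: JSON.parse(output) };
  } catch {
    // Anything that isn't valid JSON (e.g. a snippet of JS) is rejected.
    return { ok: false, value: null };
  }
}

console.log(parseModelJson('{"answer": 4}'));    // accepted as data
console.log(parseModelJson('eval("danger()")')); // rejected, not JSON
```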