
Every API should have its own validation, so I don't even see this as a problem.

What is returned from OpenAI should be treated like any other user input.
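
A rough sketch of what I mean, in Python (TransferRequest and its fields are made up, and I'm assuming pydantic; the point is just that the model's output goes through the same gate as a form submission):

    # Sketch: validate model output exactly like untrusted user input.
    # TransferRequest and its fields are hypothetical, not a real API.
    from pydantic import BaseModel, ValidationError, field_validator

    class TransferRequest(BaseModel):
        account_id: str
        amount_cents: int

        @field_validator("amount_cents")
        @classmethod
        def must_be_positive(cls, v: int) -> int:
            if v <= 0:
                raise ValueError("amount must be positive")
            return v

    def parse_llm_output(raw: str) -> TransferRequest | None:
        try:
            return TransferRequest.model_validate_json(raw)
        except ValidationError:
            return None  # reject, log, or ask the model to retry

Anything that fails gets bounced before it reaches the real endpoint, same as a malformed form post.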



> Every API should have its own validation, so I don't even see this as a problem.

No.

I'm saying that, little by little, people will hypothetically rely on OpenAI for more and more.

How long until they are calling POST /credit/customer/bank/account and it just randomly goofs the IDs/numbers?

A "human" may or may not have made that mistake, where an LLM will never be a 100% perfect trustable entity by design (aka, hallucinations).

Now you're just giving it a way to hallucinate into a JSON request body.
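
And schema validation alone won't catch that, because a hallucinated value can be perfectly well-formed. A quick sketch (VALID_ACCOUNTS is a made-up stand-in for whatever your real source of truth is):

    import json

    # Hypothetical source of truth to cross-check against.
    VALID_ACCOUNTS = {"12345678"}

    # Well-formed JSON, right shape, right types -- but the account
    # number itself is hallucinated (off by one digit).
    body = json.loads('{"account": "12345679", "amount_cents": 5000}')

    well_formed = isinstance(body.get("account"), str)  # True
    authorized = body.get("account") in VALID_ACCOUNTS  # False

    print(well_formed, authorized)

Type and shape checks pass; only a value-level check against real data catches it.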


> A “human” may or may not have made that mistake, whereas an LLM will never be a 100% trustworthy entity, by design (aka hallucinations).

This is equally true if you swap “human” and “LLM”. Humans, too, are fallible by design, and LLMs (except maybe with exactly fixed input and zero temperature) are generally not guaranteed to make or not make any given error.

Humans are more diverse, both across instances and for the same instance at different times (because they have, to treat them as analogous systems [0], continuity with a very large multimodal context window). But that actually makes humans less reliable and predictable than LLMs, not more.

[0] which is probably inaccurate, but...



