Hacker News

Indeed. I think a GPT-4o class model, properly prompted, would work just fine today. The trick is that, unlike a human, the computer is free to just say "no" without consequences. The model could be aggressively prompted to detect and refuse weird orders. Having to escalate to a human supervisor (who conveniently is always busy doing other things and will come to you in a minute or three) should be sufficient to discourage pranksters and fraudsters, while not being so annoying as to deter normal customers.

(I say model, but for this problem I'd consider a pipeline where the powerful model just parses orders and formulates replies, while being sanity-checked by a cheaper model and some old-school logic that detects excessive amounts or unusual combinations. I'd also consider using an "open source" model in place of GPT-4o, as open models allow doing "alignment" shenanigans in the latent space, instead of just in the prompts.)
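The old-school-logic layer of such a pipeline could be a minimal sketch like the following, with the model calls stubbed out. The item names, the quantity cap, and the "suspicious combination" list are all assumptions for illustration, not anything from a real system:

```python
# Hypothetical sanity-check layer for a parsed order. The "powerful model"
# upstream is assumed to have already turned speech into an item -> quantity
# dict; only the rule-based checks below are sketched here.

MAX_QTY = 20                       # assumed per-item cap
SUSPICIOUS_COMBOS = [              # assumed examples of unusual pairings
    frozenset({"burger", "raw_patty"}),
]

def check_order(order: dict[str, int]) -> str:
    """Return 'ok', 'refuse', or 'escalate' for a parsed order."""
    if any(qty <= 0 for qty in order.values()):
        return "refuse"            # nonsense quantities: just say no
    if any(qty > MAX_QTY for qty in order.values()):
        return "escalate"          # excessive amount -> human supervisor
    items = frozenset(order)
    if any(combo <= items for combo in SUSPICIOUS_COMBOS):
        return "escalate"          # unusual combination -> human supervisor
    return "ok"
```

A normal order like `{"burger": 2, "fries": 1}` passes, while `{"burger": 500}` gets kicked to the (always-busy) human. The point is that this layer is cheap, deterministic, and immune to prompt injection, so the expensive model never gets the final say.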


