
Ah, that's interesting! It could be related to the improvements they seem to have made in the area of "overreliance". According to OpenAI's paper (https://arxiv.org/pdf/2303.08774.pdf):

> Overreliance occurs when users excessively trust and depend on the model, potentially leading to unnoticed mistakes and inadequate oversight.

> At the model-level we've also made changes to address the risks of both overreliance and underreliance. We've found that GPT-4 exhibits enhanced steerability which allows it to better infer users' intentions without extensive prompt tuning.

> To tackle overreliance, we’ve refined the model’s refusal behavior, making it more stringent in rejecting requests that go against our content policy, while being more open to requests it can safely fulfill. One objective here is to discourage users from disregarding the model’s refusals.

> However, it’s worth noting that GPT-4 still displays a tendency to hedge in its responses.
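For what it's worth, the "steerability" they mention is mostly surfaced through the system message in the chat completions API. Here's a minimal sketch in Python of steering the model away from excessive hedging; the model name and the openai v1 client interface are my assumptions, not something the paper pins down:

    # Minimal sketch: steering GPT-4 via a system message.
    # Assumes the openai Python package (v1+) and OPENAI_API_KEY
    # set in the environment.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY automatically

    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=[
            # The system message steers tone/behavior without any
            # prompt tuning on the user's side.
            {"role": "system",
             "content": "Answer directly. If uncertain, say so once "
                        "rather than hedging at length."},
            {"role": "user",
             "content": "Is a tomato a fruit or a vegetable?"},
        ],
    )

    print(response.choices[0].message.content)

Running the same user prompt with and without the system message is a quick way to probe the hedging tendency the paper describes.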


