
Another aspect could be that OpenAI doesn't want their product to produce offensive output. "Safety" may just be a euphemism for "reputation preserving."


Yeah, that would be another way of looking at it. Though with whom are they trying to preserve their reputation? It seems to be the same crowd that thinks the reputation of an ML model lives and dies on whether you can get it to say something they don't like. So in a way it's kind of circular: laypeople worry about the wrong kind of "safety," so that's what gets optimized for.


Hmm, it's easy to concern-troll about reputation stuff. But for example, if a company (a naturally conservative entity) is thinking about implementing an AI solution, it might worry that buying in on a solution its customers perceive as somehow evil would hurt its reputation with those customers.

I mean, I'm speculating that OpenAI might worry (perhaps incorrectly) that a company might expect (perhaps incorrectly) that customers might (perhaps incorrectly) perceive some bit of software as, basically, AI non grata. So there are multiple levels of people managing possibly incorrect perceptions. But it seems not totally crazy, right?



