
>It was a flaw in their model weights (they'd had others before).

Why does that flaw only produce a bias against white people? It's not plausible that this was an accident nobody inside Google noticed before release. That prompt bias was put there by employees at Google.

Think of it the other way: if Gemini had a bias against non-white demographics, would you still brush it off as just a flaw, or would you call it a form of bias/discrimination by the employees at Google who code, test, and approve this stuff?

Remember when Google's image recognition mistakenly took a photo of two black people and labeled it as monkeys? They had hell to pay for that mistake, yet if it refuses to acknowledge white people, it's just an innocent whoopsie.



You're conflating two different problems:

- AI companies not training their models on sufficiently diverse data, not tuning them sufficiently to penalize bias, and not testing them sufficiently to ensure the responses are not biased.

- Someone at an AI company deliberately modifying the system prompt in order to encourage responses of a given type.

Not saying the latter is worse than the former, but they're completely different problems. The xAI problem, in this case, was the latter.


[flagged]


Dude, believe whatever you want. IDGAF.



