But the point of the feature is to write the words the user dictates, isn't it? So if it correctly recognized the word "racist", why would it replace it with "Trump"? The application must know that's not what the user dictated.

The most likely explanation to me is that Apple has a feature which automatically replaces expletives, so that users don't accidentally send messages with offensive language they didn't intend. It's probably a simple substitution table mapping known expletives to suitable replacements. A disgruntled developer could have added this particular substitution, and for whatever reason it didn't get caught until it hit production. I don't know if this is true, but it seems like a reasonable explanation to me.
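To illustrate what such a filter might look like, here's a minimal sketch of a post-recognition substitution table applied to a transcript. All names and entries are hypothetical; this is not Apple's actual implementation, just the general shape of the mechanism the comment describes.

```python
# Hypothetical post-recognition filter: a lookup table mapping flagged
# words to replacements, applied to the transcript after the speech
# recognizer has already produced the correct words.
REPLACEMENTS = {
    "damn": "darn",
    "hell": "heck",
}

def filter_transcript(words):
    """Replace any flagged word with its mapped substitute."""
    return [REPLACEMENTS.get(w.lower(), w) for w in words]

print(filter_transcript(["what", "the", "hell"]))
# -> ['what', 'the', 'heck']
```

The failure mode follows directly: a single bad entry in the table (say, mapping "racist" to "Trump") would silently rewrite every matching word after correct recognition, which is consistent with the reported behavior.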


