Interesting. When it's the state, the overwhelming opinion seems to be that predictive policing is dangerous, but when it's a private company, we actually want it enforced?
They could not have been held accountable for failing to warn her if they had never done the analysis. But they did. Their organizational conclusion was that the trip was potentially unsafe. Shit, they could have just cancelled the ride dynamically and reassigned her. Why wouldn’t they do that? It’d probably be more expensive. Maybe they’d get more cancelled rides. And maybe this woman wouldn’t have been raped by an agent Uber itself selected and sent to her.
It depends. Are the inputs to the algorithm themselves discriminatory? If so, then yes, that concern would be appropriate. But that is a different conversation. They determined the passenger might be unsafe and did nothing.
Mind you, these companies work very hard to keep us from knowing how they match A to B, usually so we don’t notice things like their disregard for safety.
The inputs wouldn’t even matter; they could be above reproach, but if there were disparate impacts in the outcomes, a case for liability could still be made.
Aren't you just moving the problem a little further along? If you can't trust it to implement carefully specified features, why would you believe it would properly review them?
It's hard to explain, but I've found LLMs to be significantly better in the "review" stage than in the implementation stage.
So the LLM will do something and not catch at all that it did it badly, but the same LLM, asked to review against the same starting requirement, will almost always catch the problem.
The missing thing in these tools is that automatic feedback loop between the two LLMs: one in review mode, one in implementation mode.
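The loop described above can be sketched in a few lines. This is a hypothetical illustration: `call_llm` is a made-up stand-in for a real LLM API call (stubbed here so the control flow runs without any external service), and the prompt/verdict conventions are assumptions, not any tool's actual protocol.

```python
def call_llm(role: str, prompt: str) -> str:
    """Stub standing in for a real LLM API call with a role-specific system prompt.

    The canned behavior below only exists so the loop can be exercised:
    the "reviewer" approves once the draft mentions the requirement.
    """
    if role == "review":
        return "OK" if "requirement" in prompt else "MISSING: requirement not addressed"
    # "implement" mode: append a token representing a revised draft.
    return prompt + " [draft addressing requirement]"

def implement_with_review(spec: str, max_rounds: int = 3) -> str:
    """Implement, then alternate review and fix-up until the reviewer approves."""
    draft = call_llm("implement", spec)
    for _ in range(max_rounds):
        verdict = call_llm("review", f"Spec: {spec}\nDraft: {draft}")
        if verdict == "OK":
            break
        # Feed the reviewer's findings back to the implementer.
        draft = call_llm("implement", f"{draft}\nFix: {verdict}")
    return draft

print(implement_with_review("Parse the config file"))
```

The `max_rounds` cap matters: without it, two models that disagree can ping-pong forever, so real agent loops bound the number of review passes.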
Anecdotally, I think this is already in Claude Code. It's pretty frequent to see it implement something, then declare it "forgot" a requirement and go back to alter or add to the implementation.
AFAICT this is already baked into the GitHub Copilot agent. I read its sessions pretty often and reviewing/testing after writing code is a standard part of its workflow almost every time. It's kind of wild seeing how diligent it is even with the most trivial of changes.
Well, it's 2025, and we've just spent the better part of the year discussing the bitter lesson. It seems clear that solving the more general problem is key to innovation.
Hardware is not like software. A general-purpose humanoid cleaning robot will be superior to a robot vacuum, but it will always cost an order of magnitude more. This is different from software, where costs decrease exponentially and you can do the computing in the cloud.
I'm not sure advancements in AI and advancements in vacuum cleaners are at similar stages in terms of R&D. I'd be very wary of trying to apply lessons from one to the other.
Incredible teamwork: OOP dismantles society in paragraph form, and OP proudly outsources his interpretation to an LLM. If this isn’t collective self-parody, I don’t know what is.
Closing/switching costs are certainly a consideration still, but the "Truth in Lending Act" (TILA) made it easier to compare the all-in cost by providing a standardized APR number, which is what the dashboard focuses on.
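To illustrate why a single APR number makes loans comparable: roughly speaking, the APR is the periodic rate that equates the amount financed (principal minus upfront finance charges) with the payment stream, so fees get folded into one rate. The numbers below are made up for illustration, and this bisection sketch is a simplification, not Regulation Z's exact actuarial procedure.

```python
def monthly_payment(principal: float, annual_rate: float, n_months: int) -> float:
    """Standard amortizing payment at the loan's note rate."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -n_months)

def solve_apr(amount_financed: float, payment: float, n_months: int) -> float:
    """Bisect for the monthly rate whose annuity present value equals amount_financed."""
    lo, hi = 1e-9, 1.0  # bracket the monthly rate
    for _ in range(100):
        mid = (lo + hi) / 2
        pv = payment * (1 - (1 + mid) ** -n_months) / mid
        if pv > amount_financed:
            lo = mid  # present value too high -> rate guess too low
        else:
            hi = mid
    return 12 * (lo + hi) / 2  # annualize the monthly rate

# Hypothetical 30-year loan: $200k at a 6% note rate, $4k in upfront finance charges.
pmt = monthly_payment(200_000, 0.06, 360)
apr = solve_apr(200_000 - 4_000, pmt, 360)
print(f"note rate 6.00%, APR {apr:.2%}")
```

Because the $4k in fees reduces what the borrower actually receives while the payments stay the same, the APR comes out a bit above the 6% note rate, which is exactly the spread the dashboard lets you compare across lenders.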
Of course. But what if the holding company lives in a country that doesn't enforce this (or is too weak to)? Then all the subsidiaries are effectively sovereign from the host country's perspective.
It seems the solution is ages old: don't have the holding company incorporated in an empire...
How would this work in practice? If the empire wants to get at your data, why do you think it would shy away from pressuring a country so weak that it can't afford to enforce this on their companies?
Then the empire just says that they want the data or you won't be allowed to operate in the empire, which would be bad for profits and anger shareholders.
If the job market is representative of this, then we can see that as both sides use it and get better at it, it's becoming an arms race. Looking for a job two years ago using ChatGPT was perfectly timed, but not anymore. The current situation is more applications per position and thus longer decision times. The end result is that the duration of unemployment is getting longer.
I'm afraid the current situation, which, as described in the article, is favorable to customers, is not going to last and might even reverse.
For people who cheat, it is still the ideal time to look for a job, before companies return to in-person hiring. I interview nowadays and it is crazy how ubiquitous these cheating tools are.
Good - it costs the company more $$$ and cheating is still easy as hell.
We have proof that the "Anal beads chess cheating" accusations could have been legit (https://github.com/RonSijm/ButtFish). You think that people won't do even easier cheating for a chance at a 500K+ FAANG job?
Also, if you want the best jobs at Foundation model labs (1 million USD starting packages), they will reject you for not using AI.
False - many biglabs will explicitly ask you to not use AI in portions of their interview loop.