
Interesting. When it's the state, I think the overwhelming opinion is that predictive policing is dangerous, but when it's a private company we actually want it enforced?


They could not be held accountable to warn her if they had not done the analysis. They did. Their organizational conclusion was that it was potentially an unsafe trip. Shit, they could have just cancelled the ride dynamically and re-assigned her. Why wouldn’t they do that? It’d probably be more expensive. Maybe they’d get more cancelled rides. Maybe this woman wouldn’t have been raped by an agent of Uber selected for and sent to her by them.


Wouldn't they then expose themselves to discrimination and loss of revenue lawsuits from targeted drivers?


It depends. Are the inputs to the algorithm themselves discriminatory? If so, then yes that would be appropriate. But that is a different conversation. They determined the passenger may be unsafe and did nothing.

Mind you, these companies work very hard for us to not know how they match A to B, usually so we don’t notice things like their disregard for safety.


The inputs wouldn’t even matter; the inputs could even be above reproach but if there were disparate impacts in terms of outcomes, the case for liability could be made.


Maybe, but they’re clearly liable for not using the information.


Aren't you just moving the problem a little further along? If you can't trust it to implement carefully specified features, why would you believe it would properly review them?


It's hard to explain, but I've found LLMs to be significantly better in the "review" stage than the implementation stage.

So the LLM will do something and not catch at all that it did it badly. But the same LLM, asked to review against the same starting requirement, will almost always catch the problem.

The missing piece in these tools is an automatic feedback loop between two LLMs: one in review mode, one in implementation mode.
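
For what it's worth, the loop I have in mind looks roughly like this (a sketch in Go; callLLM is a hypothetical stand-in for whatever model client you use, and the prompts are just illustrative):

  package sketch

  import "strings"

  // callLLM is a hypothetical stand-in for an actual model client.
  type callLLM func(prompt string) string

  // implementWithReview drafts code with one call, has a second call
  // review the draft against the original requirement, and revises
  // until the reviewer is satisfied or the rounds run out.
  func implementWithReview(llm callLLM, requirement string, maxRounds int) string {
      draft := llm("Implement this requirement:\n" + requirement)
      for i := 0; i < maxRounds; i++ {
          review := llm("Requirement:\n" + requirement +
              "\n\nImplementation:\n" + draft +
              "\n\nList any way this fails the requirement, or reply OK.")
          if strings.TrimSpace(review) == "OK" {
              return draft
          }
          draft = llm("Fix these issues:\n" + review +
              "\n\nImplementation:\n" + draft)
      }
      return draft
  }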


I've noticed this too and am wondering why this hasn't been baked into the popular agents yet. Or maybe it has and it just hasn't panned out?


Anecdotally, I think this is in Claude Code. It's pretty common to see it implement something, then declare it "forgot" a requirement and go back and alter or add to the implementation.


AFAICT this is already baked into the GitHub Copilot agent. I read its sessions pretty often and reviewing/testing after writing code is a standard part of its workflow almost every time. It's kind of wild seeing how diligent it is even with the most trivial of changes.


You have to dump the context window for the review to work well.


Exactly, his whole tirade felt extraordinarily far-fetched, sketchy if not outright racist.


Well, it's 2025; we've just spent the better part of the year discussing the bitter lesson. It seems clear that solving the more general problem is key to innovation.


Hardware is not like software. A general-purpose humanoid cleaning robot will be superior to a robot vacuum, but it will always cost an order of magnitude more. This is different from software, where cost decreases exponentially and you can do the compute in the cloud.


I'm not sure advancements in AI and advancements in vacuum cleaners are at similar stages in terms of R&D. I'd be very wary of trying to apply lessons from one to the other.


Alternatively:

  enclave, err := secret.GetEnclave()
  if err != nil {
      // err indicates the platform doesn't support an enclave
      return err
  }
  enclave.Do(f)


Incredible teamwork: OOP dismantles society in paragraph form, and OP proudly outsources his interpretation to an LLM. If this isn't collective self-parody, I don't know what it is.


Are those really standardized in the US?

Where I live the conditions vary widely, and the switching costs can easily dominate the total cost if you move or sell.

I've found that, taking this into account, it was better to accept a slightly worse interest rate in exchange for better conditions.


Yes, extremely, especially for confirming loans: https://singlefamily.fanniemae.com/originating-underwriting/...

Patrick McKenzie (https://news.ycombinator.com/user?id=patio11) has a great deep dive on this: https://www.bitsaboutmoney.com/archive/mortgages-are-a-manuf...

Closing/switching costs are certainly a consideration still, but the "Truth in Lending Act" (TILA) made it easier to compare the all-in cost by providing a standardized APR number, which is what the dashboard focuses on.
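
To make the "all-in cost" point concrete: the APR is essentially the rate that equates your actual payment stream with the amount you actually receive after fees. A minimal sketch of that idea (not the exact Reg Z computation; the loan figures are made up):

  package main

  import (
      "fmt"
      "math"
  )

  // monthlyPayment computes the fixed payment on a fully amortizing loan.
  func monthlyPayment(principal, monthlyRate float64, months int) float64 {
      if monthlyRate == 0 {
          return principal / float64(months)
      }
      return principal * monthlyRate /
          (1 - math.Pow(1+monthlyRate, -float64(months)))
  }

  // apr finds, by bisection, the annual rate at which the amount actually
  // financed (principal minus upfront fees) would produce the same payment.
  func apr(principal, fees, noteRate float64, months int) float64 {
      payment := monthlyPayment(principal, noteRate/12, months)
      financed := principal - fees
      lo, hi := 0.0, 1.0 // search annual rates between 0% and 100%
      for i := 0; i < 100; i++ {
          mid := (lo + hi) / 2
          if monthlyPayment(financed, mid/12, months) < payment {
              lo = mid // rate too low: a smaller financed amount needs a higher rate
          } else {
              hi = mid
          }
      }
      return (lo + hi) / 2
  }

  func main() {
      // e.g. a $300k loan at a 6.5% note rate over 30 years with $6k in fees
      fmt.Printf("APR: %.3f%%\n", apr(300000, 6000, 0.065, 360)*100)
  }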


* conforming loans


Dynamic LED plates that work like TOTP codes, where you can determine who is who on which date only with central access.
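
Roughly, each plate would hold a per-vehicle secret and display a code derived from it and the current time window, TOTP-style (RFC 6238). Only the registry holding the secrets can map a code back to a vehicle. A minimal sketch (the six-digit display and the daily window are assumptions):

  package main

  import (
      "crypto/hmac"
      "crypto/sha1"
      "encoding/binary"
      "fmt"
      "time"
  )

  // plateCode derives the rotating code a plate would display from a
  // per-vehicle secret and the current time window, TOTP-style (RFC 6238).
  func plateCode(secret []byte, t time.Time, step time.Duration) uint32 {
      counter := uint64(t.Unix()) / uint64(step.Seconds())
      var msg [8]byte
      binary.BigEndian.PutUint64(msg[:], counter)
      mac := hmac.New(sha1.New, secret)
      mac.Write(msg[:])
      sum := mac.Sum(nil)
      // Dynamic truncation, as in HOTP (RFC 4226).
      off := sum[len(sum)-1] & 0x0f
      code := binary.BigEndian.Uint32(sum[off:off+4]) & 0x7fffffff
      return code % 1000000 // six digits shown on the plate
  }

  func main() {
      secret := []byte("per-vehicle-secret") // only the central registry knows this
      fmt.Printf("%06d\n", plateCode(secret, time.Now(), 24*time.Hour))
  }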


Of course. But what if the holding lives in a country that doesn't enforce this (or is too weak to)? Then all the subsidiaries are effectively sovereign from the host country's perspective.

It seems the solution is ages old. Don't have the holding incorporated in an empire...


How would this work in practice? If the empire wants to get at your data, why do you think it would shy away from pressuring a country so weak that it can't afford to enforce this on their companies?


Then the empire just says that they want the data or you won't be allowed to operate in the empire, which would be bad for profits and anger shareholders.


I'm not sure about this.

If the job market is representative, we can see that as both sides use it and get better at it, it's becoming an arms race. Looking for a job two years ago using ChatGPT was perfect timing, but not anymore. The current situation is more applications per position and thus longer decision times. The end result is that the duration of unemployment is getting longer.

I'm afraid the current situation, which, as described in the article, is favorable to customers, is not going to last and might even reverse.


In the job market, information asymmetry would mainly be at play during comp negotiations, not during the interview process.


For people who cheat, it is still the ideal time to look for a job before companies return to in-person hiring. I interview nowadays and it is crazy how ubiquitous these cheating tools are.


We have proctored testing centers (Pearson Vue etc) if companies wanted trusted remote interviews.


We've decided to do onsites for all hires, in part to combat this.


Good - it costs the company more $$$ and cheating is still easy as hell.

We have proof that the "Anal beads chess cheating" accusations could have been legit (https://github.com/RonSijm/ButtFish). You think that people won't do even easier cheating for a chance at a 500K+ FAANG job?

Also, if you want the best jobs at Foundation model labs (1 million USD starting packages), they will reject you for not using AI.


> they will reject you for not using AI.

Well, I don't work for a foundation model lab. But actually, I'm happy for folks to use AI to augment their skills.

I also want to make sure that they can use it well and aren't just a mouthpiece for ChatGPT. Having them come in is one way to verify that.


low quality comment

> they will reject you for not using AI.

False - many big labs will explicitly ask you not to use AI in portions of their interview loop.

> We have proof that the "Anal beads chess cheating" accusations could have been legit (https://github.com/RonSijm/ButtFish). You think that people won't do even easier cheating for a chance at a 500K+ FAANG job?

Just nonsense.

> 1 million USD starting packages

False.


Same, between the interview cheating and AI slop resumes... hiring has become a dreadful process.


Yeah, hiring was always hard but has just become mind bogglingly difficult.


why are the cheating tools even necessary?


Can you say more about your question?

