Again, they get paid a cut of Google's ad revenue from Safari users. This has exactly one impact on Apple's design choices: Google remains the default search engine.
Notably, this hasn't stopped Apple from introducing multiple anti-tracking technologies into Safari that prevent Google from collecting information from Safari users.
If I open a new tab in Safari, it tells me that in the last 30 days Safari prevented 109 trackers from profiling me and that 55% of the sites I visit implement trackers. It also tells me that the most-blocked tracker is googletagmanager.com, across 78 websites.
I don’t get why they didn’t compare against BLIS. I know you can only do so many benchmarks, and people will often complain no matter what, but BLIS is the obvious comparison. Maybe BLIS doesn’t have kernels for their platform, but they’d be well served by just mentioning that fact to get that question out of the reader’s head.
BLIS even has mixed-precision interfaces, though it might not cover more exotic stuff like low-precision ints. So this paper passed up a chance to “put some points on the board” against a real top-tier competitor.
Maybe you want a comparison anyways, but it won't be competitive. On Apple CPUs, SME is ~8x faster than a single regular CPU core with a good BLAS library.
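For what it's worth, the "good BLAS library" baseline is easy to sanity-check yourself. A rough Python sketch (NumPy dispatches to whatever BLAS it was built against, e.g. OpenBLAS, BLIS, or Apple's Accelerate; the matrix size and rep count are arbitrary):

```python
import time
import numpy as np

def gemm_gflops(n=1024, reps=5):
    """Time an n x n single-precision matmul and report GFLOP/s.

    NumPy hands this off to its linked BLAS, so the result is the
    'single core with a good BLAS library' baseline (pin threads to
    1 via OMP_NUM_THREADS=1 if you want a strictly per-core number).
    """
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    a @ b  # warm up / trigger any lazy initialization
    t0 = time.perf_counter()
    for _ in range(reps):
        a @ b
    dt = (time.perf_counter() - t0) / reps
    return 2 * n**3 / dt / 1e9  # a GEMM is ~2n^3 flops

print(f"{gemm_gflops():.1f} GFLOP/s")
```

Comparing that number against an SME-enabled library on the same machine would give a rough sense of the ~8x figure.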
They said in a follow-up comment that they intentionally wrote something ambiguous, so… I don’t know, I wouldn’t waste too many cycles on comments that are deliberately unclear.
Do they speculate about things like “we’re near a school zone, kids are unloading, there might be a kid I’ve never seen behind that SUV?” (I’m legitimately asking; I’ve never been in a Waymo.)
Asking whether an entity has modeled and evaluated a specific situation, using that evaluation to inform its decisions, is not about subjective experience.
If you're asking whether their training data includes situations like this, and whether their trained model/other pieces of runtime that drive the car include that feature as part of their model, the answer is yes. But not in the way a normal human driver would think about it; many of the details of its decision making process are based on large statistical collections, rather than "I'm in a school zone and need to anticipate children may be obscured and run out into traffic." There are many places where the car needs to take caution without knowing specifically it's within 50 feet of a school zone.
It would be nice to see the video (although maybe there are some privacy issues, it is at a school after all).
Anyway, from the article,
> According to the NHTSA, the accident occurred “within two blocks” of the elementary school “during normal school drop off hours.” The safety regulator said “there were other children, a crossing guard, and several double-parked vehicles in the vicinity.”
So I mean, it is hard to speculate. Probably Waymo was being reasonably prudent. But we should note that this description isn’t incompatible with being literally in an area where kids are getting out of their parents’ cars (the presence of “several double-parked vehicles” brings this to mind). If that’s the case, it might make sense to consider an even-safer mode for active student unloading areas. This seems like the sort of social context that humans have and cars might be missing.
But this is speculation. It would be good to see a video.
Something I’ve been sort of wondering about—LLM training seems like it ought to be the most dispatchable possible workload (easy to pause the thing when you don’t have enough wind power, say). But, when I’ve brought this up before people have pointed out that, basically, top-tier GPU time is just so valuable that they always want to be training full speed ahead.
But, hypothetically if they had a ton of previous gen GPUs (so, less efficient) and a ton of intermittent energy (from solar or wind) maybe it could be a good tradeoff to run them intermittently?
Ultimately, a workload that can profitably consume “free” watts (and therefore flops) from renewable overprovisioning would be good for society, I guess.
First: Almost anything can be profitable if you have free inputs.
Second: Even solar and wind are not really "free", as the capital costs still depreciate over the lifetime of the plant. You might be getting the power for near-zero or even negative cost for a short while, but that power cost advantage will very quickly be competed away since it's so easy to spend a lot of energy. Even remelting recycled metals would need much less capital investment than a previous-gen datacentre.
That leaves the GPUs. Even previous gen GPUs will still cost money if you want to buy them at scale, and those too depreciate over time even if you don't use them. So to get the maximum value out of them, you'd want to run them as much as possible, but that contradicts the business idea of utilizing low cost energy from intermittent sources.
Long story short: it might work in very specific circumstances if you can make the numbers work. But the odds are heavily stacked against you because typically energy costs are relatively minor compared to capital costs, especially if you intend to run only a small fraction of the time when electricity is cheap. Do your own math for your own situation of course. If you live in Iceland things might be completely different.
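To make that concrete, here's a toy back-of-the-envelope in Python (every number is a made-up assumption, not real pricing): even near-free power tends to lose to depreciation at low duty cycle, because capex per useful hour scales as 1/duty.

```python
# Toy numbers, all assumptions: a used previous-gen GPU and two
# operating modes -- run flat out at market power rates, or run only
# 25% of hours on nearly free renewable surplus.
capex = 5000.0                 # $ per used GPU
life_years = 3
hours_year = 8760
cap_per_hour = capex / (life_years * hours_year)  # depreciation, ~$0.19/h

gpu_kw = 0.5                   # power draw per GPU, kW
cheap = 0.01                   # $/kWh during renewable overprovisioning
market = 0.10                  # $/kWh at normal rates

def cost_per_gpu_hour(duty, price):
    # Capital depreciates whether or not the GPU runs, so its share
    # per *useful* hour grows as the duty cycle shrinks.
    return cap_per_hour / duty + gpu_kw * price

print(f"flat out:     ${cost_per_gpu_hour(1.0, market):.3f}/GPU-hour")
print(f"intermittent: ${cost_per_gpu_hour(0.25, cheap):.3f}/GPU-hour")
```

With these (invented) numbers the intermittent mode costs roughly 3x more per useful GPU-hour despite the ~10x cheaper power, which is the "capex dominates" point above.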
Exactly as you'd expect: they make it possible to run the GPUs more hours in exchange for needing additional capital. Those batteries will have an upfront cost and will depreciate over time. You'll obviously also need more solar panels than before, which also further increases the upfront investment. Also note that now we're already straying away from the initial idea of "consuming free electricity from renewable overprovisioning". If you have solar panels and a battery, you can also just sell energy to the grid instead of trying to make last-gen GPUs profitable by reducing energy costs.
Again: it might work, if the math checks out for your specific source of secondhand GPUs and/or solar panels and/or batteries.
This is a problem with basically all "spare power" schemes: paying for the grid hookup and land on which you situate your thing isn't free, as well as the interest rate cost of capital; so the lower the duty cycle the less economic it is.
> top-tier GPU time is just so valuable that they always want to be training full speed ahead.
I don't think this makes much sense because the "waste" of hardware infrastructure by going from 99.999% duty cycle to 99% is still only ~1%. It's linear in the fraction of forgone capacity, while the fraction of power costs you save from simply shaving off the costliest peaks and shifting that demand to the lows is superlinear.
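A toy illustration of that asymmetry (prices and shares are made-up assumptions, not market data): skipping only the costliest 1% of hours forgoes 1% of compute, linearly, but can cut a much larger slice off the energy bill when peak prices spike well above baseline.

```python
# Assumed toy price distribution: 99% of hours at a base price,
# the top 1% of hours at 10x that price.
hours = 1000
base_price = 0.05          # $/kWh for 99% of hours
peak_price = 0.50          # $/kWh for the costliest 1%
power_kw = 1000            # constant datacentre draw

full_cost = power_kw * (0.99 * hours * base_price + 0.01 * hours * peak_price)
shifted_cost = power_kw * (0.99 * hours * base_price)  # pause through the peaks

compute_lost = 0.01                           # linear: 1% of capacity forgone
energy_saved = 1 - shifted_cost / full_cost   # fraction of the bill avoided
print(f"compute forgone: {compute_lost:.1%}, energy bill saved: {energy_saved:.1%}")
```

Under these assumptions you trade 1% of capacity for roughly 9% of the energy bill; whether that nets out positive still depends on how big energy is relative to capex.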
I do sort of wonder if there’s room in my life for a small attested device. Like, I could actually see a little room for my bank to say “we don’t know what other programs are running on your device so we can’t actually take full responsibility for transactions that take place originated from your device,” and if I look at it from the bank’s point of view that doesn’t seem unreasonable.
Of course, we’ll see if anybody is actually engaging with this idea in good faith when it all gets rolled out. Because the bank has full end-to-end control over the device, authentication will be fully their responsibility, and the (basically bullshit in the first place) excuse of “your identity was stolen” will become not-a-thing.
Obviously I would not pay for such a device (and will always have a general purpose computer that runs my own software), but if the bank or Netflix want to send me a locked down terminal to act as a portal to their services, I guess I would be fine with using it to access (just) their services.
I suggested this as a possible solution in another HN thread a while back, but along the lines of "If a bank wants me to have a secure, locked down terminal to do business with them, then they should be the ones forking it over, not commanding control of my owned personal device."
It would quickly get out of hand if every online service started to do the same though. But, if remote device attestation continues to be pushed and we continue to have less and less control and ownership over our devices, I definitely see a world where I now carry two phones. One running something like GrapheneOS, connected to my own self-hosted services, and a separate "approved" phone to interact with public and essential services as they require crap like play integrity, etc.
But at the end of the day, I still fail to see why this is even needed. Governments, banks, and other entities have been providing services over the web for decades at this point with little issue. Why are we catering to tech illiteracy (by restricting ownership) instead of promoting tech education and encouraging people to learn and, importantly, take responsibility for their own actions and the consequences of those actions?
"Someone fell for a scam and drained their bank account" isn't a valid reason to start locking down everyone's devices.
I remember my parents doing online banking authenticated with smart cards, over 20 years ago. Today the same bank requires an iOS or Play Integrity device (for individuals, at least; their business banking is a separate gated service and I don't know what they offer there).
> I suggested this as a possible solution in another HN thread a while back, but along the lines of "If a bank wants me to have a secure, locked down terminal to do business with them, then they should be the ones forking it over, not commanding control of my owned personal device."
Most banks already do that. The secure, locked down terminals are called ATMs and they are generally placed at assorted convenient locations in most cities.
Yeah, to some extent I just wanted to think about where the boundary ought to be. I somewhat suspect the bank or Netflix won’t be willing to send me a device of theirs to act as their representative in my pocket. But it is basically the only time a reasonable person should consider using such a device. Anybody paying to buy Netflix or the bank a device is basically being scammed or ripped off.
Why should I need a separate device? Doesn't a hardware security token suffice? I wouldn't even mind bringing my own but my bank doesn't accept them last I checked. (Do any of them?)
If the bank can't be bothered to either implement support for U2F or else clearly articulate why U2F isn't sufficient then they don't have a valid position. Anything else they say on the matter should be disregarded.
You shouldn't need a separate device, but we are quickly entering an era where a lot of banking (and other) apps will outright refuse to run or allow logins if they detect a rooted device, or if Play Integrity fails.
In this way, the banks are asserting control over your device. It's beyond authentication, they are saying "If you have full control over your device, you cannot access our services."
I'll agree with you that they don't have a valid position, because I can just as easily open up a web browser on said rooted device and access just fine via the web, but how long until services move away from web interfaces in favor of apps instead to assert more control?
I have to use my phone to approve the web login to my account. My bank is working very hard to make sure that everyone uses the app for everything, including closing down offices and removing ATMs around the city.
A hardware token would not suffice. When you log in with a hardware token it will generate some sort of token or cookie for further requests. This is where malware can steal that key and use it for whatever it wants. There is a benefit in knowing there is a high chance that such a key is protected by the operating system's sandboxing technology. Without remote attestation you don't know if the sandbox is actually active or not.
On the contrary, a hardware token will suffice to thwart both phishing and MitM, which covers ~everything for all practical threat and liability models. What exactly is the concern here? A widespread worm that no one is yet aware of that's dumping people's bank accounts into crypto? It might make for a decent Hollywood plot, but is pulling that off actually easier than attacking the bank directly?
Keep in mind that the businesses pushing this stuff still don't support U2F by and large. When I can go down in person to enroll a hardware token I might maybe consider listening to what they have to say on the subject. Maybe. (But probably not.)
Hypothetically on a fully controlled system you could prevent attacks like the sort of “hello this is Microsoft, we’ve identified a virus on your device, please download teamviewer and login to your bank account so we can clear it for you” type spam calls.
Or, hasn’t there been malware that periodically takes screenshots of the device? Or maybe that’s a Hollywood plot, I forget actually.
Keep in mind that a truly clueless user will most likely be running in a stock configuration. So long as that doesn't permit apps to tamper with one another (as is currently the case) there should be no issue. Google could even provide a toggle to officially root the phone and so long as flipping it wiped the device the problem would remain 99.9% solved because a scammer would be unable to pull the job off in one go.
By the time you reach the point that the user is doggedly following harmful step by step instructions over the course of multiple callbacks there is nothing short of a padded cell that can protect him from himself.
Unless you mean to suggest somehow screening such calls? A local LLM? Literal wiretapping via realtime upload to the cloud? If facing such a route society would likely be better off institutionalizing anyone victimized in such a manner.
It's unfortunate because it's actually incredibly useful functionality. If only they hadn't packaged and marketed it in quite the way they did. If there was ever a feature that needed to be guaranteed local-only, zero third-party integration, zero first-party analytics, encryption tied to a TPM, that was it.
Could you please spell out the specifics of this scenario?
MitM via an evil (ie incorrect) domain name is prevented because U2F (and now webauthn or CTAP2) are origin bound.
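For anyone unfamiliar, the origin binding works roughly like this sketch (`bank.example` is hypothetical; a real relying party also verifies the authenticator's signature, which covers a hash of this same structure, so a phishing site can't just lie about the origin):

```python
import json

EXPECTED_ORIGIN = "https://bank.example"  # hypothetical relying party

def check_client_data(client_data_json: bytes) -> bool:
    """Sketch of the origin check on a WebAuthn assertion's
    clientDataJSON. The browser fills in the origin of the page that
    requested the assertion, and the authenticator signs over a hash
    of this blob -- so an assertion minted on evil.example fails
    either this check or the signature check, never both pass.
    """
    data = json.loads(client_data_json)
    return (data.get("type") == "webauthn.get"
            and data.get("origin") == EXPECTED_ORIGIN)

good = json.dumps({"type": "webauthn.get", "origin": "https://bank.example",
                   "challenge": "..."}).encode()
evil = json.dumps({"type": "webauthn.get", "origin": "https://evil.example",
                   "challenge": "..."}).encode()
print(check_client_data(good), check_client_data(evil))  # True False
```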
RATs? On stock android? How does that work? And how are the things you describe not also threats for online banking via a browser? It's certainly not how the vast majority of attacks take place in the wild. Can you provide any examples of such an attack (ie malware as opposed to phishing) that was widespread? Otherwise I assume we're writing a script for Hollywood here.
Even then, a RAT could be trivially defeated by requiring a second one-off token authentication for any transaction that would move money around. I doubt there'd be much objection to such a policy. If people really hate it let them opt out below an amount of their choosing by signing a liability waiver.
This is assuming the user's device is not compromised.
>How does that work?
Privilege escalation on an old OS version allows an attacker to get root access. Then with that they can bypass any sandboxing. Or they could get access to some Android permission intended for system apps that they should not have access to, and use it to do malicious things.
I don't closely follow malware outbreaks for android so I can't point to specific examples, but malware does exist.
So the attacker compromises the user's device ... and then sets up a MitM? This is making about as much sense as the typical Hollywood plot that involves computers so I guess that means we're on track.
> Privilege escalation on an old OS version allows an attacker to get root access.
At which point hardware attestation accomplishes nothing. Running in an enclave might but attesting the OS image that was used to boot most certainly won't.
Many consumers use older devices. Any banking app is forced to support them or they will lose customers. There's no way around that. (It doesn't matter anyway because these sorts of attacks simply aren't commonplace.)
> but malware does exist.
I didn't ask for an example of malware. I asked you to point to an example of a widespread attack against secured accounts using malware as a vector. You have invented some utterly unrealistic scenario that simply isn't a concern in the real world for a consumer banking interaction.
You're describing the sort of high effort targeted attack utilizing one or more zero days that a high level government official might be subject to.
>At which point hardware attestation accomplishes nothing
Attestation could be used to say that the user is not on a secure version of the OS that has known vulnerabilities patched.
>Any banking app is forced to support them or they will lose customers.
Remote attestation is just one of the many signals used for detecting fraud.
>one or more zero days
Many phones are not on an OS getting security updates. Whether that be due to age or the vendor not distributing the security patches. Even using old exploits malware can work.
Citation needed. The fact that the infosec industry just keeps growing YoY kinda suggests that there are in fact issues that are more expensive than paying the security companies.
> if the bank or Netflix want to send me a locked down terminal to act as a portal to their services, I guess I would be fine with using it to access (just) their services
They would only do it to assert more control over you and in Netflix's case, force more ads on you.
It is why I never use any company's apps.
If they make it a requirement, I will just close my account.
This entire shit storm is 100% driven by the music, film, and tv industries, who are desperate to eke a few more millions in profit from the latest Marvel snoozefest (or whatever), and who tried to argue with a straight face that they were owed more than triple the entire global GDP [0].
These people are the enemy. They do not care about computing freedom. They don't care about you or me at all. They only care about increasing profits, and they're using the threat of locking people out of Netflix via HDCP and TPM in order to force remote attestation on everyone.
I don't know what the average age on HN is, but I came up in the 90s when "fuck corporations" and "information wants to be free" still formed a large part of the zeitgeist, and it's absolutely infuriating to see people like TFA's founders actively building things that will measurably make things worse for everyone except the C-suite class. So much for "hacker spirit".
Also worth remembering that around 2010, the music and film industry associations of America were claiming entitlement to $50 billion annually in piracy-related losses beyond what could be accounted for in direct lost revenue (which _might_ have been as much as 10 billion, or 1/6th of their claim):