From the very first announcement of this, Google has hinted that they were doing this under pressure from the governments in a few countries. (I don't remember the URL of the first announcement, but https://android-developers.googleblog.com/2025/08/elevating-... is from 2025-August-25 and mentions “These requirements go into effect in Brazil, Indonesia, Singapore, and Thailand”.) The “Why verification is important” section of this blog post goes into a bit more detail (see also the “We are designing this flow specifically to resist coercion, ensuring that users aren't tricked into bypassing these safety checks while under pressure from a scammer” line), but ultimately the point is:
there cannot exist an easy way for a typical non-technical user to install “unverified apps” (whatever that means), because the governments of countries where such scams are widespread will hold Google responsible.
Meanwhile this very fact seems fundamentally unacceptable to many, so there will be no end to this discourse IMO.
I don't buy the argument at all that this specific implementation is the result of government pressure - if the problem is indeed malware getting access to personal data, then the very obvious solution is to ensure that such personal data is not accessible by apps in the first place! Why should apps have access to a user's SMS / RCS? (Yeah, I know it makes onboarding / verification easy and all, if an app can access your OTP. But that's a minor convenience that can be sacrificed if it's also being used for scams by malware apps).
But that kind of privacy based security model is anathema to Google because its whole business model is based on violating its users' privacy. And that's why they have come up with such a convoluted implementation that further gives them control over a user's device. Obviously some governments too may favour such an approach, as they too can then use Google or Apple to exert control over their citizens (through censorship or denial of services).
Note also that while they are not completely removing sideloading (for now) they are introducing further restrictions on it, including gate-keeping by them. This is just the "boil the frog slowly" approach. Once this is normalised, they will make a move to prevent sideloading completely, again, in the future.
> Why should apps have access to a user's SMS / RCS?
It could be an alternative SMS app like TextSecure. One of the best features of Android is that even built-in default applications like the keyboard, browser, launcher, etc can be replaced by alternative implementations.
It could also be an SMS backup application (which can also be used to transfer the whole SMS history to a new phone).
Or it could be something like KDE Connect making SMS notifications show up on the user's computer.
> One of the best features of Android is that even built-in default applications like the keyboard, browser, launcher, etc can be replaced by alternative implementations.
When sideloading is barred all that can easily change. If you are forced to install everything from the Google Play Store, Google can easily bar such things, again in the name of "security" - alternate keyboards can steal your password, alternate browsers can have adware / malware, alternate launcher can do many naughty things etc. etc.
And note that if indeed giving apps access to SMS / RCS data is really such a desirable feature, Google could have introduced gate-keeping on that to make it more secure, rather than gate-keeping sideloading. For example, their current proposal says that they will allow sideloading with special Google Accounts. Instead of that, why not make it so that an app can access SMS / RCS only when that option is allowed when you have a special Google Account?
The point is that they want to avoid adding any barriers where a user's private data can't be easily accessed.
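Just to make the idea concrete, a toy sketch of what I mean (the account flag, the function and every name here are invented for illustration; none of this is a real Android or Google API):

```kotlin
// Hypothetical policy: the SMS permission is only grantable at all when the
// signed-in account is flagged as an advanced/developer account AND the user
// has approved the permission in settings. Invented names, not real APIs.
data class GoogleAccount(val email: String, val advancedUser: Boolean)

fun smsAccessGrantable(account: GoogleAccount, approvedInSettings: Boolean): Boolean =
    account.advancedUser && approvedInSettings

fun main() {
    val regular = GoogleAccount("grandma@example.com", advancedUser = false)
    val tinkerer = GoogleAccount("dev@example.com", advancedUser = true)
    println(smsAccessGrantable(regular, approvedInSettings = true))   // false
    println(smsAccessGrantable(tinkerer, approvedInSettings = true))  // true
}
```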
> Instead of that, why not make it so that an app can access SMS / RCS only when that option is allowed when you have a special Google Account?
Because then you still need a special Google Account to install your app when it needs to access SMS / RCS.
How about solving this problem in a way where the owner of the device, rather than Google, makes the decisions about what they can do with it? Like don't let the app request certain permissions by default, instead require the user to manually go into settings to turn them on, but if they do then it's still possible. Meanwhile apps that are installed from an app store can request that permission when the store allows it, so then users have an easy way to install apps like that, but in that case the app has been approved by Google or F-Droid etc. And the "be an app store" permission works the same way, so you have to do it once when you install F-Droid but then it can set those permissions the same as Google Play.
It's not Google's job to say no for you. It's only their job to make sure you know what you're saying yes to when you make the decision yourself.
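As a rough sketch of the kind of policy I mean (everything here is invented for illustration: the install-source enum, the permission list, the lookup table; none of it is a real Android API):

```kotlin
// Restricted permissions are never prompted for in-app on manually installed
// apps; they require either a trusted store install or a manual settings toggle.
enum class InstallSource { TRUSTED_STORE, MANUAL }

data class App(val name: String, val source: InstallSource)

// Permissions that never get an in-app prompt for manually installed apps.
val RESTRICTED = setOf("READ_SMS", "BIND_ACCESSIBILITY_SERVICE", "REQUEST_INSTALL_PACKAGES")

// Permissions the user explicitly enabled by digging into settings themselves;
// no dialog an app (or a scammer's script) can pop up leads here.
val manuallyEnabled = mutableMapOf<String, MutableSet<String>>()

fun mayRequest(app: App, permission: String): Boolean = when {
    permission !in RESTRICTED -> true                  // ordinary runtime prompt
    app.source == InstallSource.TRUSTED_STORE -> true  // the store vouched for the request
    else -> permission in manuallyEnabled[app.name].orEmpty()
}

fun main() {
    val fdroid = App("org.fdroid.fdroid", InstallSource.MANUAL)
    println(mayRequest(fdroid, "REQUEST_INSTALL_PACKAGES"))  // false

    // The user walks into settings and flips the switch once...
    manuallyEnabled.getOrPut(fdroid.name) { mutableSetOf() }.add("REQUEST_INSTALL_PACKAGES")
    println(mayRequest(fdroid, "REQUEST_INSTALL_PACKAGES"))  // true
    // ...and from then on, apps F-Droid installs would count as TRUSTED_STORE.
}
```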
>instead require the user to manually go into settings to turn them on, but if they do then it's still possible
They clearly addressed this option in the post: under sufficient social engineering pressure these settings will easily be circumvented. You'd need at least a 24h timeout or similar to mitigate the social pressure.
> They clearly addressed this option in the post: under sufficient social engineering pressure these settings will easily be circumvented. You'd need at least a 24h timeout or similar to mitigate the social pressure.
"Under sufficient social engineering pressure" is the thing that proves too much. A 24h timeout can't withstand that either. Nor can the ability for the user to use their phone to send money, or access their car or home, or read their private documents, or post to their social media account. What if someone convinces them to do any of those things? The only way to stop it is for the phone to never let them do it.
By the time you're done the phone is a brick that can't do anything useful. At some point you have to admit that adults are responsible for the choices they make.
>By the time you're done the phone is a brick that can't do anything useful. At some point you have to admit that adults are responsible for the choices they make.
Absolutely this! It's just nanny state all over again.
This is somehow even worse. It's strictly enforced with no regard for context, you don't have the constitutional rights you have against the government and you can't vote them out.
Markets are supposed to be better because you can switch to a competitor but that only applies when there is actually competition. Two companies both doing the same thing is not a competitive market.
It'd just devolve into security whack-a-mole over which permissions need that special account or not, ending with basically all of them, making it the same as just needing dev verification anyway for anything remotely useful.
And despite that, you're assuming that dev verification means no malware. The Play Store requires developers to register with the same verification measures we're talking about, and malware is hardly unheard of there.
> alternate keyboards can steal your password, alternate browsers can have adware / malware, alternate launcher can do many naughty things etc. etc.
It's plausible that Google has done some of these things, like doing some sort of data mining on everything that you type, for example (steal your password), and many official Google apps have ads if you don't pay them.
Definitely. All mobile keyboards become keyloggers if you enable the spellcheck feature or autocomplete / suggestion feature or any AI feature on them (because they need to collect data to "improve service"). Apple also has made changes to its mobile OS when it helps data collection. E.g. allowing messenger apps like WhatsApp to integrate with the Phone app ensures that Apple now knows who you call (voice / video) on WhatsApp.
Last year Australians reported losing AU$20 million to phishing attacks, and AU$318 million to scams of all types.
It stands to reason that financial service industry peak bodies are in conversation with governments and digital service providers, including data providers, to try to better protect users.
There are obvious conflicting goals, and the banks / governments can’t really appear to be doing nothing.
And technical users are almost certainly lacking a representative at the table, and are the group that has the least at stake. Whacko fringe software-freedom extremists, they probably call us.
Yeah. I mean the irony is that the one advantage of having a controlled and monitored app store would be that the entity monitoring it enforces certain standards. Games don't need access to your contacts, ever. If Google Play would just straight up block games that requested unnecessary permissions, it might have value. Instead we have 10,000 match-three games that want to use your camera and read all your data and Google is just fine with that. If the issue was access to personal data, a large proportion of existing apps should just be banned.
I really think all permissions systems need what we had back in xposed/appops days:
Permissions should ~always be "accept (with optional filters)", "deny", and "lie". If the game wants contacts access and won't take no for an answer, I should be able to feed it a lie: empty and/or fake and/or sandboxed data. It's my phone and my data, not the app's.
We had it over a decade ago, xposed supported filtered and fake data for many permissions. It's strictly user-hostile that Android itself doesn't have this capability.
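Something along these lines, as a toy sketch of the accept / deny / lie model (the names are made up; this isn't a real Android or Xposed interface):

```kotlin
// Per-app, per-permission decision: real data, hard denial, or plausible fakes.
enum class Decision { ACCEPT, DENY, LIE }

data class Contact(val name: String, val phone: String)

class ContactsGate(private val real: List<Contact>) {
    // What the app actually receives depends on the user's decision.
    fun query(decision: Decision): List<Contact>? = when (decision) {
        Decision.ACCEPT -> real          // real data (optionally filtered)
        Decision.DENY   -> null          // hard failure the app can detect
        Decision.LIE    -> emptyList()   // fake/empty data, the app is none the wiser
    }
}

fun main() {
    val gate = ContactsGate(listOf(Contact("Alice", "+1 555 0100")))
    println(gate.query(Decision.LIE))   // [] -- the game stops nagging, my contacts stay mine
}
```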
so no, it's not necessary at all. and many apps identify OTPs and give you an easy "copy to clipboard" button in the notification.
but that isn't all super widely known and expected (partly because not all apps or messages follow it), so it's not something you can rely on users denying access to.
Installing apps from sources that are not the Play Store requires a bit of technical knowledge anyway. My grandma is not going to download a random APK and give all the necessary permissions to install it and run it.
No, that is not done via developer mode. When you download or try to open an APK from any app, it asks you if you want to allow it to install apps and sends you to the configuration dialog. You still have to validate the app installation manually through another dialog. In that case I usually leave the config dialog open while the app is installed, then disable the app permission right after installing because that option is usually not easy to find. I usually only do it once on a new smartphone, to install F-Droid from a browser, then allow F-Droid and Aurora Store permanently.
I think that is the part that should be fixed: users should be able to allow a one-time exception, to avoid leaving that permission activated by mistake. I don't need to permanently allow a web browser to install apps.
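Conceptually something like this, as a minimal sketch of a one-shot grant that is consumed by the first install (the names are invented; this isn't how Android's real install-from-unknown-sources setting behaves today):

```kotlin
// A single-use "may install an app" grant per source app, consumed on use.
class InstallSourcePolicy {
    // Package names of sources (browser, file manager, ...) with a one-shot grant.
    private val oneShotGrants = mutableSetOf<String>()

    fun grantOnce(sourcePackage: String) {
        oneShotGrants.add(sourcePackage)
    }

    /** True if this source may perform one install; the grant is consumed by the check. */
    fun consumeGrant(sourcePackage: String): Boolean = oneShotGrants.remove(sourcePackage)
}

fun main() {
    val policy = InstallSourcePolicy()
    policy.grantOnce("org.mozilla.firefox")              // browser used to fetch the F-Droid APK
    println(policy.consumeGrant("org.mozilla.firefox"))  // true: this one install goes ahead
    println(policy.consumeGrant("org.mozilla.firefox"))  // false: permission no longer active
}
```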
> Note also that while they are not completely removing sideloading (for now) they are introducing further restrictions on it, including gate-keeping by them.
This blog post is specifically saying there will be a way to bypass the gatekeeping on Google-blessed Android builds, just as we wanted.
> But that kind of privacy based security model is anathema to Google because its whole business model is based on violating its users' privacy.
Despite this, they sell some of the most privacy-capable phones available, with the Pixels having unlockable bootloaders. Even without unlocking the bootloader to install something like GrapheneOS, they support better privacy than the other mass market mobile phones by Samsung and Apple, which both admittedly set a low bar.
If they are concerned about malware then one of the obvious solutions would be safeguarding their Play Store. There is significantly less scamming on iPhone because Apple polices their App Store. Meanwhile, scam apps that I reported are still up on the Google Play Store.
> if the problem is indeed malware getting access to personal data, then the very obvious solution is to ensure that such personal data is not accessible by apps
Then you'd have the other "screaming minority" on HN show up, the "antitrust all the things" folks.
Your first link shows a graph that indicates more than 50% of Americans believe there is at least some competition, or a lot of competition; and that less than 1/3rd believe there is not enough, or no, competition in every sector of the economy that would be relevant to this discussion.
And that most Americans believe that bigger companies tend to have lower prices than smaller ones.
It’s not particularly clear then that there should be a lot of motivation to change things.
You're choosing the questions that have framing issues:
> more than 50% of Americans believe there is at least some competition, or a lot of competition in every sector of the economy that would be relevant to this discussion.
We're talking about Google and Apple but the relevant category would be "technology companies". Do phone platforms or mobile app distribution stores have "a lot of competition"? It's hard to see how anybody could think that. Do games and AI and web hosting? Sure they do. But they're lumping them all together.
They're also using "some competition" as the second-to-highest amount of competition even though that term could reasonably apply to a market where one company has 90% market share but not 100%, and it's confusingly similar to "not much competition". And they're somehow showing oil and gas as having less competition than telecommunications when oil and gas is a textbook fungible commodity and telecommunications is Comcast. That question has issues.
> And that most Americans believe that bigger companies tend to have lower prices than smaller ones.
This is the thing where Walmart has lower prices than the mom and pop. That doesn't imply that Walmart has better quality or service than a smaller company, and it doesn't imply that Walmart is operating in a consolidated market. Retail is objectively competitive in most areas.
Whereas when a big company is in a consolidated market, "big companies tend to have lower prices" doesn't hold and you get Google and Apple extracting 30%.
Moreover, the relevant part of that link was this part: More than two thirds of people, including the majority of both parties, support antitrust laws, six times as many people think they're not strict enough than think they're too strict and significantly more people agree with "the government should break up big tech" than disagree.
> On the other hand, maybe if the railways weren’t broken up the USA might have been crisscrossed with high speed rail by now.
Eh. The rails themselves are a natural monopoly in the same way roads are. It's one of the few things it makes sense to have the government build, or at least contract to have someone build, and then provide to everyone without restriction.
Meanwhile train cars and freight hauling and passenger service aren't any more of a natural monopoly than taxis or trucks. They get monopolized if someone is allowed to leverage a monopoly over the tracks into a monopoly over the rest of it, but that's unnecessary and undesirable. Separating them out allows the market that can be competitive to be competitive. Which is the same reason you don't want a tech monopoly leveraging it into control over ancillary markets that could otherwise be competitive.
There are two main reasons train service in the US is a shambles. The first is that the population density is too low, especially in the west. How many people do you expect to be riding a train from Boise to Des Moines on a regular basis? And the second is that truck drivers don't like freight rail, car companies don't like passenger rail and oil companies don't like either one, and they all lobby against anything that would make it better in the parts of the country where it could actually work. It's hard to make something good when there are millions of voters and billions of dollars trying to get it to suck.
>Why should apps have access to a user's SMS / RCS?
can you imagine the outrage from all the exact same people who are currently outraged about developer verification if google said they were cutting off any third-party app access to SMS/RCS?
Google have their own reasons too. They would love to kill off YouTube ReVanced and other haxx0red clients that give features for free which Google would rather sell you on subscription.
Just look at everything they've done to break yt-dlp over and over again. In fact their newest countermeasure is a frontpage story right beside this one: https://news.ycombinator.com/item?id=45898407
I can easily believe that Google's YouTube team would love to kill off such apps, if they can make a significant (say ≥1%) impact on revenue. (After all, being able to make money from views is an actual part of the YouTube product features that they promise to “creators”, which would be undermined if they made it too easy to circumvent.)
But having seen how things work at large companies including Google, I find it less likely for Google's Android team to be allocating resources or making major policy decisions by considering the YouTube team. :-) (Of course if Android happened to make a change that negatively affected YouTube revenue, things may get escalated and the change may get rolled back as in the infamous Chrome-vs-Ads case, but those situations are very rare.) Taking their explanation at face value (their anti-malware team couldn't keep up: “bad actors can spin up new harmful apps instantly. It becomes an endless game of whack-a-mole. Verification changes the math by forcing them to use a real identity”) seems justified in this case.
My point though was that whatever the ultimate stable equilibrium becomes, it will be one in which the set of apps that the average person can easily install is limited in some way — I think Google's proposed solution here (hobbyists can make apps having not many users, and “experienced users” can opt out of the security measures) is actually a “least bad” compromise, but still not a happy outcome for those who would like a world where anyone can write apps that anyone can install.
I would like a world where I have the final say over whether I should have a final say.
One way to achieve this is to only allow sideloading in "developer mode", which could only be activated from the setup / onboarding screen. That way, power users who know they'll want to sideload could still sideload. The rest could enjoy the benefits of an ecosystem where somebody more competent than their 80-year-old nontechnical self can worry about cybersecurity.
Another way to do this would be to enforce a 48-hour cooldown on enabling sideloading, perhaps waived if enabled within 48 hrs of device setup. This would be enough time for most people to literally "cool off" and realize they're being scammed, while not much of an obstacle for power users.
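A minimal sketch of that cooldown logic, assuming the 48-hour figures above (nothing here is an existing Android mechanism; it's just my proposal spelled out):

```kotlin
import java.time.Duration
import java.time.Instant

val COOLDOWN: Duration = Duration.ofHours(48)

data class DeviceState(
    val setupCompletedAt: Instant,
    var sideloadRequestedAt: Instant? = null,
)

fun sideloadingEnabled(state: DeviceState, now: Instant = Instant.now()): Boolean {
    val requested = state.sideloadRequestedAt ?: return false
    // Waived when the toggle was flipped within 48 h of initial device setup (power users).
    if (Duration.between(state.setupCompletedAt, requested) <= COOLDOWN) return true
    // Everyone else waits out the cooling-off period before the toggle takes effect.
    return Duration.between(requested, now) >= COOLDOWN
}

fun main() {
    val state = DeviceState(setupCompletedAt = Instant.now().minus(Duration.ofDays(30)))
    state.sideloadRequestedAt = Instant.now()
    println(sideloadingEnabled(state))  // false -- check back in 48 hours
}
```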
You can sideload, I mean INSTALL, software on any linux desktop. Yet there are still tons of people saying that desktop linux has gotten good enough for most of everyone's grandma to daily-drive.
When everyone's Grandma is running Linux then the Indian scammers will know how to trick Grandma into thinking dmesg spam is "a virus" and just install this totally-not-malware, just like they do with the windows event viewer.
In other words, it's not any quality of Linux other than how niche it is.
It's an excellent example of the fruitlessness of technical solutions to people problems. Some people are just destined to get scammed, and it isn't worth throwing away General Purpose Computing to try to help them. Be present in Grandma's life and she won't be desperate to trust the nice man on the phone just to have someone to talk to. If it weren't this it would be iTunes gift cards, or Your Vehicle's Extended Warranty, or any number of other avenues.
The actual stopping power here is that any grandma who uses a Linux desktop has a family member (or other contact) who helps with technical matters. They've been educated about internet & phone scams, and will immediately call their technical contact when anything is suspicious.
This becomes a problem when someone asks me for help with their phone and I want to point them to some apps from F-Droid to reduce their exposure to surveillance marketing.
Of course that's a side effect Google probably wouldn't be sad about.
These two solutions wouldn't work for me. My phone is covered, I use a custom ROM, but I like being able to help people install cool stuff that's not necessarily on the Play store, organically, without planning.
I'm not sure I like the idea of "you have to wait 48 hours now for sideloading in case you are an idiot". Most idiots will then have sideloading on after 48 hours and still get hit with the next scam anyway.
You’re still proving the point above, which is ignoring the fact that the restriction is specifically targeted at a small number of countries. Google is also rolling out processes for advanced users to install apps. It’s all in the linked post (which apparently isn’t being read by the people injecting their own assumptions)
If Google were doing this to protect against YouTube ReVanced, they wouldn’t be rolling it out in only a small number of countries. That’s an illogical conclusion to draw from the facts.
"Android" is really a lot of different code but most of it is the Apache license or the GPL. Google Play has its own ToS, but why should that have to do with anything when you're not using it?
iPhone has always been that way (try installing an .ipa file that's not signed with a valid Apple developer certificate). For Google, forced app verification is a major change. Xbox I don't know.
> Yeah, let's ask the Debian team about installing packages from third party repos.
Debian already is sideloaded on the graciousness of Microsoft's UEFI bootloader keys. Without that key, you could not install anything else than MS Windows.
Hence you don't realize how good of an argument it is, because you even bamboozled yourself without realizing it.
The argument gets even worse if we want to discuss Qubes and other distributions that are actually focused on security, e.g. via firejail, hardened kernels or user namespaces to sandbox apps.
"Debian already is sideloaded on the graciousness of Microsoft's UEFI bootloader keys. Without that key, you could not install anything else than MS Windows."
This is only true if you use Secure boot. It is already not needed and insecure so should be turned off. Then any OS can be installed.
> This is only true if you use Secure boot. [...] so should be turned off. Then any OS can be installed.
You can only turn off Secure Boot because Microsoft allows it. In the same way Android has its CDD with rules all OEMs must follow (otherwise they won't get Google's apps), Windows has a set of hardware certification requirements (otherwise the OEM won't be able to get Windows pre-installed), and it's these certification requirements that say "it must be possible to disable Secure Boot". A future version of Windows could easily have in its hardware certification requirements "it must not be possible to disable Secure Boot", and all OEMs would be forced to follow it if they wanted Windows.
And that already happened. Some time ago, Microsoft mandated that it must not be possible to disable Secure Boot on ARM-based devices (while keeping the rule that it must be possible to disable it on x86-based devices). I think this rule was changed later, but for ARM-based Windows laptops of that era, it's AFAIK not possible to disable Secure Boot to install an alternate OS.
I agree with you and run with it disabled myself, but some anti-cheat software will block you if you do this. Battlefield 6 and Valorant both require it.
Turning off UEFI secure boot on a PC to install another "unsecure distribution"
vs.
Unlocking fastboot bootloader on Android to install another "unsecure ROM"
... is it not the exact same language? It isn't really about security but about absolute control of the device.
The parallels are astounding, given that Microsoft's signing process for binaries now also depends on WHQL and the Microsoft Store. Unsigned binaries can't be installed unless you "disable security features".
My point is that it has absolutely nothing to do with actual security improvements.
Google could've invested that money instead into building an EDR and called it Android Defender or something. Everyone worried about security would've installed that Antivirus. And on top of it, all the fake Anti Viruses in the Google Play Store (that haven't been removed by Google btw) would have no scamming business model anymore either.
"... is not the exact same language, which isn"t really about security but about absolute control of the device.
The parallels are astounding, given that Microsoft's signing process of binaries also meanwhile depends on WHQL and the Microsoft Store. Unsigned binaries can't be installed unless you "disable security features".
My point is that it has absolutely nothing to do with actual security improvements."
While it's possible to install and use Windows 11 without Secure Boot enabled, it is not a supported configuration by Microsoft and doesn't meet the minimum system requirements. Thus it could negatively affect the ability to get updates and support.
> It is already not needed and insecure so should be turned off.
The name “Secure Boot” is such an effective way for them to guide well-meaning but naïve people's thought process to their desired outcome. Microsoft's idea of Security is security from me, not security for me. They use this overloaded language because it's so hard to argue against. It's a thought-terminating cliché.
Oh, you don't use <thing literally named ‘Secure [Verb]’>?? You must not care about being secure, huh???
Dear Microsoft: fuck off; I refuse to seek your permission-via-signing-key to run my own software on my own computer.
Also, Secure Boot is vulnerable to many types of exploits. Having it enabled can be a danger in itself, as it can be used to infect the OS that relies on it.
I do not want to be in the business of key management. This is not something that needed encryption. More encryption ≠ better.
I also dual-boot Windows and that's a whole additional can of worms; not sure it would even be possible to self-key that. Microsoft's documentation explicitly mentions OEMs and ODMs and not individual end users: https://learn.microsoft.com/en-us/windows-hardware/manufactu...
Name a single bootkit that was actually prevented, i.e. one that wasn't able to avoid the encryption and signature verification toolchain altogether.
Malware developers know how to avoid this facade of an unlocked door.
Users do not.
That's the problem. It's not about development, it's about user experience. Most users are afraid to open any Terminal window, let alone type a command into it.
If you argue about good intent from Microsoft here, think again. It's been 12 years since Stuxnet, and the malware samples still work today. Ask yourself why, if the reason isn't utter incompetence on Microsoft's part. It was never about securing the boot process, otherwise this would've been fixed within a day back in 2013.
Pretty much all other bootkits also still work btw, it's not a singled out example. It's the norm of MS not giving a damn about it.
The countries that go after Google are the first wave, they're applying these restrictions globally not much later.
The linked post is full of fluff and low on detail. Google doesn't seem to have the details themselves; they're continuing with the rollout while still designing the flow that will let experienced users install apps like normal.
yt-dlp's days are fairly numbered as Google has a trump card they can eventually deploy: all content is gated behind DRM. IIRC the only reason YouTube content is not yet served exclusively through DRM is to maintain compatibility with older hardware like smart TVs.
Youtube already employs DRM on some of their videos (notably their free* commercial movies). if you try to take a screenshot, the frame is blacked out. this can be bypassed by applying a CSS blur effect of 0 pixels, permitting extraction; detection of DRM protection and applying the bypass is likely trivial for the kinds of people already writing scripts and programs utilizing yt-dlp. the css method of bypass has been widely disseminated for years (over a decade?), but programmers love puzzles, so a sequel to the current DRM implementation seems justified.

YT could also substantially annoy me by expiring their login cookies more frequently; I think I have to pull them from my workstation every month or two as-is? at some point, they could introduce enough fragility to my scripts where it's such a bother to maintain that I won't bother downloading/watching the 1-3 videos per day I am today -- but otoh, I've been working on a wasm/Rust mp4 demuxer and from-scratch WebGL2 renderer for video and I'm kind of attached to seeing it through (I've had the project shelved for ~3 weeks after getting stuck on a video seek issue), so I might be willing to put a lot of effort into getting the videos as a point of personal pride.
the real pain in the butt in my present is Patreon because I can't be arsed to write something separate for it. as-is, I subscribe to people on Patreon and then never bother watching any of the exclusive content because it's too much work. some solutions like Ghost (providing an API for donor content access) get part of the way to a solution, but they are not themselves a video host, and I've never seen anyone use it.
> this can be bypassed by applying a CSS blur effect of 0 pixels, permitting extraction
That's not real DRM then. The real DRM is sending the content such that it flows down the protected media path (https://en.wikipedia.org/wiki/Protected_Media_Path) or equivalent. Userspace never sees decrypted plaintext content. The programmable part of the GPU never sees decrypted plaintext content. Applying some no-op blur filter would be pointless since anything doing the blur couldn't see the pixels. It's not something you can work around with clever CSS. To compromise it, you need to do an EoP into the ordinarily non-programmable scanout path of the GPU, or find bad cryptography or a side channel that lets you get the private key that can decode the frames. Very hard.
Is this how YT works today? Not on every platform. Could it work this way? Definitely. The only thing stopping them is fear of breaking compatibility with a long tail of legacy devices.
Something I've never understood about DRM is, if the content is ultimately played on my device, what stops me from reverse engineering their code to make an alternative client or downloader? Is it just making it harder to do so? Or is there a theoretical limit to reverse engineering that I'm not getting? Do they have hardware decryption keys in every monitor, inside the LCD controller chip?
in short and simple terms, those parasites colluded with hardware manufacturers and put a special chip in your computer and monitor that runs enslavement software
without opening it up physically there is no way to make it stop or get the raw stream before it's displayed
This. Some ways back I actually purchased a Blu-ray recording device only to learn that its firmware is deliberately crippled to accommodate someone's business model. There are people who do the unsung hero work, but those types of skills are not exactly common, and a business asshole is a dime a dozen any century you want to pick.
Yes, the decryption happens in hardware. For your OS (and potential capturing software running on it) the place where you see the video is just an empty canvas on which the hardware renders the decrypted image.
All levels of Widevine are cracked, but only the software-exclusive vulnerabilities are publicly available. It's only used for valuable content though (netflix/disney+/primevideo), so it might still work out for YouTube as no one will want to waste a vulnerability on a Mr. Beast slop video.
The reason they have different levels is that the DRM pitchmen got tired of everyone making fun of their ineffective snake oil, so they tried to make a version that was harder to break at the cost of not supporting most devices.
Naturally that got broken too, and even worse, broken when it's only supported by a minority of devices and content, because the more devices and content it's used for the easier it is to break and the larger the incentive to do it.
If you tried to require that for all content then it would have to be supported by all devices, including the bargain bin e-waste with derelict security, and what do you expect to happen then?
I don’t have any personal links but know that there is a constant cat-and-mouse game of cracking Widevine devices for their L1 keyboxes and using them on high-value content (as mentioned).
That’s why a lot of low end Android devices often have problems playing DRMed content on the Web: their keyboxes got cracked open and leaked wide enough for piracy that they got revoked and downgraded down to L3.
Too bad that I'm going iPhone if Google removes sideloading, and now that I know about ReVanced they aren't getting any more than the zero dollars that YouTube and YouTube Music are worth from me.
If I'm going to live in a walled garden it's going to the fanciest
If they're going to reduce me to a user, iOS is the better choice. I had an iPhone before and it's a picture taking, instagram, social media machine with iMessage—bringing the console wars to normies since inception.
Because the hardware is so constrained, an iPhone lasts forever compared to a similar Android. My two-year-old Pixel is slow now, but I know people completely happy with a five-year-old iPhone. Pause, I checked, and the oldest iPhone that receives updates is an iPhone 11, which is the exact model I had before going back to Android.
I have multiple generations of pixel phones and could not tell the difference in performance between them in basic tasks. Maybe because i installed GrapheneOS which makes both stock android and ios feel like a bloat and spyware riddled toy.
Developers of these apps would have little motivation if the maximum audience size was cut down to the very few who would use adb. The ecosystem would die.
That uses a workaround based on WiFi debugging even though it's all local. It doesn't run if you're not connected to a trusted WiFi network, you have to set it all up when connecting to a new network, etc.
Not only are users not connected to WiFi all the time, but in many developing countries people often have no WiFi at home and rely on mobile data instead. It's a solution, but not a solution for everyone or a solution that works all the time.
And how do you estimate the audience that even cares about those issues?
I think the number of people caring about alternative app stores, F-Droid or whatever is very similar to the number of people willing to use adb if necessary, so rather small.
But the ecosystem exists, regardless of what the absolute number is, and it would be bad to lose it. If the platform was more open like Windows the ecosystem would grow, if it was less open like iOS it would die.
> In early discussions about this initiative, we've been encouraged by the supportive initial feedback we've received.
> the Brazilian Federation of Banks (FEBRABAN) sees it as a “significant advancement in protecting users and encouraging accountability.” This support extends to governments as well
> We believe this is how an open system should work
Google isn't "hinting" that they're doing this under pressure, that announcement makes it quite clear that this is Google's initiative which the governments are supportive of because it's another step on a ratcheting mechanism that centralizes power.
> because the governments of countries where such scams are widespread will hold Google responsible
Your comment is normalizing highly problematic behavior. Can we agree that vague "pressure from the government" shouldn't be how policies and laws are enacted? They should make and enforce laws in a constitutional manner.
If you believe that it's normal for these companies and government officials to make shadow deals that bypass the rule of law, legal procedures, separation of powers and the entire constitutional system of governance that our countries have, then please drop the pretense that you stand for democracy and the rule of law (assuming that you haven't already).
Otherwise we need to be treating it for what it is - a dangerous, corrupt, undemocratic shift in our system of governance.
> there cannot exist an easy way for a typical non-technical user to install “unverified apps” (whatever that means), because the governments of countries where such scams are widespread will hold Google responsible.
What, the same way they hold Microsoft responsible for the fact that you can install whatever you want in Windows?
Obviously, there can exist an easy way for a non-technical user to install unverified apps, because there has always been one.
This is actually a good point, and something I've been wondering about too. What changed between the 90s and now, that Microsoft didn't get blamed for malware on Windows, but Google/Apple would be blamed now for malware on their devices? It seems that the environment today is different, in the sense that if (widespread) PCs only came into existence now, the PC makers would be considered responsible for harms therefrom (this is a subjective opinion of course).
Assuming this is true (ignore if you disagree), why is that? Is it that PCs never became as widespread as phones (used by lots of people who are likely targets for scammers and losing their life savings etc), or technology was still new and lawmakers didn't concern themselves with it, or PCs (despite the name) were still to a large extent "office" devices, or the sophistication of scammers was lower then, or…? Even today PCs are being affected by ransomware (for example) but Microsoft doesn't get held responsible, so why are phones different?
What changed is that Apple made the masses familiar with the concept of installing software only from a store with a vetting process. For short, the walled garden. That was mostly an alien thing in the world of software. All of us grew up with the possibility of getting an installer and installing it whenever we wanted. There were some forms of protection against piracy but nothing else.
Once Apple created the walled garden every other company realized how good it could be for their bottom lines and attempted to do the same thing.
So, to answer your question, Microsoft got blamed for viruses and made fun of but there wasn't a better way in the mainstream. There is one now.
PCs will resist this trend for a while because it's also mainstream that they are used to do work. Many people use a PC every day with some native application from a company they have a direct contract with. For example: accounting software. Everybody can add another example from their own experience. Those programs don't come from the Windows store and it will be a long term effort to gatekeep everything into the store or move them into a web browser.
The .NET MAUI technology we had a post about yesterday is one of the bricks that can build the transition.
> So, to answer your question, Microsoft got blamed for viruses and made fun of but there wasn't a better way in the mainstream. There is one now.
I don't think App Store is a better way.
From my point of view, people keep mistaking the actual progress - generalised sandboxing and reduced API surface - for the major regression - controlled distribution. At the beginning of the App Store, when the sandboxing and APIs were poor, there were frequent security issues.
Apple's marketing magic is somehow convincing people that it's their questionable vetting which made things secure and not the very real security innovations.
I'm with you and personally I boycott Apple because of the walled garden, for what it's worth. However it is a better way (a more convenient way?) for companies to make money and it gave an idea to legislators and regulators. Now they expect that the owner of the OS can decide what runs and what does not run on their OS and be made accountable for it.
Windows 95 (and patronage) had become a shitshow. It’s easy to forget how much time us tech types were spending “fixing” uncle’s PC that somehow got malware on it. How we touted Linux as an escape from the hellscape of crapware.
It was into this void that the “everything seems new” iPhone stepped and ventured out on a different course. I’m neither speaking for nor against Apple’s normalization of an App Store as a primary source of updates, just recalling the way things were, and positing that Apple was trying a different approach that initially offered a computing platform that wasn’t the hellscape that the MS platform was quickly becoming.
Windows 95 was fundamentally broken: if I recall correctly there were far fewer security features (accounts, file permissions, etc.). Nowadays there are fewer problems with it.
It's not that it was broken, it's that security was not really a thing. You had your antivirus to protect you from people adding stuff to discs, but that's it. Windows 95 was just an exe file in the windows folder that you could run from DOS.
Windows NT / OS/2 did have more security as they were meant for shared environments, but even there, corporations ended up using stuff like Novell NetWare to get the actual networking services.
Windows 2000 was the first version of consumer windows based on the NT kernel instead of the DOS / Windows 95/98/ME based systems. I still remember running around the office updating windows 2000 machines to service pack 4 to protect us against the first real massive virus "ILOVEYOU".
Edit: Still on first coffee, sorry about the ramblings
Sure, my point was that even if iPhone ecosystem is more secure than Windows 95, I do not think this is due mostly to the "walled garden", but because (as you mention) Windows 95 just did not care about security at all. By the time iPhone appeared the security of Windows systems (2000 and later) had already improved (even if not perfect) and there was a possibility to configure it more "locked down", if you wanted.
I always blamed Microsoft for Windows insecurity. But seriously, Windows did not have any vetting process for apps and apps didn't really have access to money. Google's problem is that they claim Android is a secure way to do banking but it isn't.
Nah, that's the beauty of it. Liberal principles make a much more robust political foundation than post-liberal principles. The US is known for the former despite current flirtations with the latter. However, liberal principles aren't tied to any one country. Fortunately for us!
It's not a separate problem, Google are actively suppressing any possibility of open mobile hardware. They force HW manufacturers to keep their specs secret and make them choose between their ecosystem and any other, not both. There's a humongous conflict of interest and they're abusing their dominant position.
> They force HW manufacturers to keep their specs secret
Spoken like someone who has never ever worked with any hardware manufacturers. They do not need reasons for that. They all believe their mundane shit is the most secret-worthy shit ever. They have always done this. This predates google, and will outlive it.
Given how antitrust is not really working right now I would say this is debatable. Also monopolies in the past were forced to do various things to keep their status for longer.
> I bought the hardware, therefore I have the right to modify and repair. Natural right, full stop.
There is absolutely nothing "natural" about trading your pile of government promises for the right to call government men with guns and sticks if you are alienated from the option to physically control an object. Your natural right is to control what you can defend.
Rights are what we decide them to be. Or rather, what people in power decide them to be, i.e. people who hold and issue large amounts of government promises, and recruit and direct the most men with guns and sticks.
Oh, so you're good with everyone having the "natural right" to turn handguns into automatic weapons simply because they find themselves in possession of the correct atoms? How about adding a 3rd story on the top of your house without needing a permit or structural evaluation?
Note that adding "full stop" pointlessly to the end of sentences does not strengthen your argument.
I don't think it's illegal to do whatever you want with your phone. That doesn't mean Google is legally required to make it easy or even possible. That being said, I think ethically they should allow it, and considering their near-monopoly status they should be forced to keep things open. In fact there should be right-to-repair laws too.
I suppose you have the right to do whatever you want with it, including zapping it in the microwave or using it as a rectal probe. I am not sure that right extends as far as forcing companies to deliver a product to your specifications (open software, hardware, or otherwise).
You’re still missing the point the comment is making: In countries where governments are dead set on holding Google accountable for what users do on their phones, it doesn’t matter what you believe to be your natural right. The governments of these countries have made declarations about who is accountable and Google has no intention of leaving the door open for that accountability.
You can do whatever you want with the hardware you buy, but don’t confuse that with forcing another company to give you all of the tools to do anything you want easily.
That's deflection. There's Google blocking users from installing apps, and there's the OP insinuating that it might be because of government coercion, but there's no evidence to support this. Scammers pay Google to show ads to install apps; that's what the governments are holding Google responsible for, and it won't change by blocking app installs.
Malicious app delivery goes beyond Google ads. In Singapore, most scam app installs are from social engineering, e.g. install new app to receive payment, install new app to buy something for cheap.
I’m amazed at how gullible some people are but that’s how it is.
> there cannot exist an easy way for a typical non-technical user to install “unverified apps” (whatever that means), because the governments of countries where such scams are widespread will hold Google responsible.
You can also view this as a "tragedy of the commons" situation. Unverified apps and sideloading are actively abused by scammers right now.
> Meanwhile this very fact seems fundamentally unacceptable to many, so there will be no end to this discourse IMO.
I get that viewpoint and I'm also very glad an opt-out now exists (and the risk that the verification would be abused is also very real), but yeah, more information on what to do against scammers would also be needed.
It's not possible to provide a path for advanced users that a stupid person can't be coerced to use.
Moreover, it's not possible to provide a path for advanced users that a stupid person won't use by accident, either.
These are what drive many instances of completely missing paths for advanced users. It's not possible to stop coercion or accidents. It is literally impossible. Any company that doesn't want to take the risk can only leave advanced users completely out of the picture. There's nothing else they can do.
Google will fail to prevent misuse of this feature, and advanced users will eventually be left in the dust completely as Google learns there's no way to safely provide for them. This is inevitable.
Android could have, for example, a 24 hour "cooling off" period for sideloading approval. Much like some bootloader unlocking - make it subject to a delay.
That immediately takes the pressure off people who are being told that their bank details are at immediate risk.
> Android could have, for example, a 24 hour "cooling off" period for sideloading approval.
And, to prevent the scammer from simply calling back once the 24 hours are gone, make it show a couple of warnings (at random times so they can't be predicted by the scammer) explaining the issue, with rejecting these warnings making the cooling off timer reset (so a new attempt to enable would need another full 24 hours).
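A rough sketch of what I mean, with the window length, the number of prompts and all the names invented for illustration:

```kotlin
import java.time.Duration
import java.time.Instant
import kotlin.random.Random

class CoolingOffGate(private val window: Duration = Duration.ofHours(24)) {
    private var startedAt: Instant? = null
    private var promptTimes: List<Instant> = emptyList()
    private val confirmed = mutableSetOf<Instant>()

    /** User asks to enable sideloading; the 24 h clock starts and warnings are scheduled. */
    fun requestEnable(now: Instant = Instant.now()) {
        startedAt = now
        confirmed.clear()
        // A couple of warnings at unpredictable offsets inside the window,
        // so a scammer can't tell the victim exactly when to expect them.
        promptTimes = List(2) { now.plusSeconds(Random.nextLong(window.seconds)) }.sorted()
    }

    /** Called when the user answers one of the scheduled warning prompts. */
    fun answerPrompt(promptAt: Instant, stillWantsToEnable: Boolean) {
        if (stillWantsToEnable) {
            confirmed.add(promptAt)
        } else {
            // Heeding the warning cancels the request entirely; a new attempt
            // starts another full cooling-off period from scratch.
            startedAt = null
            promptTimes = emptyList()
            confirmed.clear()
        }
    }

    fun sideloadingEnabled(now: Instant = Instant.now()): Boolean {
        val start = startedAt ?: return false
        return Duration.between(start, now) >= window && promptTimes.all { it in confirmed }
    }
}
```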
The people gullible enough to fall for a scam like that are also gullible enough to follow more instructions 24 hours later. I think it could work if you could force a call to the phone and have an agent or even an AI that talks to the user and makes sure no scam is involved, then gives an unlock code based on device ID or something. But that would cost money and scammers would work around it anyway.
>It's not possible to provide a path for advanced users that a stupid person can't be coerced to use.
I actually think you might be wrong about this? Imagine if Google forced you to solve a logic puzzle before sideloading. The puzzle could be very visual in nature, so even if a scammer asked the victim to describe the puzzle over the phone, this usually wouldn't allow the scammer to solve it on the victim's behalf. The puzzle could be presented in a special OS mode to prevent screenshots, with phone camera disabled so the puzzle can't be photographed in a mirror, and phone call functionality disabled so a scammer can't talk you through it as easily. Scammers would tell the victim to go find a friend, have the friend photograph the puzzle, and send the photo to the scammer. At which point the friend hopefully says "wait, wtf is going on here?" (Especially if the puzzle has big text at the top like "IF SOMEONE ASKS YOU TO PHOTOGRAPH THIS, THEY ARE LIKELY VICTIM OF AN ONGOING SCAM, YOU SHOULD REFUSE", and consists of multiple stages which need to be solved sequentially.)
In addition to logic puzzles, Google could also make you pass a scam awareness quiz =) You could interleave the quiz questions with logic puzzle stages, to help the friend who's photographing the puzzle figure out what's going on.
I guess this could fail for users who have two devices, e.g. a laptop plus a phone, but presumably those users tend to have a little more technical sophistication. Maybe display a QR code in the middle of the puzzle which opens up scam awareness materials if photographed?
Or, instead of a "scam awareness quiz" you could could give the user an "ongoing scam check", e.g.: "Did a stranger recently call you on the phone and tell you to navigate to this functionality?" If the user answers yes, disable sideloading for the next 48 hours and show them scam education materials.
It would also fail for users who are differently abled. That sounds like an absolute nightmare for accessibility. Good news for preventing scams, but bad news for anyone without full mental and physical faculties.
I'm not sure why you couldn't make the flow I describe just as accessible as anything else in Android? But I'll grant your premise and respond anyways.
If the user lacks full mental faculties, they are part of the userbase we need to protect from scams. Most likely, a user without full mental faculties who is trying to sideload will be a scam victim.
If the user lacks the necessary physical faculties to "solve a puzzle on their phone", they probably get help from friends regularly; a friend should be able to help with sideloading. Enabling sideloading should be a one-time operation right?
Notice though that we don't forbid people from withdrawing cash from the bank in order to prevent this.
Warning about scams is fine, as is taking steps to make it harder, but once you start trying to completely remove the agency of mentally sound adults "for their own good" then we have a problem.
It seems to me if you raise the difficulty enough, and lower the success rate enough, at some point a given scam stops being economical. https://news.ycombinator.com/item?id=45913529
It's waaaay more complicated to download ADB and side load a random APK.
This is either a move towards tighter control of the platform or a government request. And somewhat ironic, given that iOS is being pressured to be a bit more open.
> there cannot exist an easy way for a typical non-technical user to install “unverified apps” (whatever that means), because the governments of countries where such scams are widespread will hold Google responsible.
But it is perfectly fine to sell crypto and other complex financial assets, from apps in the Play Store, to kids and other people that do not know what they are.
If "safety" takes control from you then it is implemented. If real safety puts profits in danger then it is fight against. Quite a dystopia.
Then let them do that for those countries. Not for everyone. I'm not in any of those autocratic countries. Or offer an opt out in the countries where this isn't a thing. Using adb is not really great for doing updates.
And also, I'm the owner of my device. Not my country.
I'm pretty sure Brazil doesn't have a law saying that Google must forbid sideload. I'm sure that government (be it President, Central Bank etc) doesn't pressure Google about it.
I'm sure some private actors (for example, banks) would love that smartphones are as tight as possible (reason: [0]). Perhaps the same reason applies to Google [1]. But no, "Brazil" isn't demanding that from Google.
[0]: consider that some virus (insecure apps, for example) could somehow steal information from bank apps (even as simple as capture login information). The client might sue the bank and the bank might have to prove that their app is secure and the problem was in the client's smartphone.
[1]: the client, the bank etc might complain to Google that their Android is insecure
Aha - that is a much better explanation than I assumed, aka "the people forced Google to behave". So Google is scared of having to pay fines or having their CEOs end up in jail. I actually think there should be a new rule - easy-jail mode for CEOs globally. It doesn't have to be long, but say a few days in jail for ignoring the law, and rightly hold the CEOs responsible for that. You earn a lot of money, so you also gotta take the risk.
> From the very first announcement of this, Google has hinted that they were doing this under pressure from the governments in a few countries. (I don't remember the URL of the first announcement, but https://android-developers.googleblog.com/2025/08/elevating-... is from 2025-August-25 and mentions “These requirements go into effect in Brazil, Indonesia, Singapore, and Thailand”.)
In ye goode olde times, the US would have threatened invasion and that would have been the end of it.
Half /s, because it actually used to be the case that the US government exercised its massive influence (and not just militarily) onto other countries for the benefit of its corporations and/or its citizens... these days, the geopolitical influence of the US has been reduced to shreds and the executive's priorities aren't set by doing what's (being perceived as being) right but by whomever pays the biggest bribes.
Why can't they just put up a big, red warning: "Never enable software installation if someone asks you to (over the phone or via message). If you're unsure, check out this article on scams."?
> "Never enable software installation if someone asks you..."
Imagine a situation in which a frightened, stressed user sees such a message on their screen. Meanwhile, a very convincing fake police officer or bank representative is telling them over the phone that they must ignore this message due to a specific dangerous emergency, in order to save the money in their bank account. Would the user realize at that moment that the message is right and the person on the phone is a thief? I'm not so sure.
What if there is a 12-hour delay to unlock "power user mode", and during that entire 12-hour unlock period, the phone keeps displaying various scam education information to help even an unsophisticated user figure out what's going on? Surely Google can devote a few full-time employees to keeping such educational materials up to date, so they ideally contain detailed descriptions of the most common scams a user is going to be subject to at any given time.
This would help for sure. Ideally, the phone should stay in "expert mode" for a limited time only, like 1 hour.
However, there is still a danger that scammers will call after 12 hours, and they will be more convincing than educational material (or the user may not have read it).
> However, there is still a danger that scammers will call after 12 hours
It is unlikely that will work. Scammers talk all the time and create a sense of urgency; people have trouble thinking and listening at the same time, and they tend to stop thinking completely when in a rush. 12 hours of a break will give the victim time to think at least. It will probably also give them time to talk about it with someone, or to google things.
> because the governments of countries where such scams are widespread will hold Google responsible.
How many virus infections and scams was Microsoft held responsible for? What about Red Hat, or Debian?
And at least let Google plainly state this, instead of inventing legal theories based on vague hints from their press releases, to explain why their self-serving user-hostile actions are actually legally mandatory.
> the governments of countries where such scams are widespread will hold Google responsible.
This argument is FUD at this point.
Sovereign governments have ways to make clear what they want: they pass laws, and there needs to be no back deal or veiled threats. If they intend to punish Google for the rampant scams, they'll need a legal framework for that. That's exactly how it went down with the DMA, and how other countries are dealing with Google/Apple.
Otherwise we're just fantasizing on vague rumors, exchanges that might have happened but represent nothing (some politicians telling bullshit isn't a law of the country that will lead to enforcement).
This would be another story if we were discussing exchanges with the mafia and/or private parties, but here you're explicitly mentioning governments.
That's a disingenuous argument though: they are in that position because they chose to make themselves the only way that a 'normal' user is able to install software on these devices. If not for that these governments wouldn't have a point to apply pressure on in the first place.
BTW, Stallman and FSF have been saying this the whole time - if you become the only gatekeeper, don't be surprised when government people show up and force you to ban apps or users from your platform.
This is just lies spread by the very people that created this system in the first place: if PCs can have apps without "verification", then so can a phone.
Imagine if they tried to hold the entire world to the standards of Russia, China or North Korea. Yet they don't. This is just an excuse from them, or else they would only enable it in those countries. They don't hold the entire world to Chinese standards so why should they hold them to Brazilian standards? The only reasonable answer is: they also like those standards.
No, then the results of many google web searches would not put scam sites at the top over the official sites. Google is fine with people being scammed. As long as they get their cut. Large corporations don't have empathy.
Meta ads too. It’s bonkers the type of ads they approve, straight up scams or obvious misinformation (some prominent figure is in jail! Click here to find out!)
From what I've seen, the millions lost to scams are through social engineering: cold calls masquerading as the authorities, phishing, pig butchering. There are plenty of scam apps on the Play Store harvesting data as well, but not a single real-life instance of malware installed outside the officially sanctioned platform.
> because the governments of countries where such scams are widespread will hold Google responsible.
This is the unsurprising consequence of trying to hold big companies accountable for the things people do with their devices: The only reasonable response is to reduce freedoms with those devices, or pull out of those countries entirely.
This happened a lot in the early days of the GDPR regulations when the exact laws were unclear and many companies realized it was safer to block those countries entirely. Despite this playing out over and over again, there are still constant calls on HN to hold companies accountable for user-submitted content, require ID verification, and so on.
Yes. The same goes with payment processing. I hate visa/mastercard as much as the next person. But if the court says they're accountable for people who buy drug/firearm/child porn, then it seems to be a quite reasonable reaction for them to preemptively limit what the users can buy or sell.
The government(s) have to treat the middlemen as middlemen. Otherwise they are forced to act as gatekeepers.
These two things are not the same. The GDPR afforded rights to common people. Those companies that would pull out are the ones that were abusing data that was never theirs and could no longer do so.
Nah. I know of several startups that had nothing but anonymous telemetry and they blocked all Europe because there was no capacity for compliance. I was at an incubator at the time and the decision was unanimous across a dozen or so companies. It’s not like anyone was going to lose out on VC money from that market
And it's a bit hard to believe that these several startups functioned without ever collecting names, emails, IP, phone number, or address of any lead or customer ever.
Maybe they did? Who knows? Never gonna find out because no one had time to look into it. It certainly wasn’t done with malicious intent, perhaps by accident or oversight, which is likely the situation in most small companies.
If nobody pushed back on anything we'd all be subjected to the laws of the worst country on earth, because big tech companies want to do business there, and putting an if/else around the user's country takes effort.