There's also another strong argument against favoring tests over types, which is maintainability. If I go ahead and change a variable type from being non-nullable to nullable, I instantly get a complete list of all the places where I have to handle that, which makes it much, much faster to generalize the logic. But in a dynamic language, all tests written by the previous developer were written at a time when this variable was assumed to never be null and use non-null values for all test vectors, so good luck using those to find the places that need to be fixed.
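A minimal Kotlin sketch of what I mean (the User/sendWelcome names are made up for illustration): change a field from String to String? and the compiler immediately flags every call site that still assumes non-null.

    // After generalizing: email used to be String, now it may be absent.
    data class User(val name: String, val email: String?)

    fun sendWelcome(user: User) {
        // This line no longer compiles: only safe (?.) or non-null asserted (!!.)
        // calls are allowed on a nullable receiver.
        // println(user.email.lowercase())

        // The compiler forces every such place to handle the null case explicitly:
        user.email?.let { println("Welcome, ${user.name} <$it>") }
    }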
On top of that, every test that could have been omitted due to a type system incurs an extra maintenance tax that you have to pay when you change the API.
I disagree with the article, but some of these examples are also complete straw men. In Kotlin you have nullable types, and the type checker will complain if you use one as a non-nullable type. But you can always just append !! after your expression and get exactly the same behavior as in Java, a null pointer exception; you don't have to handle it gracefully as the author suggests. Tests checking that you gracefully handle nulls in a language without nullable types are fucking tedious and boring to write. I would take a language with nullable types over having to write such tests any day.
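A minimal sketch (hypothetical function, not from the article) of how !! opts you back into the Java behavior:

    fun shoutOrCrash(s: String?): String {
        // Graceful handling, if you want it:
        // return s?.uppercase() ?: ""

        // Or just assert non-null: throws a NullPointerException when s is null,
        // exactly like calling a method on a null reference in Java.
        return s!!.uppercase()
    }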
Kotlin's final-by-default is also just that - a default. In Java you can just declare your classes `final` to get the same behavior, and if you don't like final classes then go ahead and declare all of them open.
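For illustration (class names are hypothetical), the two defaults side by side:

    // Java: extensible by default, opt out per class.
    // public final class PaymentService { ... }

    // Kotlin: final by default, opt in per class.
    class PaymentService          // cannot be subclassed
    open class BasePaymentService // explicitly opened for inheritance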
I also disagree with the author's claim that languages with many features require you to be a "language lawyer", and that simpler languages are therefore better. It's of course a balance, and there are languages like C++ and Haskell where the number of features becomes a clear distraction, but for simpler things like nullable types and final-by-default, the language is just helping you enforce the conventions that you would need anyway when working with a large code base. In dynamically typed languages you just have to be a "convention lawyer" instead, and you get no tool support.
Your last sentence makes a very good point. And Uncle Bob's tool is to have loads of tests.
I suppose it's all just a balance: simplicity versus expressiveness, foot guns versus inflexibility, conciseness versus ceremony, dev velocity versus performance in production.
I'm okay with shifting some of the burden of common errors from the developer onto the language if that improves reliability or maintainability. But Martin has a point in that no guard rails can ever prevent all bugs, and it can be annoying if languages force lots of new ceremony that seems meaningless.
This is another example of either stupid or malicious politicians thinking that it is possible to implement mandatory scanning on devices owned and operated by people and somehow get a meaningful true match rate without false positives. Of course this is not possible! But the negative consequences are immense.
It is exactly the same kind of stupid thinking driving ideas such as Chat Control in the EU. In the end, no child will be safer, but we will end up with a world where no one has the right to control what software can run on their own hardware devices and where no one has legal access to end-to-end encrypted communication.
This is not a lot compared to how much data the average website will download. I just clicked a wired.com link which is also on the frontpage of HN right now. In a Chromium browser with no adblock, the network tab says 12 MB transferred.
The difference is just that this web page shows you how much data it transfers instead of doing it in the background.
With freedom also comes responsibility, and some innocent people will inevitably shoot themselves in the foot. This is not a strong enough argument for putting everybody else in a cage and letting a duopoly take over virtually all of the distribution of consumer software.
It might be a strong argument depending on the negative effects - I don't think it's very clear-cut. Also no, Apple and Google do not have a duopoly on the distribution of all consumer software. Microsoft exists, for example.
The other consideration here is negotiating power.
Today consumers don't have negotiating power over individual developers, but both Apple and Google do. If you complain to Meta about their unwanted tracking, you don't really have many options besides deleting the app (which you should do anyway). But if enough people complain to Apple or Google, they are more inclined to do something, and they actually have the power to do it.
While it may be a marriage of convenience, it's undeniable that both companies, through their app distribution models, have also provided protections to consumers in areas that developers would otherwise have abused - privacy, screen recording, malicious advertising, &c.
If you want to argue from the standpoint of pro-consumer action, you have to remember that "developers" are usually pretty awful too and will get away with anything they can, even if it harms their customers. A good balance, instead of ideological purity about one "side" or the other, is the smarter move. I tend to come down on the side of the mainstream app stores precisely because those asking for more "freedom" to do as they wish are a tiny minority and are usually more technical. I.e. they can jump through the hoops to install 3rd party app stores and jailbreak their phones today, and since you can already do what you want, maybe it's best to just leave the masses alone since they're very obviously happy with the duopoly.
I run GrapheneOS, but I can't use the national digital identity app because it requires Google Play Integrity. I very much cannot do what I want without it having severe consequences because the duopoly is starting to shape the basic digital infrastructure, and critical services start requiring that I use one of the two ecosystems.
I think the principle of digital autonomy should be front and center. Surely we can figure out security models that don't imply that two American tech companies get to call the shots on what people can or cannot do on hardware that they supposedly own.
Yes, but the regulation is wrong, since it is based on an irrational security analysis and cover-my-ass politics that belong in private companies, not in government institutions that are supposed to protect the freedoms of citizens.
The technical security requirements should not be hard to define: the platform on which the solution runs should require all keys to be device-bound, with a certificate chain from the hardware manufacturer proving this to the issuer during enrollment. The operating system should also be able to verify to the issuer that the hash of the app matches an official release.
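A rough sketch of the first half of that, using Android's hardware key attestation (the key alias is a placeholder, and the challenge would come from the issuer); the issuer then validates the returned certificate chain up to the hardware manufacturer's root:

    import android.security.keystore.KeyGenParameterSpec
    import android.security.keystore.KeyProperties
    import java.security.KeyPairGenerator
    import java.security.KeyStore
    import java.security.cert.Certificate

    // Generate a device-bound signing key in the hardware-backed keystore and
    // request an attestation certificate chain for it.
    fun generateAttestedKey(issuerChallenge: ByteArray): Array<Certificate> {
        val spec = KeyGenParameterSpec.Builder("eid-key", KeyProperties.PURPOSE_SIGN)
            .setDigests(KeyProperties.DIGEST_SHA256)
            .setAttestationChallenge(issuerChallenge) // binds the chain to this enrollment
            .build()

        KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore").run {
            initialize(spec)
            generateKeyPair()
        }

        // The chain goes from this key's certificate up to a manufacturer root,
        // which is what proves to the issuer that the key never leaves the device.
        val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
        return keyStore.getCertificateChain("eid-key")
    }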
However, the strongest integrity level of solutions like Play Integrity - which is the only level that GrapheneOS cannot pass, and which only my national identity app requires - protects against very theoretical attacks that I don't believe actually happen in the real world. It not only protects against fake malicious identity apps, but also against the scenario where a scammer has convinced their target to install a custom Android operating system that fakes the app integrity verification. This attack requires a victim with enough technical aptitude to unlock the bootloader and use adb, but who is at the same time gullible enough to believe the attacker. It also requires that the attacker builds a malicious Android release for the exact hardware of the victim. Seriously, if the victim is this easy to manipulate and also this resourceful, then the attacker should just convince them to disable biometrics and send the phone to the attacker by mail.
It is very very clever of Google to disguise what is essentially voluntary vendor lock-in as a security feature.
Apple and Google each have a monopoly in their respective markets. Only Apple-approved apps may be installed on an iPhone.
Digital goods DO NOT work like physical goods. I can just buy another washing machine. I CANNOT just choose to opt out of using a smartphone. My choices are Apple or Google, and even within those choices it's limited by network effects.
Well, you have to balance it with how much you want to line the coffers of malicious actors.
If you go all the way to "everyone should have the freedom to get pwned", then you are also funneling the money of innocents into the pockets of some of the worst people in the world, and that's not a great outcome just to make life more convenient for some HNers.
The question is about what trade-off makes sense for most people. That probably means some sort of escape hatch that non-technical people just won't use.
Maybe it's a hard thing to appreciate until you've watched aging family members get tricked by absolute scum, mostly enabled by how loosey-goosey modern computing can still be.
The thing is, Apple decides this unilaterally, on a product that you fully bought and privately own. It bundles the most brilliant and the most incompetent, clueless people into one group and goes for the lowest common denominator. No freedom of choice.
Of course that's the PR argument; in reality it's about distorting and manipulating the market to get the most money out of its users and bind them to its ecosystem as hard as possible to extract even more. And the number of those same people who uncritically defend them here is still staggering. But maybe it's just employees ignoring their NDAs, some investors, and similar folks.
You can make the default locked down and still allow other operating systems, and most alternative operating systems will also come with a suitably strong security model that does not easily allow the user to fuck themselves. I don't think that declining to lock everything down completely will inevitably lead to every elderly person becoming a scam victim, so I don't accept that argument.
The problem is that Apple owns the platform and half of the mobile ecosystem. You can't just launch a competing marketplace alongside Apple's App Store, nor can you launch an alternative operating system. You have to launch a whole new smartphone stack, complete with operating system, app distribution, and app ecosystem.
I also don't use Apple, and I try to avoid the only other alternative by using GrapheneOS instead of an official Android build.
But at some point everything is going to be so closely tied to Google as well that that way of living is also going to become too painful, and at that point "or not use Apple or Google" is a bit like saying "or not use the roads".
This. Doing business with almost any major company is unethical, but Apple sits near the top of the big tech companies people shouldn't do business with. They are not a force for good in the world.
I am with you, and for this reason I really want them to fail. The PC is currently still a platform where the user has a relatively large amount of control and digital autonomy, and as long as a sizeable part of the population keeps using it, companies and government institutions cannot ignore it and must support it.
Once 90% of all internet clients are iOS or Android the open internet is dead, and the concept of a general purpose computer on which you can run any computation you want is also inaccessible to the average person. From that point on, everything is a service that you rent from either Apple or Google.
I agree with the parent comment, but I understand that someone who did not grow up with UIs that looked like that would think otherwise. I do feel that Windows 2000 was peak UI for desktop operating systems, but it's probably due to a combination of nostalgia and the fact that I deeply dislike modern Electron-based UIs with too few decorations and an overly minimalistic and non-customizable "we know best" attitude.