> Going back to the Industrial Revolution the United States has been 100% gas pedal all the time on innovation and disruption
Arguably true, but it's also been way ahead of the pack (people tend to forget this) on protection for organized labor, social safety net entitlements, and regulation of harmful industrial safety and environmental externalities.
That's really not such a weird choice. The systemd library is pervasive and compatible.
The weird bit is the analysis[1], which complains that a Go binary doesn't run on Alpine Linux, a system which is explicitly and intentionally (also IMHO ridiculously, but that's editorializing) binary-incompatible with the stable Linux C ABI as it's existed for almost three decades now. It's really no more "Linux" than is Android, for the same reason, and you don't complain that your Go binaries don't run there.
[1] I'll just skip, without explanation, how weird it was to see the author complain that the build breaks because they can't get systemd log output on... a mac.
The macOS bit wasn't about trying to get systemd logs on a mac. The issue was that the build itself fails because libsystemd-dev isn't available. We (naively) expected journal support to be something we could detect and handle at runtime.
Well... yeah. It's a Linux API for a Linux feature only available on Linux systems. If you use a platform-specific API on a multiplatform project, the portability work falls on you. Do you expect to be able to run your SwiftUI app on Windows? Same thing!
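For what it's worth, the usual Go answer to a platform-specific dependency like this is to decide at compile time rather than runtime: keep the journal-backed code behind a Linux-only build constraint and give every other platform a stub, so the macOS build never needs libsystemd-dev at all. A rough sketch of the pattern (the applog package, the toJournal helper, and the use of the coreos/go-systemd journal bindings are all invented here for illustration, not taken from the project being discussed):

```go
// journal_linux.go: compiled only on Linux, where a journal can exist.
//go:build linux

package applog

import "github.com/coreos/go-systemd/v22/journal"

// toJournal tries the systemd journal and reports whether it succeeded,
// so callers can fall back to plain stderr logging when it didn't.
func toJournal(msg string) bool {
	if !journal.Enabled() {
		return false
	}
	return journal.Print(journal.PriInfo, "%s", msg) == nil
}
```

```go
// journal_other.go: stub used on every non-Linux build (macOS, Windows, ...).
//go:build !linux

package applog

// toJournal always reports false here; there is no journal to talk to,
// and nothing in this file needs systemd headers to compile.
func toJournal(msg string) bool {
	return false
}
```

The runtime check (journal.Enabled) still exists, but only on the one platform where it can possibly succeed; everywhere else the decision is made by the toolchain when it picks which file to compile.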
> I would think making sure outside payment links aren’t scams will be more expensive than that
You really think that the aggregate cost of fraud mitigation in the App Store is 30% of revenue? That seems laughable; the credit card industry as a whole does far, far better than that with far less ability to audit and control transaction use.
On a screen, vs. Times New Roman? Absolutely, and it isn't at all close. Serifs on even the highest DPI displays look pretty terrible when compared with print, and lose readability tests every time they're measured.
One of the things that image shows is the slightly higher density of the Times version (compare row by row), allowing the paper to put more text on a page and thus reduce some of the costs.
This appears to have been done by increasing the height of the lowercase letters on the Times side while reducing the height of the capital letters at the same time. That was then combined with a reduction in the size of some of the serifs, which are measured against the height of the lowercase letters (compare the 'T' and the following 'h').
The Times remains similarly readable despite the smaller font size - and scaling the modern font to the same text density would have made it less readable.
Part of that, it appears, is the finer detail (as alluded to in the penultimate paragraph) - compare the '3' on each side.
> the slightly higher density of the Times version (compare row by row)
I don't think that's the comparison you want to draw? The rows appear to hold very similar amounts of text.
But the rows on the left, in Times New Roman, are shorter than the rows on the right. So even though "one row" holds the same amount of text, one column-inch of Times New Roman holds more rows.
The Times New Roman looks more readable to me because it has thicker strokes. This isn't really an issue in a digital font; you can't accidentally apply a thin layer of black to a pixel and let the color underneath show through.
> Calibri font has "I" and "l" the same, according to Wikipedia. A better font should avoid characters being too similar (such as "I" and "l" and "1").
Only when used in a context where they can be confused. This is a situation where HN is going to give bad advice. Programmers care deeply about that stuff (e.g. "100l" is a long-valued integer literal in C and not the number 1001). Most people tend not to, and there is a long tradition of fonts being a little ambiguous in that space.
It's not, although blind or highly vision-impaired people who use screen readers sometimes also have to rely on OCR when the document isn't properly formatted with text.
Using a sans serif font generally helps anyone who has difficulty distinguishing letters: dyslexic, low-vision, or aging-vision individuals, for example. It's not just for digital OCR.
> Using a sans serif font generally helps anyone who has difficulty distinguishing letters: dyslexic, low-vision, or aging-vision individuals
So far as I'm aware, there is very little actual evidence to support this oft-repeated claim. It all seems to lead back to this study of 46 individuals, the Results section of which smells of p-hacking.
> Yeah because normal people never have to deal with alphanumeric strings...
Natural language tends to have a high degree of disambiguating redundancy and is used to communicate between humans, who are good at making use of that. Programming languages have somewhat less disambiguating redundancy (or in extreme cases almost none), and, most critically, are used to communicate with compilers and interpreters that have zero capacity to make use of it even when it is present.
This makes "letter looks like a digit that would rarely be used in a place where both make sense" a lot more of a problem for a font used with a programming language than a font used for a natural language.
Legal language is natural language with particular domain-specific technical jargon; like other uses of natural language, it targets humans, who are quite capable of resolving ambiguity via context, not compilers and interpreters, which are utterly incapable of doing so.
Not that official State Department communication is mostly "legal language", as distinct from more general formal use of natural language, to start with.
No, because normal people can read "l00l" as a number just fine and don't actually care if the underlying encoding is different. AI won't care either. It's just us on-the-spectrum nerds with our archaic deterministic devices and brains trained on them that get wound up about it. Designing a font for normal readers is just fine.
You lost this fight more than a century ago. Helvetica and almost all related grotesque fonts lack a serif on "I", and dominate modern typography. You see them everywhere, on every device. Pull your phone out of your pocket and see if you can see "crossbars" on the I. They're not there, and never have been.
And people like it this way! So that's why we design fonts like this.
"Only when used in a context where they can be confused."
So what are you supposed to do when you're typing along and suddenly you find yourself in such a context? Switch the font of that one occurrence? That document? Your whole publishing effort?
Capital "i"s without crossbars aren't capital "i"s. They're lower-case Ls. Any font that doesn't recognize this should be rejected.
> Capital "i"s without crossbars aren't capital "i"s. They're lower-case Ls. Any font that doesn't recognize this should be rejected.
You have asserted this at least thrice in the past thirty minutes. What makes you feel so strongly about it? "Rejected" for what purpose? Do you understand that you've just trashed Helvetica, to take a famous example?
Not over the long term, no. There may have been a recent uptick in the post-pandemic US but it's mostly just noise. Fatalities per mile driven have been going down markedly in recent decades. Driving was twice as dangerous in the '80s as it is now.
You are incorrect. Fatalities in the US leveled out in the early 2010s and have been climbing since then. In all other developed nations they continued trending downwards.
This is not a statistical anomaly that can be handwaved away by pointing out that things were worse 40 years ago. Roads in the US are uniquely lethal and getting more so.
Sigh. I hate that phrasing. But OK, fine: you are misreading me, misanalysing the data, or just plain spinning to mislead readers.
Fatalities per capita and per mile driven go steadily downward until covid, and maybe there's a bump after that: https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in... If you have numbers (you don't cite any) showing otherwise, they are being polluted by demographic trends (the US having higher population growth doesn't say anything about driver behavior).
> Roads in the US are uniquely lethal and getting more so.
So spinning it is. Would you rather drive in Germany in 2002 or the US in 2025? Seems like "uniquely lethal" doesn't really constitute a good faith representation of the truth.
According to the link that you posted, the roads in Germany in 2002 were quite a bit safer than the roads are in the USA in 2025. And they don’t have speed limits. Absolute no brainer to me.
Anyway, not to pile on but you are absolutely incorrect. Forgive the phrasing.
German roads absolutely do have speed limits. Only certain rural sections of the Autobahn don't, but that's not representative of the country as a whole, or even the Autobahn as a whole.
Did you open the Wikipedia article you linked? The first image contradicts you; see the caption:
> Per capita road accident deaths in the US reversed their decline in the early 2010s.
Amusing that you accuse me of bad faith framing and then pose a nonsense question like this:
> Would you rather drive in Germany in 2002 or the US in 2025?
I cannot time travel and neither can you. The comparison that matters is US in 2025 vs other developed nations in 2025, and with that framing the US is uniquely lethal.
Of course, a good faith reader of my comment would understand this, but we already know that's not you since you did the research and have decided to be wrong anyway.
> They completely revolutionized laptop processors
Tough love: no, they didn't. 99.9% of consumers simply can't detect a performance difference between an M4 Air and a junky Asus box (and the ones who can will announce that games run much better on the windows shipwreck!), and while the Air has a huge power delta no one cares because the windows thing still lasts for 6+ hours.
Apple absolutely ran ahead of the industry technically, by a shocking amount. But in a commoditized field that isn't sensitive to quality metrics, that doesn't generate sales.
There's a reason why the iPhone remains the dominant product but macs are stuck at like 9% market share, and it's not the technology base, which is basically the same between them.
Laptops are done, basically. It's like arguing about brands of kitchen ranges: sure, there are differences, but they all cook just fine.
> Tough love: no, they didn't. 99.9% of consumers simply can't detect a performance difference between an M4 Air and a junky Asus box (and the ones who can will announce that games run much better on the windows shipwreck!), and while the Air has a huge power delta no one cares because the windows thing still lasts for 6+ hours.
This is wildly, comically untrue in my experience: all of the normal people I know loooooove how fast it is and only having to charge a few times a week. It was only the people who self-identify as PC users who said otherwise, much like the Ford guys who used to say Toyotas were junk rather than admit their preferred brand was facing tough competition.
Your "normal people" are mac owners, and your other group is "PC users". You're measuring the 0.1%! (Which, fine, is probably more like 15% or whatever. Still not a representative sample.) You're likely also only sampling US consumers, or even Californians, and so missing an awful lot of the market.
Again, real normal people can't tell the difference. They don't care. And that's why they aren't buying macs. The clear ground truth is that Macintosh is a lagging brand with poor ROI and no market share growth over more than a decade. The challenge is explaining why this is true despite winning all the technical comparisons and being based on the same hardware stack as the world-beating iOS devices.
My answer is, again, "users don't care because the laptop market is commoditized so they'll pick the value product". You apparently think it's because "users are just too dumb to buy the good stuff". Historically that analysis has tended to kill more companies than it saves.
> Your "normal people" are mac owners, and your other group is "PC users"
No. Remember that Apple sells devices other than Macs: they were all non-IT people who liked their iPhones and figured they’d try a Mac for their next laptop and liked it. One thing to remember is that Windows is a lot less dominant when you’re looking at what people buy themselves as opposed to what an enterprise IT department picked out. There are a ton of kids who start with ChromeOS or iPads, got a console for gaming, and don’t feel any special attraction to Windows since everything they care about works on both.
> You apparently think it's because "users are just too dumb to buy the good stuff".
Huh? Beyond being insulting, this is simply wrong. My position is that people actually do consider speed, silence, and multi-day battery life desirable. That's not the only factor in a buying decision, of course, but it seems really weird not to acknowledge it after the entire PC industry has spent years in a panic trying to catch up.
Best I can tell you're arguing that 9% market share by units sold is some kind of failure. Now go look at who has the highest market share by revenue. Hint: it's a fruit company.
This whole take might make sense if Apple didn’t double their laptop market share from like 10% to 20% when the M1 series came out, which actually happened.
That's kind of a weird one because the PC market has notably regressed there over the past few years. Other than the Surface Pro 12 there've been no fanless PC laptops released since 2022-ish, when there used to be dozens.
On a technical level, fanless PC laptops released now would be better than the 2022 ones simply because the 2022 crop had a moribund lineup of CPUs (Snapdragon SQ1, Amber Lake, etc.). You could release a lineup now that would be broadly competitive with at least the M1, but it doesn't seem to be a market segment that PC OEMs are interested in.
Right, so, a K-12 education-oriented PC with an Intel N-series chip, about 1/3 as fast as what you get with an M4 (or worse).
When I asked my snarky question, I was really talking about "fanless laptops that someone would actually want to use and get some serious use out of."
The regression of the PC market is because the PC industry didn't see the ARM train coming from a million miles away and just sat there and did nothing. They saw smartphones performing many times more efficiently than PCs and shrugged it off.
Meanwhile, Apple's laptop marketshare has purportedly doubled from 10% to 20% or perhaps even higher since the M1 lineup was released.
I say this as someone who actually moved away from Apple systems to a Linux laptop. Don't get me wrong, modern Intel and AMD systems are actually impressively efficient and can offer somewhat competitive experiences, but the MacBook Air as an every-person's experience is really tough to beat (consider also that you could get a MacBook Air M2 for $650 during the most recent Black Friday sales, and you'd have a really damn hard time finding any sort of PC hardware that's anywhere near as nice, never mind matching it on performance/battery life).
Yeah, like we're in agreement about the current state of the market, I just don't think it has to be that way. The Surface Pro 12 is fanless, so presumably anyone else could make a fanless Snapdragon laptop if they wanted to. (My daily driver work laptop is Windows-on-ARM, and most everything works pretty well on it.)
I believe the whole Vivobook Go line is fanless, actually.
But again, the point isn't to get into a shouting match over whose proxied anatomy is largest. It's to try to explain why the market as a whole doesn't move the way you think it should. And it's clearly not about fans.
No, he's missed the mark here. The retreat to imprecise non-statements like "ChatGPT cannot know or understand anything, so it is not intelligence" is a flag that you're reading ideology and not analysis. I mean, it's true as far as it goes. But it's no more provable than it is for you or me; it's just a dance around the semantics of the words "understand" and "know".
In particular the quip that it's really just a "bullshit generator" is 100% correct. But also true for, y'know, all the intelligent humans on the planet.
At the end of the day AI gets stuff wrong, as far as we can tell, for basically the same reasons that we get stuff wrong. We both infer from intuition to make our statements about life, and bolt "reasoning" and "logic" on as after-the-fact optimizations that need to be trained as skills.
(I'm a lot more sympathetic to the free software angle, btw. The fact that all these models live and grow only within extremely-well-funded private enclosures is for sure going to have some very bad externalities as the technology matures.)
He's not comparing them to humans; he attributes knowledge/understanding (enough to pass his bar for "is AI") to YOLOv5 models, XGBoost-trained trees, and, as far as I can tell, closed-source transformer-based models too. But not ChatGPT.
Just as people once hated being told that Earth isn't the center of the universe, so goes the ego when it comes down to discovering the origin of our thoughts.
The point was surely more that apps being exploited via the Play Store can be mitigated there without client OS updates. The only hole here that requires the update is a sideloaded attack.
Except the Play Store is a hot mess, and Google does little to no review of apps. Trusted repositories work best when the repository maintainers build and read the code themselves, like on f-droid or Debian. What Google and Apple are doing with their respective stores is security theater. I would not be surprised if they don't even run the app.
Again though, that's mixing things up. The question is whether or not mitigating the exploit requires an OS patch be applied promptly.
And it seems like it doesn't. If there is a live exploit in the wild (as seems to be contended), then clearly the solution is to blacklist the app (if it exists on the store, which is not attested) and pull it off the store. And that will work regardless of whether or not Samsung got an update out. Nor does it require an "audit" process in the store, the security people get to short circuit that stuff.
The key bindings are sort of the least impactful idea behind the editor. The defaults are indeed ancient and opinionated, and don't match well with what other environments ended up adopting. They do work well for the most part if you want to take the time to learn them, though. But everyone has their own set of customizations[1], usually going along with a preferred set of physical key remappings[2]. Lots of folks use modes like Evil to get vi bindings, etc...
The point is to think hard about how you use the editor and start working on making the editor work the way you need it to. Binding fluency will fall out naturally.
[1] For myself, I'm mostly default. But for example I dislike the defaults for C-t and C-z, which IMHO are footguns, and remap those to "top/end of buffer", functions I do use a lot and whose default bindings are clumsy.
[2] Ctrl to the left of A, obviously. But also the key below '/' really wants to be Meta and not whatever else the manufacturer put there.
> Arguably true, but it's also been way ahead of the pack (people tend to forget this) on protection for organized labor, social safety net entitlements, and regulation of harmful industrial safety and environmental externalities.
This statement is awfully one-sided.