If Mozilla were to kill adblockers, there's basically no reason to not use Chromium. It's pretty much the only relevant difference between Chromium and Firefox these days.
It's truly impressive how they've managed to pull off every user-hostile trick Google Chrome did over the years, except with no clear reason besides contempt for their users' autonomy, I suppose. Right now the sole hill Mozilla really has left is adblockers, and they've talked about wanting to sacrifice that?
It truly boggles the mind to even consider this. That's not 150 million, that's the sound of losing all your users.
Insane that they're dropping client certificates for authentication. Reading the linked post, it's because Google wants them to be separate PKIs and forced the change in their root program.
They aren't used much, but they are a neat solution. Google forcing this change just means there's even more overhead when updating certs in a larger project.
The two kinds of certification serve different purposes. It might feel like a symmetric arrangement, but it isn't. On the whole I think implementing this split is sensible.
It's a good change. I've seen at least one company that had misconfigured mTLS to accept any client certificate signed by a trusted CA, rather than just by the internal corporate CA.
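To illustrate the misconfiguration: in nginx, mTLS is only as strict as the CA bundle you verify clients against, so the fix amounts to pointing `ssl_client_certificate` at the internal corporate CA alone (paths here are made up for illustration):

```nginx
# Verify client certificates against ONLY the internal corporate CA.
# Pointing this at a broad trust bundle (e.g. the system CA store) would
# accept a client cert issued by ANY trusted public CA -- the bug described above.
ssl_verify_client       on;
ssl_client_certificate  /etc/nginx/certs/corp-client-ca.pem;  # internal CA only
```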
I (partially) agree that it is a good change, but for a different reason. For security purposes, certificates should include only the permissions that are required. Maybe they ought to allow certificates that include both if you have a use for it (though as I mentioned, you usually should not, because you will probably want to use separate certificates instead), but unfortunately they do not allow that.
Is that a temporary situation? Is it that big a deal to implement a separate set of roots for client certs? Or do you mean that the entire infrastructure is supposed to be duplicated?
I think client certificates are a good idea, although it is usually more useful to use certificates different from those for the domain names. (I still think the CA/Browser Forum is not very good, despite that; I just wanted to mention my point.)
It's technically possible to get any Android app to accept user CAs. Unfortunately it requires unpacking it with apktool, adding a network security config override to the XML resources and pointing the AndroidManifest.xml to use it. Then restitch the APK with apktool, use jarsigner/apksigner and finally use zipalign.
Doesn't need a custom ROM, but it's so goddamn annoying that you might as well not bother. I know how to do these things; most users won't and given the direction the big G is heading in with device freedom, it's not looking all that bright for this approach either.
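For reference, the override mentioned above is Android's network security config. A minimal res/xml/network_security_config.xml that re-enables user-installed CAs looks roughly like this (the manifest's <application> element then also needs android:networkSecurityConfig="@xml/network_security_config" added to point at it):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Trust user-installed CAs in addition to the system store.
     Since API 24, apps only trust the system store by default. -->
<network-security-config>
    <base-config>
        <trust-anchors>
            <certificates src="system" />
            <certificates src="user" />
        </trust-anchors>
    </base-config>
</network-security-config>
```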
For a lot of developers, the current biggest failure of open source is the AWS/Azure/GCP problem. BigCloud has a tendency to take well-liked open source products and provide a hosted version of them, and as a result they absolutely annihilate the market share of the entity that originally made the product (which usually made money by offering supported and hosted versions of the software). Effectively, for networked software (which is the overwhelming majority of software products these days) you might as well use something like BSD/MIT rather than any of the GPLs[0], because they offer practically the same guarantees; it's just that the BSD/MIT licenses don't contain language that makes you think they do things they actually don't. Non-networked software like kernels, drivers and most desktop software doesn't have this issue, so it doesn't apply there.
Open source for that sort of product (which most of the big switches away from open source have been about) only further entrenches BigCloud's dominance over the ecosystem. It absolutely breaks the notion that you can run a profitable business on open source. BigCloud basically always wins that race even when they aren't cheaper, because the customer is using BigCloud already, so buying their hosted version means cutting less red tape internally; getting sign-off on BigCloud is much easier than onboarding a new third party you have to work with.
The general response to this issue from the open source side tends to just be to accuse the original developers of being greedy/only wanting to use the ecosystem to springboard their own popularity.
---
I should also note that this generally doesn't apply to the fight between DHH and Mullenweg described in the OP. DHH just wants to kick a hornet's nest and get attention now that Omarchy isn't the topic du jour anymore; no BigCloud (or, in this case, more likely a shared hosting provider) is going to copy a random kanban tool written in Ruby on Rails. They copy the actual high-profile stuff like Redis, Terraform and whatever other recent examples you can think of that got screwed by BigCloud offering their services that way. (Shared hosting providers pretty much universally still run the classic AMP stack, which doesn't support a Ruby project, immunizing DHH's tool from that particular issue as well.) Mullenweg, by contrast, does have to deal with Automattic not having a stranglehold on being a WordPress provider, since the terms of his license weren't his to make to begin with; b3/cafelog was also under the GPL and WordPress inherited that. He's been burned by FOSS, but it's also hard to say he was surprised by it, since WP is modified from another software product.
[0]: Including the AGPL; it doesn't actually do what you think it does.
It's not impossible to run a publicly owned company in the US that isn't insanely hostile towards its customers or employees... it's just really damn difficult because of bad legal precedent.
Dodge v. Ford is basically the source of all these headaches; the Dodge Brothers owned shares in Ford. Ford refused to pay out the dividends he owed the Dodge Brothers, suspecting that they'd use the money to start their own car company (he wasn't wrong about that part). The Dodge Brothers sued Ford, upon which Ford's defense for not paying out dividends was "I'm investing it in my employees" (an obvious lie; it was very blatantly about not wanting to pay out). The judge sided with the Dodge Brothers, and the legal opinion included a remark that the primary purpose of a director is to produce profit for the shareholders.
That's basically become US business doctrine ever since, twisted into the idea that the job of a director is to maximize profits for the shareholders. It's somewhat bunk doctrine as far as I know; the actual precedent mostly translates to "the shareholders can fire the directors if they think they aren't doing a good job" (since it can be argued that as long as any solid justification exists, producing profit for the shareholders can be assumed[0]; Dodge v. Ford was largely Ford refusing to honor his contracts with money that Dodge knew Ford had in the bank). But nobody in upper management wants to risk facing lawsuits from shareholders arguing that they made decisions that go against shareholder supremacy[1]. And so the threat of legal consequences morphs into the worst form of corporate ghoulishness, pervasive across every publicly traded company in the US. It's why short-term decision making dominates long-term planning at pretty much every public company.
[0]: This is called the "business judgement rule", where courts will broadly defer the judgement of whether a business is run competently to the executives of that business.
[1]: Tragically, just because it's bunk legal theory doesn't change the fact that ruinous lawsuits in the US are a very real threat.
It is not broadly believed in corporate governance circles that there is a legal requirement to maximize shareholder value. Nor will you find court judgements that require it.
If anything, Milton Friedman is more responsible for the idea that shareholder value maximization is the corporate goal. That is an efficient-market argument, though, not a legal one, and he framed it long after the Dodge suit. He needed to frame that argument precisely because so many firms were _not_ doing that.
But just because a Chicago school economist says something about governance doesn't mean it's broadly applicable, in the same way that an Austrian economist's opinions about inflation aren't iron rules of monetary policy.
Taking out the public leaderboard makes sense imo. Even when you don't consider the LLM problem, the public leaderboard's design was never really suited to anyone outside the very specific short list of (US) timezones where competing for a quick solution was ever feasible.
One thing I do think would be interesting is to see solution rate per hour block. It'd give an indication of how popular advent of code is across the world.
Yes: I'd argue the timings actually worked better for Western Europe than the USA; I personally preferred doing the puzzle at 5am (UK) to the midnight equivalent, as I could finish before work (on a good day).
Only scraped a decent ranking once, top 300 or so.
Either Russia (8am) or West Coast US (9pm) would be my preferred options.
Sadly it's 5am for me as I'm in the UK.
In 8 years I can say I've never once tried to be awake at 5am in order to do the puzzle. The one time I happened to still be awake at 5am during AoC I was quite spectacularly drunk so looking at AoC would have been utterly pointless.
Anything before 6.45am and I'm hopefully asleep. 7am isn't great as 7am-8am I'm usually trying to get my kid up, fed and out the door to go to school. Weekends are for not waking up at 7am if I don't need to.
9am or later and it messes with the working day too much.
Looking back at my submission times from 2017 onwards (I only found AoC in 2017, so did 2015/2016 retrospectively), I've only got two submissions under 02:xx:xx (i.e. before 7am for me). Both were around 6.42am, so I guess I was up a bit earlier that day (6.30am) and was waiting for my kid to wake up, and managed to get part 1 done quickly.
My usual plan was to get my kid out of the door sometime between 7.30am and 8am and then work on AoC until I started work around 9am. If I hadn't finished it then I'd get a bit more time during my lunch hour and, if still not finished, find some time in the evening after work and family time.
Out of the 400 submissions from 2017-2024 inclusive I've only got 20 that are marked as ">24h" and many of these were days where I was out for the entire day with my wife/kid so I didn't get to even look at the problem until the next day. Only 4 of them are where I submitted part 1 within 24h but part 2 slipped beyond 24h.
Enormous understatement: if I were unencumbered by wife/kids then my life would be quite a bit different.
LLMs spoiled it, but it was fun to see the genuine top times. Watching competitive coders solve in real time is interesting (YouTube videos), and I wouldn't have discovered them without the leaderboard.
> Furthermore, current copyright terms are decades past the death of the creator.
It's important to recognize why this is the case: a lot of the hubbub around posthumous copyright comes from the fact that a large amount of classic literature went unrecognized during the author's lifetime. A classic example is Moby-Dick, which sold and reviewed poorly; Melville only made about $1,260 from the book in total, and his wife made only ~$800 from it in the 8 years it remained under copyright after Melville died, even though it's hard to imagine a literature list without it these days. Long copyright terms existed to ensure that an author's family didn't lose out on potential sales that came much later. Even more recent works, like The Lord of the Rings, heavily benefited from posthumous copyright, as it allowed Tolkien's son to turn the books into the modern classics they are today, by carefully curating the rereleases and additions to the work (the map of Middle-earth, for instance, was drawn by Tolkien's son).
It's mostly a historic argument though; copyright pretty blatantly just isn't designed with the internet in mind. Personally I think an unconditional 50 years is the right timeline for copyright to end. No "life+50"; just 50.
50 years of copyright should be more than enough to get as much mileage out of a work as possible, without running into the current insanity where all of the modern world's cultural touchstones are in the hands of a few megacorporations. For reference, 50 years means that everything before 1975 would no longer be under copyright today, which seems like a much fairer length to me. It also means that if you create something popular, you have roughly the entire duration of a person's working life (starting at 18-23, ending at 65-70) to make money from it.
Long copyright also means that the estate can control the work, like how Tolkien's son guarded The Lord of the Rings like a hawk.
And I also understand Disney's point of view. Imagine you invested a lot of money into a franchise and the original author suddenly goes crazy and makes Roger Rabbit a Klansman.
Although personally I would put the protection at 10 years.
In the modern world, some sort of reasonable fixed duration seems to make a lot of sense. An elderly author cranking out a work partly for the benefit of a soon-to-be widow/widower isn't insane. You can argue about exact timeframes and details but some sort of duration after creation (maybe not less than life of creator) probably works pretty well.
I think it would make more sense to simply start taxing copyright as property, if we insist on "intellectual property" in the first place. Have an exemption in place for the first few years, then gradually ramp it up - the longer you keep something out of public domain, the more it costs to do so every year. Use those funds to subsidize new works released under permissive licenses.
Grades aren't necessarily an indicator of whether a person comprehends the educational material. Someone can visibly under-perform on general tests, but when questioned in person or made to do an exam, still recite the material from the top of their head, apply it correctly and even take it in a new direction. Those are underachievers; they know what they can do, but for one reason or another, they simply refuse to show it (a pretty common cause is finding the general coursework demeaning, or the teachers using the wrong teaching methods, so they don't put much effort into it[0]). Give them coursework above their level, and they'll suddenly get acceptable/correct results.
IQ can be used somewhat reliably to identify whether someone is an underachiever or legitimately struggling. That's what the tests are made and optimized for; they're designed to test how quickly a person can make the connection between two unrelated concepts. If they do it quickly enough, they're probably underachieving relative to what they can actually do, and it may be worth giving them more complicated material to see if they can handle it. (And conversely, if it turns out they're actually struggling, it may be worth dedicating more time to helping them.)
That's the main use of it. Anything else you attach to IQ is a case of correlation not being causation, and anyone who thinks it's worth more than that is being silly. High/low IQ correlates with very little besides a general trend in how quickly you can recognize patterns. (Because of how the statistics work, any score beyond roughly the 95th percentile is basically the same anyway, and IQ scores are renormalized every few years; this is about as far as you can go with IQ. There's very little difference between 150/180/210 or whatever other high number you imagine.)
It keeps astounding me that people assign value to a score whose purpose was mainly intended to find outliers in the education system as being anything besides that.
Or to quote the late astrophysicist Stephen Hawking: "People who boast about their IQ are losers".
I've never understood IQ tests. Granted, I have only seen the beginning of some of them, where they show you those three or four figures and tell you to choose the "next one"... I have never had a clue what I'm supposed to look for.
Maybe I am just extremely stupid. But then I'm a walking example that you can be averagely successful even while being dumb as a rock, if you are stubborn enough, haha.
The problem with a standard video element is that while it's mostly nice for the user, it tends to be pretty bad for the server operator. There are a ton of problems with browser video, beginning pretty much entirely with "what codec are you using". It sounds easy, but the unfortunate reality is that there are a huge number of video codecs (and heavy use of Hyrum's law/spec abuse on top of them), and a browser only supports a tiny subset of them. Hosting video already requires transcoding to a browser-friendly storage format as a baseline; unlike a normal video file, you can't just feed it to VLC and get playback. You're dealing with the terrible browser ecosystem.
Then once you've found a codec, the other problem immediately rears its head: video compression is pretty bad if you want a widely supported codec, even if only because people use non-mainstream browsers that can be years out of date. So you are now dealing with massive amounts of storage space and bandwidth effectively eaten up by duplicated files, and that isn't cheap either. To give an estimate: with most VPS providers that aren't hyperscalers, a plain text document can be served to a couple million users without having to think about bandwidth fees. Images are bigger, but not by enough to worry about. 20 minutes of 1080p video is about 500 MB under a well-made codec that doesn't mangle the video beyond belief. That video will reach at most 40,000 people before you burn through 20 terabytes of bandwidth (the Hetzner default allowance), and in reality probably fewer, because some people will rewatch the thing. Hosting video is the point where your bandwidth bill overtakes your storage bill.
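The back-of-the-envelope numbers above can be sketched out (the 500 MB per video and 20 TB allowance figures are this comment's own assumptions, not hard data):

```python
# Rough bandwidth estimate for self-hosted video.
# Assumptions from the text: ~500 MB per 20-minute 1080p video,
# 20 TB of included monthly traffic (a typical VPS default allowance).
video_size_mb = 500
bandwidth_tb = 20

# Providers bill in decimal units, so 1 TB = 1,000,000 MB.
views_before_cap = (bandwidth_tb * 1_000_000) // video_size_mb
print(views_before_cap)  # 40000 full views exhaust the allowance
```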
And that's before we get into other expected niceties, like seeking through a video while it's playing. Modern video players (the "JS nonsense" ones) can both buffer a video and jump to any point in it, even outside the buffer. That's not a guarantee with the HTML video element; your browser will probably just keep quietly downloading the file while you're watching (eating into server operator costs), and seeking ahead will freeze playback until the download reaches that point.
It's easy to claim hosting video is simple when in practice it's probably the single worst thing to host on the internet (well, that and running your own mailserver, but that's not only because of technical difficulties). Part of YouTube being bad is just hyper-capitalism, sure, but the more complicated techniques like HLS/DASH exist almost entirely because hosting video is so expensive and "preventing your bandwidth bill from exploding" is really important. That's also why there's no real competition to YouTube; the economics of hosting video only make sense if you have a Google amount of money and datacenters to throw at the problem, or don't care about your finances in the first place.
Chrome desktop just shipped native HLS support for the video element, enabled by default, within the last month. (There may be a few issues still to be worked out, and I don't know the rollout status, but it will certainly just work by year's end.) Presumably most downstream Chromium derivatives will pick this support up soon.
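With native HLS support, the sketch below is roughly all a page needs (the playlist URL is made up for illustration; browsers without native HLS would still need a JS library such as hls.js as a fallback):

```html
<!-- Sketch: point the plain video element straight at an HLS playlist.
     Works where the browser supports HLS natively (Safari, and Chrome
     per the comment above); elsewhere a JS fallback is still required. -->
<video controls src="https://example.com/stream/master.m3u8"></video>
```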
My understanding is that Chrome for Android has supported it for some time by way of delegating to android's native media support which included HLS.
Desktop and mobile Safari has had it enabled for a long time, and thus so has Chrome for iOS.
Any serious video distribution system would not use metered bandwidth. You're not using a VPS provider. You are colocating some servers in a datacenter and buying an unmetered 10 gigabit or 100 gigabit IP transit service.