We don't consider warehouses & stores to be a "slippery slope" away from toll roads, so no, I really don't see any good faith slippery slope argument that connects enforcing the law against X to government censorship of ISPs.
I mean even just calling it censorship is already trying to shove a particular bias into the picture. Is it government censorship that you aren't allowed to shout "fire!" in a crowded theater? Yes. Is that also a useful feature of a functional society? Also yes. Was that a "slippery slope"? Nope. Turns out people can handle that nuance just fine.
I wonder if sudo would be better off joining one of those open source foundations instead of staying solo. It's too small for these companies to justify contributing to in any meaningful way, at which point the bureaucratic overhead of dealing with it probably kills the motivation.
This is what I've never understood about Apple's argument that they need to be compensated for the R&D and ops costs of running the App Store. They already have this! It's the developer program fee!!
As far as I can tell it wasn't even raised in the Epic case either.
I don't think that applies to Patreon which, as far as I know, doesn't have any ads in the first place?
The app might make it easier for them to enforce DRM-like behaviors to prevent people from pirating creators' content, but I strongly suspect people aren't doing that on iOS regardless.
> given that we already had a working GUI. (Maybe that was the intention.)
Neither X11 nor Wayland provides a GUI. Your GUI is provided by GTK or Qt or Tk or whatever. X11 had primitive rendering instructions that allowed those toolkits to delegate drawing to a central system service, but very few things do that anymore anyway. Meaning X11 is already just a dumb compositor in practice, except it's badly designed to be a dumb compositor because that wasn't its original purpose. As such, Wayland is really just aligning the protocol to what clients actually want & do.
> Most systems have a way to mostly override the compositor for fullscreen windows and for games
No, they don't. I don't think Wayland ever supported exclusive fullscreen, macOS doesn't, and Windows killed it a while back as well (in a Windows 10 update, like 5-ish years ago?).
Jitter is a non-issue for things you want vsync'd (like every UI), and for games the modern solution is gsync/freesync which is significantly better than tearing.
Isn't that true for even the most basic features you expect from a windowing system? X11 may have come with everything and the kitchen sink, Wayland drops all that fun on the implementations.
They replaced it with "Fullscreen Optimisations", which is mostly the same, but more flexible, as it leaves detection of fullscreen exclusive windows to the window manager.
In both the GNOME and Windows "Fullscreen Optimizations" cases it's the compositor doing an internal optimization to avoid a copy when it's not necessary. In neither scenario is the system or the application "overriding" or bypassing the compositor. The compositor still has exclusive ownership of the display. And the application's swapchain is still configured as if it were going through a composition pass (e.g., it's probably not double-buffered).
"Fullscreen Optimisations" is how X11 has always worked.
Windows' actual exclusive fullscreen always caused tons of issues with Alt+Tab because it was designed for a time when you couldn't fit both a game and the desktop in VRAM.
X11 doesn't have an exclusive fullscreen mode either. [*] It has always relied on compositors and drivers to detect when fullscreen windows can be unredirected. Some programs chose to implement behavior closer to Windows's exclusive fullscreen mode, like minimizing on focus loss or grabbing input, but the unredirecting of the display pipeline doesn't depend on that.
[*] Well, there was an extension (can't recall the name right now) but not much used it and support was dropped at some point.
If you forget to handle a C++ exception you get a clean crash. If you forget to handle a C error return you get undefined behavior and probably an exploit.
Rust is better here (by a lot), but you can still ignore the return value. It's just a warning to do so, and warnings are easily ignored / disabled. It also litters your code with branches, so not ideal for either I-cache or performance.
The ultimate ideal for rare errors is almost certainly some form of exception system, but I don't think any language has quite perfected it.
Only when you don't need the Ok value from the Result (in other words, only when you have Result<(), E>). You can't get any other Ok(T) out of thin air in the Err case. You must handle (exclude) the Err case in order to unwrap the T and proceed with it.
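A minimal sketch of what I mean (the function names here are made up for illustration):

```rust
fn cleanup() -> Result<(), std::io::Error> {
    Ok(())
}

fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse()
}

fn main() {
    // Result<(), E>: dropping it compiles; rustc only emits an
    // unused_must_use warning, and `let _ = cleanup();` silences even that.
    cleanup();

    // Result<u16, E>: there is no way to reach the u16 without first
    // excluding the Err case.
    match parse_port("8080") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => eprintln!("bad port: {e}"),
    }
}
```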
> It also litters your code with branches, so not ideal for either I-cache or performance.
Anyhow erases the type of the error, but still indicates the possibility of some error and forces you to handle it. Functionality-wise, it's very similar to `throws Exception` in Java. Read my post
Poor man's checked exceptions. That's important. From the `?` you always see which functions can fail and cause an early return. You can confidently refactor and use local reasoning based on the function signature. The compiler catches your mistakes when you call a fallible function from a supposedly infallible function, and so on. Unchecked exceptions don't give you any of that. Java's checked exceptions get close and you can use `throws Exception` very similarly to `anyhow::Result`. But Java doesn't allow you to be generic over checked exceptions (as discussed in the post). This is a big hurdle that makes Result superior.
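A rough sketch of what that looks like, assuming the `anyhow` crate as a dependency (the file name and helper here are made up):

```rust
use anyhow::Result;

// The signature alone tells you this can fail, much like `throws Exception`,
// but without naming a concrete error type.
fn read_config(path: &str) -> Result<String> {
    // `?` marks the fallible call and propagates the error to the caller.
    let text = std::fs::read_to_string(path)?;
    Ok(text)
}

fn main() -> Result<()> {
    let cfg = read_config("app.toml")?;
    println!("{cfg}");
    Ok(())
}
```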
No, it's not quite the same. Checked exceptions force you to deal with them one way or another. When you use `?` and `anyhow` you just mark a call of a fallible function as such (which is a plus, but it's the only plus), and don't think even for a second about handling it.
Checked exceptions don't force you to catch them on every level. You can mark the caller as `throws Exception` just like you can mark the caller as returning `anyhow::Result`. There is no difference in this regard.
If anything, `?` is better for actual "handling". It's explicit and can be questioned in a code review, while checked exceptions auto-propagate quietly; you don't see where it happens or where a local `catch` would be more appropriate. See the "Can you guess" section of the post. It discusses this.
> But how is it different from other tools like doing it manually with photoshop?
Last I checked, Photoshop doesn't have an "undress this person" button? "A person could do a bad thing at a very low rate, so what's wrong with automating it so that bad things can be done millions of times faster?" Like seriously? Is that a real question?
But also I don't get what your argument is, anyway. A person doing it manually still typically runs into CSAM or revenge porn laws or other similar harassment issues. All of which should be applied directly to these AI tools, particularly those that lack even an attempt at safeguards.
RGB stripe isn't really better, it's just what ClearType happens to understand. A lot of these OLED developments came from either TV or mobile, neither of which had legacy subpixel hinting to deal with. So the subpixel layouts were optimized for both manufacturing and human perception. Humans do not perceive all colors equally; we are much more sensitive to green than blue, for example. Since OLED is emissive, it needs to balance how bright the emitted color is with how sensitive human wetware is to it.
> A lot of these OLED developments came from either TV or mobile
I remember getting one of the early Samsung OLED PenTile displays, and despite the display having a higher resolution on-paper than the display on the LCD phone I replaced it with, the fuzzy fringey text made it far less readable in practice. There were other issues with that phone so I was happy to resell it and go back to my previous one.
PenTile typically omits subpixels to achieve the resolution, so yes, if you have an LCD and an AMOLED with the exact same resolution and the AMOLED is PenTile, it won't be as sharp because it literally has fewer subpixels. But that's rapidly outpaced by modern PenTile AMOLEDs just having a pixel density that vastly exceeds nearly any LCD (at least on mobile).
There are RGB subpixel AMOLEDs as well (such as on the Nintendo Switch OLED), even though they aren't necessarily RGB stripe. As in, just because it's not RGB stripe doesn't mean it's PenTile; there are other arrangements. Those other arrangements are, for example, the ones complained about on current desktop OLED monitors like the one in the article. It's not PenTile causing the problems, since it's not PenTile at all.
iPhones all use PenTile and nobody complains about fuzzy text on them. Early generations of PenTile weren't that great, but modern ones look fantastic at basically everything. See also how the iPad Pro is widely considered to have probably the best display available at any price point - and it's not RGB stripe, either.
The PPI difference matters, though (and is, I think, why my Nokia N9's PenTile OLED looked rough). Desktop displays simply aren't at the same PPI/resolution density, which is why they're moving to this new technology.
If it didn't matter, I highly doubt they'd spend the huge money to develop it.
The author is clearly aware of `errors.Is`, as they use it in the snippet they complain about. The problem is that Go's errors are not exhaustive; the equivalent of ENOTDIR does not exist, so you can't `errors.Is` it. And while Stat's documentation does tell you what specific error type it'll be, that error type also doesn't have the error code. Just more strings!
Is this a problem with Go the language or Go the standard library or Go the community as a whole? Hard to say. But if the standard library uses errors badly, it does provide rather compelling evidence that the language design around it wasn't that great.
X also actively distributes and profits off of CSAM. Why shouldn't the law apply to distribution centers?