Hacker News | kllrnohj's comments

> X is dumb pipe.

X also actively distributes and profits off of CSAM. Why shouldn't the law apply to distribution centers?


There's a slippery slope version of your argument where your ISP is responsible for censoring content that your government does not like.

I mean, I thought that was basically already the law in the UK.

I can see practical differences between X/twitter doing moderation and the full ISP censorship, but I cannot see any differences in principle...


We don't consider warehouses & stores to be a "slippery slope" away from toll roads, so no, I really don't see any good-faith slippery-slope argument that makes enforcing the law against X the same as government censorship of ISPs.

I mean, even just calling it "censorship" is already trying to shove a particular bias into the picture. Is it government censorship that you aren't allowed to shout "fire!" in a crowded theater? Yes. Is that also a useful feature of a functional society? Also yes. Was that a "slippery slope"? Nope. Turns out people can handle that nuance just fine.


Google sponsors a lot of open source work: https://opensource.google/organizations-we-support

I wonder if sudo would be better off joining one of those open source foundations instead of staying solo. It's too small for these companies to justify a meaningful amount of contribution on its own, at which point the bureaucratic overhead of dealing with it probably kills the motivation.


This is the current list, but from a cursory look it lacks GSoC, which has been a significant source of new contributors since forever.

Apple was also compensated by Patreon in the form of the developer fee.

This is the triple-dip attempt.


This is what I've never understood about Apple's argument that they need to be compensated for the R&D and ops costs of running the App Store. They already have this! It's the developer program fee!!

As far as I can tell it wasn't even raised in the Epic case either.


I don't think that applies to Patreon, which, as far as I know, doesn't have any ads in the first place?

The app might make it easier for them to enforce DRM-like behaviors to prevent people from pirating creators' content, but I strongly suspect people aren't doing that on iOS regardless.


> given that we already had a working GUI. (Maybe that was the intention.)

Neither X11 nor Wayland provides a GUI. Your GUI is provided by GTK or Qt or Tcl/Tk or whatever. X11 had primitive rendering instructions that allowed those GUIs to delegate drawing to a central system service, but very few things do that anymore anyway. Meaning X11 is already just a dumb compositor in practice, except it's badly designed to be a dumb compositor because that wasn't its original purpose. As such, Wayland is really just aligning the protocol with what clients actually want & do.


> Most systems have a way to mostly override the compositor for fullscreen windows and for games

No, they don't. I don't think Wayland ever supported exclusive fullscreen, macOS doesn't, and Windows killed it a while back as well (in a Windows 10 update, like 5-ish years ago?).

Jitter is a non-issue for things you want vsync'd (like every UI), and for games the modern solution is G-Sync/FreeSync, which is significantly better than tearing.


> I don't think Wayland ever supported

Isn't that true for even the most basic features you expect from a windowing system? X11 may have come with everything and the kitchen sink; Wayland drops all that fun on the implementations.

GNOME has done unredirect on Wayland since 2019: https://www.reddit.com/r/linux/comments/g2g99z/wayland_surfa...

> Windows killed it

They replaced it with "Fullscreen Optimisations", which is mostly the same but more flexible, as it leaves detection of fullscreen-exclusive windows to the window manager.

https://devblogs.microsoft.com/directx/demystifying-full-scr...

As far as I can find, the update removed the option to turn this off.


In both the GNOME and Windows "Fullscreen Optimizations" cases, it's the compositor doing an internal optimization to avoid a copy when it's not necessary. In neither scenario is the system or the application "overriding" or bypassing the compositor. The compositor still has exclusive ownership of the display. And the application's swapchain is still configured as if it was going through a composition pass (e.g., it's probably not double-buffered).

> it's the compositor doing an internal optimization to avoid a copy when it's not necessary.

Yeah, it avoids doing the compositing part of being a compositor. It bypasses the entire pipeline.


"Fullscreen Optimisations" is how X11 has always worked.

Windows' actual exclusive fullscreen always caused tons of issues with Alt+Tab because it was designed for a time when you couldn't fit both a game and the desktop in VRAM.


X11 doesn't have an exclusive fullscreen mode either. [*] It has always relied on compositors and drivers to detect when fullscreen windows can be unredirected. Some programs chose to implement behavior closer to Windows' exclusive fullscreen mode, like minimizing on focus loss or grabbing input, but the unredirecting of the display pipeline doesn't depend on that.

[*] Well, there was an extension (can't recall the name right now), but not much used it, and support was dropped at some point.


If you forget to handle a C++ exception you get a clean crash. If you forget to handle a C error return you get undefined behavior and probably an exploit.

Exceptions are more robust, not less.


Yep. Forgetting to propagate or handle an error provided in a return value is very, very easy. If you fail to handle an exception, you halt.

For what it's worth, C++17 added [[nodiscard]] to address this issue.

You should compare exceptions to Result-style tagged unions in a language with exhaustiveness checks, like Rust. Not to return codes in C, lmao.

Everyone (except Go devs) knows that those are the worst. Exceptions are better, but still less reliable than Result.

https://home.expurple.me/posts/rust-solves-the-issues-with-e...
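
For illustration, a minimal Rust sketch of the exhaustiveness point (the `read_config` name and the `app.conf` path are made up):

    use std::fs;
    use std::io;

    // A fallible operation: the error is part of the return type.
    fn read_config(path: &str) -> Result<String, io::Error> {
        fs::read_to_string(path)
    }

    fn main() {
        // The compiler makes you consider both variants; deleting the
        // Err arm is a hard compile error, not a warning.
        match read_config("app.conf") {
            Ok(text) => println!("loaded {} bytes", text.len()),
            Err(e) => eprintln!("failed to load config: {e}"),
        }
    }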


Rust is better here (by a lot), but you can still ignore the return value. It's just a warning to do so, and warnings are easily ignored / disabled. It also litters your code with branches, so it's not ideal for either I-cache footprint or performance.

The ultimate ideal for rare errors is almost certainly some form of exception system, but I don't think any language has quite perfected it.
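
A small sketch of that warning in action (the `do_work` name is invented); `Result` is `#[must_use]`, so discarding it only trips a lint, and even that is easy to silence:

    fn do_work() -> Result<(), String> {
        Err("something broke".into())
    }

    fn main() {
        // Compiles with just a warning: "unused `Result` that must be used".
        do_work();

        // And even that warning is trivially silenced:
        let _ = do_work();
    }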


> you can still ignore the return value

Only when you don't need the Ok value from the Result (in other words, only when you have Result<(), E>). You can't get any other Ok(T) out of thin air in the Err case. You must handle (exclude) the Err case in order to unwrap the T and proceed with it.

> It also litters your code with branches, so not ideal for either I-cache or performance.

That's simply an implementation/ABI issue. See https://github.com/iex-rs/iex/

Language semantics-wise, Result and `?` are superior to automatically propagated exceptions.
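
A sketch of the "no Ok(T) out of thin air" point (the `parse_port` name and the port values are hypothetical):

    use std::num::ParseIntError;

    fn parse_port(s: &str) -> Result<u16, ParseIntError> {
        s.parse()
    }

    fn main() {
        // let port: u16 = parse_port("8080");  // compile error: mismatched types
        // The only way to get at the u16 is to handle the Err case somehow:
        let port = match parse_port("not-a-number") {
            Ok(p) => p,
            Err(_) => 8080, // fall back to a default
        };
        println!("connecting on port {port}");
    }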


> like Rust

Where people use things like anyhow.[0]

[0] https://docs.rs/anyhow/latest/anyhow/
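
For reference, typical anyhow usage looks roughly like this (a sketch; the `load` function and `config.toml` path are invented, and it assumes the `anyhow` crate as a dependency):

    use anyhow::{Context, Result};

    // The concrete error type is erased into anyhow::Error, but the
    // signature still says "this can fail" and the caller must deal with it.
    fn load(path: &str) -> Result<String> {
        let text = std::fs::read_to_string(path)
            .with_context(|| format!("failed to read {path}"))?;
        Ok(text)
    }

    fn main() -> Result<()> {
        let text = load("config.toml")?;
        println!("{} bytes", text.len());
        Ok(())
    }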


Anyhow erases the type of the error, but still indicates the possibility of some error and forces you to handle it. Functionality-wise, it's very similar to `throws Exception` in Java. Read my post.

As a matter of fact, I did when it appeared on HN.

> forces you to handle it.

By writing `?`. And we get a poor man's exceptions.


Poor man's checked exceptions. That's important. From the `?` you always see which functions can fail and cause an early return. You can confidently refactor and use local reasoning based on the function signature. The compiler catches your mistakes when you call a fallible function from a supposedly infallible function, and so on. Unchecked exceptions don't give you any of that. Java's checked exceptions get close and you can use `throws Exception` very similarly to `anyhow::Result`. But Java doesn't allow you to be generic over checked exceptions (as discussed in the post). This is a big hurdle that makes Result superior.
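
A rough sketch of that genericity point in Rust (the `retry` helper is invented, not from any particular crate): a combinator can stay generic over whatever error type the caller's operation produces:

    // Retry a fallible operation, staying generic over its error type E.
    fn retry<T, E>(mut op: impl FnMut() -> Result<T, E>, attempts: u32) -> Result<T, E> {
        let mut last = op();
        for _ in 1..attempts {
            if last.is_ok() {
                break;
            }
            last = op();
        }
        last
    }

    fn main() {
        let mut calls = 0;
        // The error type here is &str, but io::Error etc. work just as well.
        let result: Result<u32, &str> = retry(
            || {
                calls += 1;
                if calls < 3 { Err("not yet") } else { Ok(calls) }
            },
            5,
        );
        println!("{result:?}"); // prints: Ok(3)
    }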

> Poor man's checked exceptions.

No, it's not quite the same. Checked exceptions force you to deal with them one way or another. When you use `?` and `anyhow`, you just mark a call to a fallible function as such (which is a plus, but it's the only plus), and don't think even for a second about handling it.


Checked exceptions don't force you to catch them on every level. You can mark the caller as `throws Exception` just like you can mark the caller as returning `anyhow::Result`. There is no difference in this regard.

If anything, `?` is better for actual "handling". It's explicit and can be questioned in a code review, while checked exceptions auto-propagate quietly; you don't see where it happens or where a local `catch` would be more appropriate. See the "Can you guess" section of the post. It discusses this.
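
To illustrate (a sketch with invented names): every `?` is a visible early-return point a reviewer can question:

    use std::fs;
    use std::io;

    fn load_keys() -> Result<Vec<String>, io::Error> {
        // Each `?` is a visible, reviewable early-return point.
        let raw = fs::read_to_string("keys.txt")?;
        Ok(raw.lines().map(|line| line.to_string()).collect())
    }

    fn main() {
        match load_keys() {
            Ok(keys) => println!("{} keys", keys.len()),
            // A reviewer might ask here whether a local recovery
            // would be better than bubbling the error up.
            Err(e) => eprintln!("could not load keys: {e}"),
        }
    }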


> If you forget to handle a C++ exception you get a clean crash

So clean that there's no stack trace information to go with it, making the exception postmortem damn near useless.


> But how is it different from other tools like doing it manually with photoshop?

Last I checked, Photoshop doesn't have an "undress this person" button? "A person could do a bad thing at a very low rate, so what's wrong with automating it so that bad things can be done millions of times faster?" Like, seriously? Is that a real question?

But also I don't get what your argument is, anyway. A person doing it manually still typically runs into CSAM or revenge porn laws or other similar harassment issues. All of which should be leveled directly at these AI tools, particularly those that lack even an attempt at safeguards.


RGB stripe isn't really better; it's just what ClearType happens to understand. A lot of these OLED developments came from either TV or mobile, neither of which had legacy subpixel hinting to deal with. So the subpixel layouts were optimized both for manufacturing and for human perception. Humans do not perceive all colors equally; we are much more sensitive to green than blue, for example. Since OLED is emissive, it needs to balance how bright the emitted color is with how sensitive human wetware is to it.


> A lot of these OLED developments came from either TV or mobile

I remember getting one of the early Samsung OLED PenTile displays, and despite the display having a higher on-paper resolution than the LCD phone it replaced, the fuzzy, fringey text made it far less readable in practice. There were other issues with that phone, so I was happy to resell it and go back to my previous one.


PenTile typically omits subpixels to achieve the resolution, so yes, if you have an LCD and an AMOLED with the exact same resolution and the AMOLED is PenTile, it won't be as sharp because it literally has fewer subpixels. But that's rapidly outpaced by modern PenTile AMOLEDs just having a pixel density that vastly exceeds nearly any LCD (at least on mobile).

There are RGB-subpixel AMOLEDs as well (such as on the Nintendo Switch OLED), even though they aren't necessarily RGB stripe. As in, just because it's not RGB stripe doesn't mean it's PenTile; there are other arrangements. Those other arrangements are, for example, the ones complained about on current desktop OLED monitors like the one in the article. It's not PenTile causing problems, since it's not PenTile at all.


The article shows a Mac; it's not just ClearType...

PenTile, for example (as another commenter pointed out), was woeful with text and made things look fuzzy.

I'm not a fan of ClearType, but even on Linux, OLED text rendering just isn't as good in my experience (at normal desktop monitor DPI).

Perhaps it's down to the algorithms most OSes use instead of ClearType, but why hasn't it been solved by this point even outside Windows?


iPhones all use PenTile and nobody complains about fuzzy text on them. Early generations of PenTile weren't that great, but modern ones look fantastic at basically everything. See also: everyone considers the iPad Pro to have probably the best display available at any price point, and it's not an RGB stripe, either.


> and it's not an RGB stripe, either.

The PPI difference matters though (and I think that's why my Nokia N9's PenTile OLED looked rough). Desktop displays simply aren't at the same PPI/resolution density, which is why they're moving to this new technology.

If it didn't matter, I highly doubt they'd spend the huge money to develop it.


I dunno, my phone's OLED (OnePlus 5T) looks perfectly fine even with small fonts...


The author is clearly aware of `errors.Is` as they use it in the snippet they complain about. The problem is Go's errors are not exhaustive; the equivalent to ENOTDIR does not exist. So you can't `errors.Is` it. And while Stat does tell you in the documentation what specific error type it'll be, that error type also doesn't have the error code. Just more strings!

Is this a problem with Go the language or Go the standard library or Go the community as a whole? Hard to say. But if the standard library uses errors badly, it does provide rather compelling evidence that the language design around it wasn't that great.


> the equivalent to ENOTDIR does not exist

https://pkg.go.dev/syscall#ENOTDIR

