Where did you get that idea? These certs have always been intended for any TLS connection of any application. They are also in no way specific or "designed for" HTTPS. Neither the industry body formed from the CAs and software vendors, nor the big CAs themselves are against non-HTTPS use.
> Welcome to the CA/Browser Forum
>
> The Certification Authority Browser Forum (CA/Browser Forum) is a voluntary gathering of Certificate Issuers and suppliers of Internet browser software and other applications that use certificates (Certificate Consumers).
> Does Let’s Encrypt issue certificates for anything other than SSL/TLS for websites?
>
> Let’s Encrypt certificates are standard Domain Validation certificates, so you can use them for any server that uses a domain name, like web servers, mail servers, FTP servers, and many more.
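To make the point concrete, here's a minimal sketch (assumptions: certbot's default `/etc/letsencrypt/live/` layout, and `mail.example.com` is a placeholder domain) showing that the same certificate files drive any TLS server in Python's stdlib, not just HTTPS:

```python
import ssl

# certbot's default file layout (assumption); "mail.example.com" is a placeholder
CERT_CHAIN = "/etc/letsencrypt/live/mail.example.com/fullchain.pem"
PRIVATE_KEY = "/etc/letsencrypt/live/mail.example.com/privkey.pem"

def server_tls_context(chain=CERT_CHAIN, key=PRIVATE_KEY):
    """Build a server-side TLS context usable by any protocol -- SMTP,
    IMAP, FTPS, a raw socket service -- not just HTTPS. The certificate
    only binds the key to a domain name; it says nothing about HTTP."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.load_cert_chain(chain, key)
    return ctx
```

Nothing here is web-specific: you'd pass the same context to `smtplib`, an FTPS server, or `ctx.wrap_socket()` on a plain listening socket.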
Cool, is the core sales pitch still a lie on Linux? Yes? Lovely, no thanks then.
</snark>
The big appeal of Tauri is that you don't need to ship a webview with every app. On Linux, Tauri not only ships its own webview, it's also an old and fundamentally broken webview. So you get fast and small apps on every platform, but huge and slow apps on Linux.
I'm not saying it's their fault. It's just not something they're interested in fixing the right way and that's their choice. But the false advertising is entirely their fault.
The big appeal for me was that Tauri didn't ship an entire Chrome browser to make it work. It never even occurred to me to scrutinize the webview it uses in such detail.
> On Linux, Tauri not only ships its own webview, it's also an old and fundamentally broken webview
I'd love to hear some details on this. What is Tauri shipping now and what should it ship instead?
I agree, that's the biggest appeal. But on Linux, there isn't really a "system webview", so they use webkit2gtk. Most systems happen to have this installed as a dependency for something else, so it's a reasonable choice.
The thing is, that library is based on an ancient version of webkit, which is slow and lacks some modern web features. There are some open issues about it and the response is "yea, we know, we're doing the best with what we have", which is fairly reasonable.
A secondary complicating factor is that the main "universal binary" format for Linux is AppImage, which by design requires you to ship all the dependencies. So you end up with the worst of both worlds: you're still shipping an entire webview with every app, just like Electron, but unlike Electron, which is based on recent Chromium, the webview is based on outdated WebKit.
There have been some attempts to bundle CEF (basically Chromium) instead of Webkit and there is also a testing branch that uses Servo, but those only solve the second issue.
Ideally, the Linux ecosystem would standardise on a webview implementation and Tauri could link to that, just like they link to WebKit on macOS and WebView2 (Chromium-based Edge) on Windows. It could be based on Blink (Chromium) or Gecko (Firefox), or even better, it could be just a standard interface and the user could pick their implementation. But since the Tauri folks would be the first and for a while only people using it, they'd probably have to do most of the work themselves.
Might help to have a companion app that uses the same embedded webview and is nearly indispensable, at least for GUI distros... something akin to MS Compiled Help (CHM), which I always thought was a pretty great idea.
I mean... it'd be a trip down the MS route, but maybe working with the Cosmic devs on this one... getting a baseline webview in place at the core, tooling support for help, email, etc. Getting Cosmic, Gnome and KDE all on board would be a massive boost and cover most users.
I really think most of the criticism towards embedded browser engines would be moot if there was an engine where anything unrelated to layout and rendering had to be imported piece by piece. Most of the time, we just want what HTML and CSS give us (layout and styling) plus an element-node API like the DOM. So many other things that have nothing to do with layout and styling get wrapped into even the most stripped-down browser engines, and that's what bloats them. I don't see why we can't have a GUI toolkit that just renders HTML and CSS and weighs in at mere dozens of megabytes. I don't care that lots of existing Node modules wouldn't work out of the box. Give me HTML rendering without the kitchen sink. It seems we aren't capable of this. From what I can tell, it can't even be done easily with Servo.
That's what Sciter does - https://sciter.com/ - it just gives you a lightweight HTML / CSS / Javascript "webview" engine for layout and rendering. Like you pointed out, that should be enough. But corporates want a "webview" that is an OS so that they can do everything with Javascript on it (hence why embedded Chrome with NodeJS is so popular).
Why are y'all so scared only when it's the government using the companies to influence people. The companies do it themselves already and in a much more insidious way than any government likely will.
You are already being fed propaganda and having your interactions controlled and monitored in order for the people in power to gain more power and stay in power indefinitely. This is already almost 1984. It's just not politicians in power, it's capitalists.
How is that better? At least we can, in theory, elect different politicians. With capitalists, that doesn't exist even in theory.
Don't forget the second half of that feedback loop: other manufacturers come out with their poor approximations of those features at lower prices, consumption shifts to that because quality isn't clear from the labels, the quality manufacturers don't move enough volume to hit similar prices, so they end up either killing them or cutting corners.
So then it becomes a cycle. It's risky to make a high quality initial product that's expensive because it requires the buyer to understand and trust why they should pay more.
Eventually the market demands the higher quality and the pro series gains adoption, only for the cheap stuff to come in again.
People definitely care about things that a more open platform brings you, but today's open platforms have really bad downsides. The thing is, those downsides are artificial. They were manufactured by the corporations that prefer to be in control of our devices. It's not the natural state of things.
I often get asked by friends and family "can I get rid of annoyance X" or "can I have feature Y" on their Android phones, usually because they see that I've done it on my phone [0]. The answer is always "yes, I can set that up for you, but this will take an hour, I need to wipe all your data and a bunch of your apps will stop working".
There is no reason it should be like that. That was a choice by the manufacturers. They developed these DRM features and actively market them to developers - to the point where I can't submit an update to my little bus app without getting a prompt to add SafetyNet to it. They even somehow convinced pentesters to put "no cert pinning, root check and remote attestation" into their reports, so bank and government apps are the worst offenders.
It's not like people decided they prefer closed to open. They prefer working to non-working. And open platforms were broken intentionally by the developers of the closed ones.
It's like saying Americans all love their cars and simply decided not to use public transport. No, their public transport was crippled to the point of uselessness and their neighbourhoods were built in a way that makes public transport unfeasible. Cars work for them and trains don't. This was not their choice and it's painfully obvious when you see them go literally anywhere else on the planet and be amazed at how great trains are.
[0] Things like: global adblock, removing bloatware, floating windows or splitscreen, miracast, slide for brightness/volume, modded apps, lockscreen gestures, app instances, working shared clipboard, NFC UID emulation, automatic tethering, audio EQ...
Sure people will care about things on paper or in conversation, but my point is that most don't care enough to do anything about it.
> There is no reason it should be like that
Most businesses exist primarily to make money, so they have all the reasons for their bad designs and behavior.
> They prefer working to non-working
Of course, but TANSTAAFL. We keep rewarding the providers with our money and data, so the beatings will continue if you want to keep up with the Joneses.
I hear the point you're making with the comparison to transportation, but you can't just build a road or a railway, while you can absolutely build software.
I think the point is more "in order to prevent people from scraping their site, which is against their ToS, they scraped some other site, against its ToS".
How is it not? For all but some old and insecure or fairly exotic setups, DKIM/DMARC validates the sender server is authorised for that domain and the server's account-based outbound filtering validates it was sent by the owner of that mailbox.
If the sending server doesn't do DKIM, it's fundamentally broken, move your email somewhere else. If the sending server lets any user send with an arbitrary local part, that's either intended and desired, or also fundamentally broken. If there are other senders registered on the domain with valid DKIM and you can't trust them, you have bigger problems.
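A stdlib-only sketch of the alignment half of that check (verifying the signature itself needs a DKIM library such as dkimpy and the signer's public key from DNS; this only checks that the DKIM signing domain lines up with the From domain, DMARC-style):

```python
import email
import email.utils
from email import policy

def dkim_signing_domain(msg):
    # Pull the d= tag (the signing domain) out of the DKIM-Signature header
    sig = str(msg.get("DKIM-Signature", ""))
    for part in sig.split(";"):
        key, _, value = part.strip().partition("=")
        if key.strip().lower() == "d":
            return value.strip().lower()
    return None

def from_domain(msg):
    # Domain of the RFC 5322 From address -- what the user actually sees
    addr = email.utils.parseaddr(str(msg.get("From", "")))[1]
    return addr.rpartition("@")[2].lower()

def dmarc_aligned(raw: bytes) -> bool:
    """Relaxed DMARC alignment: the DKIM d= domain must equal the From
    domain or be an organizational parent of it."""
    msg = email.message_from_bytes(raw, policy=policy.default)
    d = dkim_signing_domain(msg)
    f = from_domain(msg)
    if not d or not f:
        return False
    return f == d or f.endswith("." + d)
```

A signature from `d=evil.test` over a `From: ceo@example.com` header fails this check, which is exactly why you can't impersonate a domain that enforces DKIM+DMARC.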
> If the sending server doesn't do DKIM, it's fundamentally broken,
No, it just won't get very good deliverability, because everything it talks to is now fundamentally broken.
DKIM shouldn't exist. It was a bad idea from day one.
It adds very little real anti-spam value over SPF, but the worst part is exactly the model you describe. DKIM was a largely undiscussed, back-door change to the attributability and repudiability of email, and at the same time the two-tiered model it created is far, far less effective and usable than just signing messages end-to-end at the MUA.
DKIM isn't an antispam measure, it's an anti-impersonation measure. With DKIM, you can't impersonate a domain, which means you can trust that any email you get from an email provider was sent in accordance with that provider's security policy. In most cases, that policy is "one user owns one localpart and they can only send from it if they have their password". In cases where it's not, this is intentional and known by their users.
If you as a user can't trust your email server, you've already lost, no matter if something is authorized by an outbound email or a click on an inbound link. If your mail server is evil or hacked, it can steal your OTP token or activation link just as easily as it can send an email in your name.
Yes, end to end authentication is definitely better, but this isn't what people are discussing here. With enforced DKIM, "send me an email" has a nearly identical security profile to "I've emailed you a link, click on it". Both are inferior to end-to-end crypto.
Calling this "paying to unlock ports" is disingenuous. I'm also a T-2 customer and have run into this before. They block ports on dynamic IPs, but if you pay +2€/mo for static, this is unlocked. This seems reasonable. If you're not paying for static IPv4, you're paying for "internet access", whether that's a rarely changing dynamic IPv4, a constantly changing IPv4 or full CGNAT.
Would you also say your mobile phone operator is violating net neutrality by putting you behind CGNAT that you can't forward arbitrary ports through? You can pay a bunch of money to get a private APN and get public IPv4 addresses. Would you call that an unblock fee?
I don't know about that law, but GP's point was that you don't get a public IP anyway, firewall or not. And with this NAT in place, you can't ask them to forward specific ports to your equipment.
In France, CG-NAT is getting widespread even for fixed, FTTH links. I'm typing this connected to SFR, which provides a static IPv6 /56, but IPv4 is behind CG-NAT. I can't host anything on IPv4. I think there's an option to get a fixed, internet routable address, but not on the "discount" plan I'm on. I hear you maybe can ask support to get you out of CG-NAT, but that doesn't seem very reliable.
Free (local ISP), by default, doesn't give a static IP for fiber, but you can ask for one for free through your online account page (you just need to tick a box).
> They block ports on dynamic IPs, but if you pay +2€/mo for static, this is unlocked. This seems reasonable.
Why does that seem reasonable to you? Why should dynamic IPs not be able to receive incoming connections? It costs them nothing to let those packets through.
> disingenuous
Bad.
> Would you also say your mobile phone operator is violating net neutrality by putting you behind CGNAT that you can't forward arbitrary ports through?
CGNAT is pretty awful, but at least there's a reason for connections to fail.
But sure, if I had control I would mandate that CGNAT lets you forward ports. Maybe you don't always control the external port, but there shouldn't be any other compromises.
> You can pay a bunch of money to get a private APN and get public IPv4 addresses. Would you call that an unblock fee?
That's a workaround to get a different connection, not an unblock, so no.
Firstly, dynamic IPs are quickly reused, so if one customer gets an IP onto a bunch of firewall blocklists because they were operating services that got exploited (like an open relay for spam, an email backscatter generator, DNS that was used for amplification, SMB that hosted one-click executable Windows malware...), some random unrelated customer will later have problems with their internet connection. After a while, you could poison a large chunk of the pool, and then the ISP has to deal not just with you, but also with a bunch of other angry customers, while begging all the firewall vendors to unblock those IPs.
If you get static, you keep that IP for a while. You suffer the consequences of your bad setup, you have to deal with FW vendors and after you leave, the IP will be offline for long enough that it will probably "cool off".
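The mechanics of why a listing follows the IP rather than the customer: a DNS blocklist is queried by reversing the address's octets under the list's zone, so whoever inherits the address inherits its history. A small sketch (the zone name is just an example; Spamhaus is a real DNSBL):

```python
import ipaddress

def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    """Hostname to resolve when checking an IPv4 address against a DNS
    blocklist: octets reversed, appended to the list's zone. An A-record
    answer (conventionally in 127.0.0.0/8) means the IP is listed."""
    octets = str(ipaddress.IPv4Address(ip)).split(".")
    return ".".join(reversed(octets)) + "." + zone
```

The next holder of a recycled dynamic IP resolves to the exact same name, which is the "poisoned pool" problem: the listing outlives the customer who earned it.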
And secondly, while I don't like it, we need to keep in mind net neutrality was not written for selfhosters. It was written so an ISP can't zero-rate their own streaming service, or block their competitors. It was about internet access, not internet participation. The overwhelming majority of people are not and don't care to be "on" the internet, they want to "access" things that are on the internet. That's why NAT is still everywhere.
Define quickly? My modem stays attached on the same IP for months at a time.
> so if one customer get an IP onto a bunch of firewall blocklists
That can happen anyway! Most of those are based on outgoing connections!
> a bunch of other angry customers as well as beg all the firewall vendors to unblock those IPs
Does this happen today on the huge number of ISPs that let you open ports on a dynamic IP? I'm not aware of it.
> we need to keep in mind net neutrality was not written for selfhosters
Well I'm not really focused on the idea of net neutrality, just whether it's reasonable to make customers unconnectable, and I say it's not reasonable.
I'd agree if you picked Google Docs or something like that, but Gmail? Chrome?? Come on! Edge is just Chrome with extra features, plenty of people use Bing without even noticing and many even non-techy people are fine with DuckDuckGo, good free email providers are everywhere (yahoo, hotmail, proton...).
From https://cabforum.org/
> Welcome to the CA/Browser Forum
>
> The Certification Authority Browser Forum (CA/Browser Forum) is a voluntary gathering of Certificate Issuers and suppliers of Internet browser software and other applications that use certificates (Certificate Consumers).
From https://letsencrypt.org/docs/faq/
> Does Let’s Encrypt issue certificates for anything other than SSL/TLS for websites?
>
> Let’s Encrypt certificates are standard Domain Validation certificates, so you can use them for any server that uses a domain name, like web servers, mail servers, FTP servers, and many more.