The only mosh CVE [1] was in the terminal emulator (a DoS that could only be triggered by a local user), not in the protocol. There have been no vulnerabilities in mosh's UDP protocol.
QUIC datagrams not having a stream ID was a compromise, which is why the H3-DGRAM draft exists to add them. Any other protocol can cite and use H3-DGRAM even if it is not itself using HTTP/3.
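For context on how H3-DGRAM does that demultiplexing: each datagram payload is prefixed with an identifier encoded as a QUIC variable-length integer (the draft's name for this field has shifted across revisions, e.g. flow identifier / quarter stream ID). A minimal sketch of that varint encoding in C (the function name is mine, not from any spec):

```c
#include <stddef.h>
#include <stdint.h>

/* Encode a QUIC variable-length integer: the top two bits of the
 * first byte give the total length (1, 2, 4, or 8 bytes).
 * Returns the number of bytes written; v must be < 2^62. */
size_t quic_varint_encode(uint64_t v, uint8_t *buf) {
    if (v < (1ULL << 6)) {              /* 1 byte, prefix 00 */
        buf[0] = (uint8_t)v;
        return 1;
    }
    if (v < (1ULL << 14)) {             /* 2 bytes, prefix 01 */
        buf[0] = 0x40 | (uint8_t)(v >> 8);
        buf[1] = (uint8_t)v;
        return 2;
    }
    if (v < (1ULL << 30)) {             /* 4 bytes, prefix 10 */
        buf[0] = 0x80 | (uint8_t)(v >> 24);
        buf[1] = (uint8_t)(v >> 16);
        buf[2] = (uint8_t)(v >> 8);
        buf[3] = (uint8_t)v;
        return 4;
    }
    buf[0] = 0xC0 | (uint8_t)(v >> 56); /* 8 bytes, prefix 11 */
    for (int i = 1; i < 8; i++)
        buf[i] = (uint8_t)(v >> (8 * (7 - i)));
    return 8;
}
```

So a receiver can peel off one varint from the front of every datagram and route on it, without caring what the rest of the payload is.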
what possible worse position could there have been? i don't understand why anyone would protest the basic common sense of having a multiplexing protocol do multiplexing.
making it a hybrid multiplexing/not-multiplexing protocol, one that can easily wrap & encapsulate streams but is dead-ass useless & requires subprotocol negotiation & tracking to deal with anything else, seems like unbelievably busted jank. i literally can't imagine a worse way to do this. how was this a compromise? who does this satisfy? i would have told any extremist protesting common sense to take a hike, would have started my own draft, would have died on this hill.
this is murderously bad incidental complexity. it's uncontainedly bad. what if someone doesn't opt to use the h3-dgram variant? then we are up shit's creek: our protocols conflict & there's no way to resolve this mess. how do we differentiate channels (my term for "streams" of unordered datagrams) if everyone has different ways of defining channels? we invite in infinite forms of incompatibility. maybe we need to revamp the h3 encapsulation & sprinkle a couple magic bytes as preamble, to try to mark our turf, declare our subprotocol.
this is the definition of insanity. this is a stupid awful mess. irreconcilable & infinitely confusing, forever & ever. disastrously wrong. how could datagrams possibly be done worse? what was "compromised"?
i had such high hopes QUIC would be the transport protocol to encapsulate everything, to make a new dawn of connectivity possible. this is weep-into-my-keyboard stupid/bad news, this is impossibly broken, this is disaster: DATAFRAME is broken by design, due to "compromise", and nothing non-stream-based can work together ever atop QUIC (except in some special lucky cases) because the basic, most core, simple, most sensible of guiding principles of QUIC got thrown out the door to make DATAFRAME, and it's ruin & damnation for it. DATAFRAME could not have a more elementary mistake if it tried. sctp notably managed to avoid shooting themselves in a vital area like this. abandon this DATAFRAME effort. it is so awful that it jeopardizes the overall QUIC effort itself.
futex is a Fast Userspace muTEX. It's the syscall that helps implement a mutex: when two or more threads are waiting on the lock, it lets the waiters sleep so other processes/threads can be scheduled and do useful work during the wait.
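Roughly, the "fast" part is that the uncontended path is a single atomic compare-and-swap in userspace; the kernel is only entered on contention. A deliberately naive two-state sketch (Linux-only, no FUTEX_PRIVATE_FLAG, not how a production mutex like glibc's is actually structured):

```c
#include <stdatomic.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

static atomic_int futex_word = 0;   /* 0 = unlocked, 1 = locked */

static long futex(atomic_int *addr, int op, int val) {
    /* There is no glibc wrapper for futex; call it via syscall(2). */
    return syscall(SYS_futex, addr, op, val, NULL, NULL, 0);
}

static void futex_lock(atomic_int *w) {
    int expected = 0;
    /* Fast path: one CAS, no syscall when uncontended. */
    while (!atomic_compare_exchange_strong(w, &expected, 1)) {
        /* Contended: sleep in the kernel until *w changes from 1.
         * Returns immediately (EAGAIN) if *w is no longer 1. */
        futex(w, FUTEX_WAIT, 1);
        expected = 0;
    }
}

static void futex_unlock(atomic_int *w) {
    atomic_store(w, 0);
    futex(w, FUTEX_WAKE, 1);        /* wake at most one waiter */
}
```

Real implementations add a third "locked with waiters" state so that unlock can skip the FUTEX_WAKE syscall when nobody is sleeping.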
We have built a VPN over QUIC, and the core code is open source already [0].
We're working on standardizing "IP Proxying" over QUIC as part of the MASQUE working group at IETF. So far, we've adopted a requirements document [1] and have started work on an implementation [2].
I can see a number of jurisdictions around the world either blocking and/or profiling this sort of traffic. Is any form of "chaffing" / plausible deniability built into the protocol?
Right now we're focusing on building a functional core protocol and making sure it's sufficiently extensible. It should be possible to build chaffing as an add-on extension down the line.
In #2, why is the path hardcoded to /? One of the things I've considered somewhat important in my similar work (on Orchid, using HTTPS+WebRTC) is the ability to "embed" a VPN as a sub-resource on some existing website.
I’ve seen an IDS decide to classify all traffic, including management traffic, as hostile. The result was an outage for one of the larger web shops in Germany.
An IDS basing its fundamental action (detection) partly on ML can definitely be a good, valuable idea. An IPS basing its fundamental action (blocking traffic) partly on ML is the problem.
Well, it is said that the only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards.
Blanket denying all traffic is a good first step to ensuring that the system is really, really secure :P
It doesn’t really make any difference, does it? The path isn’t used to indicate an IP packet flow; the “CONNECT-IP” method is what indicates that you want to send an IP packet flow to the server.
You could use the path to indicate different VPN endpoints, but ultimately the path isn’t needed at all.
Additionally all of this is gonna be inside a TLS session, so no external viewer will ever see any of the headers, including the path.
TL;DR the RFC doesn’t make this VPN endpoint a traditional HTTP resource at all. It’s a special new HTTP method (like GET or POST) that indicates the client wants the server to treat the following data as an IP stream.
FWIW, I do get that it is its own method, and I understand the intent of the specification (and thereby why it "wouldn't matter" in that worldview), but it feels weirdly non-HTTP for this functionality to not be treated as a resource... though, fair enough, looking at it again, the existing old-school proxy CONNECT method has this flaw too :/. I just feel like people should be able to easily "mount" functionality onto their website in a folder, and by and large most HTTP methods support this, with the ability to then take that sub-folder and internal-proxy it to some separate backend origin server. (Like, I would want to take the flows for anything under /secret/folder/--whether GET or PUT or PROPPATCH or CONNECT-IP--and have my front-end HTTP terminator transparently proxy them, without having to understand the semantics of the method. It could very well be that I am just living in a dream of trying way too hard to be fully-regular in a world dominated by protocols that prefer to define arbitrary per-feature semantics ;P.)
I guess I am also very curious why you aren't defining this over WebTransport (which would allow it to be usable from websites--verified by systems like IPFS--to build e2e-encrypted raw socket-like applications, as I imagine CONNECT-IP won't be something web pages will be able to use). (For anyone who reads this and finds this thought process "cool", talk to me about Orchid some time, where I do e2e multi-hop VPN over WebRTC ;P.)
The Great Firewall has been probing for ages now. It pauses the connection and sends its own request, checking what kind of protocol the server returns. Nothing stops these firewalls from trying ‘connect-ip’. The path could be used as a secret ‘key’ to thwart too-curious firewalls (though the protocol probably won't do much against DPI anyway).
It would then be weirdly important that a mis-authenticated CONNECT-IP then look the same as if CONNECT-IP were not implemented, as opposed to returning an authorization error of some kind (which seems weirder than hiding the method via the path).
True. But the path method has issues too, like, how would you implement that with HTTP 1.1? The "path" field is actually part of the target, so you would have to send the request like `CONNECT-IP https://remote-server/mount-point` which seems backwards. (EDIT: and also you still need to make sure you don't give the wrong kind of error if the path is wrong, like a 404 instead of a 501 not implemented or whatever)
As a possible solution for the discovery by authorization error problem, maybe a convention could be established where an authorization error is returned by default if CONNECT-* isn't configured.
Huh. It is possible that I simply didn't understand the spec, as I was (and am...) pretty sure that CONNECT-IP (at least at the HTTP semantics level) just takes a path (and only /)--like GET--without an associated "extra" authority, as it is creating an IP tunnel: there is no remote host (as there would be with an old-school CONNECT); like, what is the "target" here? I am just talking to the server and asking it to give me an IP bridge, right? What does "remote-server" represent to an IP tunnel?
I guess authentication via https before ‘connect-ip’ would work. Or, authenticating with headers in the same first ‘connect-ip’ request, if the server responds with ‘invalid method’ when not authenticated.
By the way, probing at connection time, that I mentioned in my original comment, isn't actually necessary. The GFW will just scan known popular-ish hosts, trying ‘connect-ip’ and banning everything that works as a proxy. (Connection-time probing would just make it easier to discover the hosts, such that collecting the IPs and stats separately is not needed.)
Those are the fast-moving and the new parts of the internet. I bet that in a few years IPv6 deployment will slow down as the remaining systems become older, less maintained, or even opinionated (like me). For many years to come, some kind of IPv4 connectivity will remain a requirement to access all of the internet/web.
I'm not switching my private network. This has nothing to do with wider adoption, nor do I have issues with IPv6 as a protocol.
What's blocking me is router firmware. It can do IPv6, but only as an afterthought. Sadly, no level of adoption is going to fix that, until I buy a new router.
Time for a new router, then, and by "new" I mean any produced in the last ten years. I have several old routers in a junk drawer that only do 10/100 and even they support it.
The specific detail that you've noticed in the Go implementation has to do with RFC 7540, Section 9.2.2 (https://tools.ietf.org/html/rfc7540#section-9.2.2) which requires TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 for TLS 1.2 only. Deployments of the future TLS 1.3 are free to not support this cipher, if I am reading the RFC correctly.
That is to say, you're correct that a server configured for a 100% on SSLLabs will not support HTTP/2, but I agree with davidben that SSLLabs is incorrect here in incentivising AES-256, particularly in CBC mode, for the 100% score.
I ran a nearly identical screen theme for a long time, before switching to byobu. Nice to see this broken down, screen's format strings are quite dense.
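For anyone curious just how dense those format strings get, here's a typical hardstatus line in a .screenrc (the colors and layout here are just one arbitrary taste, not the theme from the article):

```
# hostname | window list (current window highlighted) | date and clock
hardstatus alwayslastline "%{= kw}%H %= %-w%{= bw}%n %t%{-}%+w %= %Y-%m-%d %c"
```

Here %H is the hostname, %-w/%+w are the windows before/after the current one, %n and %t are the current window's number and title, %c is the clock, %= pads with spaces, and %{...}/%{-} push and pop color settings.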
[1] https://nvd.nist.gov/vuln/detail/CVE-2012-2385