> What benefit is SSE providing here? Let the client decide when a session starts/ends by generating IDs and let the server maintain that session internally.
The response is generated asynchronously, instead of within the HTTP request/response cycle, and sent over SSE later. But emulating WS with HTTP requests+SSE seems very iffy, indeed.
Well, with SSE the server and client are both holding an HTTP connection open for a relatively long period of time. If the server is written in a language that supports async paradigms, then an HTTP request that needs async IO will use about the same amount of resources anyway. And when the response is finished, that connection is closed and its resources are freed, whereas SSE will keep them around for much longer.
Yes, and the client may make multiple requests, and if all of them take long to process you may end up with a lot of open connections at the same time (at least on HTTP/1), so there is a point to fast HTTP requests + SSE instead of slow requests (and no SSE). Granted, if the server speaks HTTP/2 the requests can share the same connection, but then it'd be similar to just using WS for this usage. Also, this allows you to queue the work and process it either sequentially or concurrently.
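A minimal Ruby sketch of that queue idea (the names are made up, and a plain `results` queue stands in for the SSE stream a real server would push events to):

```ruby
# Work queue sketch: fast HTTP requests enqueue jobs and return
# immediately; a single worker processes them and would push each
# result to the client's SSE stream.
jobs    = Queue.new
results = Queue.new # hypothetical stand-in for the SSE push channel

worker = Thread.new do
  while (job = jobs.pop)
    results << "done: #{job}" # in a real app: write an SSE event
  end
end

# each "HTTP request" just enqueues work and returns right away
3.times { |i| jobs << "job-#{i}" }
jobs << nil # signal the worker to stop

out = []
3.times { out << results.pop }
worker.join
```

With a single worker the jobs are processed sequentially; spawning several workers over the same `jobs` queue would make it concurrent instead.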
By async I meant a process that may take longer than you are willing to do within the request/response cycle, not necessarily async IO.
Not if you need bidirectional communication, for example a ping-pong of request/response. That is solved with WS, but hard to do with SSE + requests. The client's requests may not even hit the same SSE server, depending on your setup. There are workarounds, obviously, but it complicates things.
In reality you would build your application server on top of the HTTP/2 server, so you wouldn't have to deal with multiplexing; the server hides that from you, so it's the same as an HTTP/1 server (e.g., you pass some callback that gets called to handle each request). If you implement HTTP/2 from scratch, multiplexing is not even the most complex part... It's rather the sum of all the parts: HPACK, flow control, stream state, frames, settings, the large amount of validation, and so on.
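A sketch of what that callback interface could look like, loosely Rack-shaped (the request/response hashes here are invented for illustration; a real server defines its own types):

```ruby
# Hypothetical callback-style interface: the server owns connections
# and HTTP/2 multiplexing; the application only ever sees one request
# at a time.
app = lambda do |request|
  {
    status:  200,
    headers: { "content-type" => "text/plain" },
    body:    "handled #{request[:path]}"
  }
end

# the server would call this once per decoded HTTP/2 stream
response = app.call({ path: "/hello" })
```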
This may be true with some stacks, but my answer has to be understood in the context of Ruby, where the only real source of parallelism is `fork(2)`, hence the natural way to write a server is an `accept` loop, which fits HTTP/1 very well, but not HTTP/2.
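A toy illustration of that `accept` + `fork(2)` pattern (no real HTTP parsing, a single request, MRI on a Unix-like system assumed):

```ruby
require "socket"

# Fork-per-connection accept loop: one child process per request fits
# HTTP/1's one-request-per-connection model naturally. Port 0 lets the
# OS pick a free port.
server = TCPServer.new("127.0.0.1", 0)
port   = server.addr[1]

acceptor = Thread.new do
  client = server.accept
  pid = fork do
    # drain the request line and headers, then answer
    while (line = client.gets) && line != "\r\n"; end
    client.write "HTTP/1.1 200 OK\r\ncontent-length: 2\r\n\r\nok"
    client.close
  end
  client.close # parent's copy of the accepted socket
  Process.wait(pid)
end

sock = TCPSocket.new("127.0.0.1", port)
sock.write "GET / HTTP/1.1\r\nhost: localhost\r\n\r\n"
response = sock.read
sock.close
acceptor.join
```

A real pre-forking server would fork workers up front and have each one `accept` in a loop, but the shape is the same: one process, one connection, one request at a time.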
There is a gem that implements lightweight threads[0], and there is an HTTP/2 server that seems to abstract things out[1]. Your point probably still holds in the context of Ruby + async + HTTP/2; but then it's not HTTP/2's fault, but rather Ruby's, for not having a better concurrency story like, say, Golang's.
The Ruby concurrency story is fine, the problem is parallelism. I have a whole list of posts about all that.
> it's not HTTP/2's fault, but rather Ruby
My post is meant to be read primarily in the context of Ruby, as the intro clearly explains. I'm not the one who posted it here; it really isn't intended for the HN audience. I would never submit my posts here.
Many of my points are more general than just Ruby-centric, but yes, if your stack of choice has very good support for HTTP/2, I'm not saying not to use it in your DC.
My point is that as a Ruby user, there isn't much reason to lament over the lack of HTTP/2 support in Puma or some other servers.
An h2 proxy usually wouldn't proxy through the HTTP/2 connection; it would instead accept h2, then load-balance each request to a backend over an h2 (or h1) connection.
The difference is that you have an h2 connection to the proxy, but everything past that point is up to the proxy's routing. End-to-end h2 would be more like a WebSocket (which runs over HTTP CONNECT), where the proxy is just proxying a socket (often with TLS unwrapping).
> An h2 proxy usually wouldn't proxy through the HTTP/2 connection; it would instead accept h2, then load-balance each request to a backend over an h2 (or h1) connection.
Each connection needs to keep state for all processed requests (the HPACK dynamic header table), so all requests for a given connection need to be proxied through the same connection. Not sure I got what you meant, though.
Apart from that, I think the second sentence of my comment makes it clear there is no smuggling as long as the connection before/past the proxy is HTTP/2, and it's not downgraded to HTTP/1. That's all I meant.
Something not mentioned: web browsers limit the number of connections per domain to 6. With HTTP/2 they will use a single connection for multiple concurrent requests.
I'll second this. There are a lot of good managers that care more about the product than some team metrics. Same for coworkers that care about improving. Don't let one bad manager define how you'll do things in your next job.
Look-arounds can be implemented quite easily in quadratic time for unbounded expressions (i.e., containing `+` or `*`), and in linear time for bounded expressions. And I suspect they can be implemented in (super)linear time in general by matching them in parallel to the NFA.
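A toy illustration of the bounded case (the function name is made up): for a fixed-width lookahead like `(?=bc)`, the assertion can be checked directly at each candidate position in constant time, so the whole scan stays linear:

```ruby
# Implement /a(?=bc)/ by hand: at each position where the main pattern
# matches an "a", check the fixed-width lookahead "bc" directly. Each
# check is O(1) because "bc" has bounded width, so the scan is O(n).
def match_a_followed_by_bc(text)
  positions = []
  text.each_char.with_index do |ch, i|
    next unless ch == "a"
    positions << i if text[i + 1, 2] == "bc" # bounded lookahead check
  end
  positions
end

hits = match_a_followed_by_bc("abc xabc ab")
```

For an unbounded lookahead the per-position check can itself take O(n), which is where the quadratic bound comes from.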