https://store.steampowered.com/app/486310/Meadow/
We've had a total of 350,000 players over 6 years, the backend out-scales every other multiplayer server out there, and it's open source:
https://github.com/tinspin/fuse
You don't need HTTP/2 to make SSE work well. Actually the HTTP/2 TCP head-of-line issue and all the workarounds for that probably make it harder to scale without technical debt.
Can you explain what you mean here? What was your peak active user count, what was peak per server instance, and why you think that beats anything else?
That said, it's very cool. Do you have a development blog for Meadow?
No, no dev blog, but I'll tell you some things that were incredible during that project:
- I started the fuse project 4 months before I set foot in the Meadow project office (we had like 3 meetings during those 4 months just to touch base on the vision)! This is a VERY good way of making things smooth; you need to give tech/backend/foundation people a head start of at least 6 months in ANY project.
- We spent ONLY 6 weeks (!!!) implementing the entire game's multiplayer features, because I was 100% ready for the job after those 4 months. Not a single hiccup...
- Then for 7 months they finished the game client without me and released without ANY problems (I came back to the office that week, and that's when I solved the anti-virus/proxy caching/buffering problem!).
I think Meadow is the only MMO in history so far to have ZERO breaking bugs on release (we just had one UTF-8 client bug that we patched after 15 minutes and nobody noticed except the poor person that put a strange character in their name).
Not MIT then. The beauty of MIT is that there is no stuff.
That said, if you, like SSE on HTTP/1.1, use 2 sockets per client (breaking the RFC: one for upstream and one for downstream), you are golden, but then why use HTTP/2 in the first place?
HTTP/2 creates more problems than it solves, and unfortunately so does HTTP/3, until their protocols fossilize. That is the real feature of a protocol: becoming stable enough that everyone can rely on things working.
In that sense HTTP/1.1 is THE protocol of human civilization until the end of times; together with SMTP (the oldest protocol of the bunch) and DNS (which is centralized and should be replaced btw).
My license is messy but if you search for "license" on the main github page you'll eventually find MIT + some ugly modifications I made.
https://github.com/open-wa/wa-automate-nodejs
There should be some sort of support group for those of us trying to monetize (sans donations) our open source projects!
We probably need a new license though because piggybacking on MIT (or any other license) like I try to do is rubbing people the wrong way.
But law and money are my least favourite pastimes, so I'm going to let somebody else do it first, unless somebody is willing to force this change by buying a license and asking for a better license text.
Your source-available projects. Nothing wrong with licensing your work that way (in the sense that you can make that choice, not in the sense that I think it's a good idea), but please don't muddle the term "open source".
You can't say that and not say more about it, haha. Please expand on this?
Also, I'm a Fastmail customer and appreciate the nimble UI, thanks!
Before that... yeah, the Firefox dev tools were not very helpful for SSE.
It's a beautifully simple, elegant, lightweight push option that works over standard HTTP. The main gotcha for maintaining long-lived connections is that server and clients should implement their own heartbeat so they can detect failed connections and auto-reconnect; that was the only reliable way we found to detect and resolve broken connections.
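A minimal sketch of that heartbeat idea on the client side. The names (`HeartbeatWatchdog`, the interval constants) and the reconnect wiring are illustrative, not from any particular library:

```javascript
// Watchdog for an SSE connection: the server is expected to emit something
// (a real event or a heartbeat) at least every HEARTBEAT_INTERVAL_MS.
// If nothing arrives within the grace window, treat the connection as dead.
const HEARTBEAT_INTERVAL_MS = 15000;
const GRACE_MS = 5000;

class HeartbeatWatchdog {
  constructor(now = Date.now()) {
    this.lastBeat = now;
  }
  beat(now = Date.now()) {
    // Call this from the EventSource message handler on every event.
    this.lastBeat = now;
  }
  isStale(now = Date.now()) {
    // Poll this on a timer; true means "tear down and reconnect".
    return now - this.lastBeat > HEARTBEAT_INTERVAL_MS + GRACE_MS;
  }
}
```

Wired up, this looks something like `es.onmessage = () => dog.beat()` plus a `setInterval` that closes and recreates the `EventSource` whenever `dog.isStale()` returns true.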
That sounds like a total nightmare!
Also ease of use doesn’t really convince me. It’s like 5 lines of code with socket.io to have working websockets, without all the downsides of sse.
Server-sent events appear to me to just be chunked transfer encoding [0], with the data structured in a particular way (at least from the perspective of the server) in this reference implementation (tl;dr: it's a stream):
https://gist.github.com/jareware/aae9748a1873ef8a91e5#file-s...
[0]: https://en.wikipedia.org/wiki/Chunked_transfer_encoding
Which seems to be what you need to send 'headers' after a chunked response.
You can kludge it with fetch(...) and the body stream
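A rough sketch of that kludge: read the response body as a stream, decode it, and split out the fields yourself. This toy parser only handles `data:` and `event:` (a real `EventSource` also tracks `id:`, `retry:`, etc.):

```javascript
// Incremental parser for the text/event-stream wire format.
// Feed it decoded text chunks; it returns the complete events found so far
// (events are terminated by a blank line).
function makeSSEParser() {
  let buffer = "";
  return function feed(chunk) {
    buffer += chunk;
    const events = [];
    let idx;
    while ((idx = buffer.indexOf("\n\n")) !== -1) {
      const raw = buffer.slice(0, idx);
      buffer = buffer.slice(idx + 2);
      const event = { event: "message", data: [] };
      for (const line of raw.split("\n")) {
        if (line.startsWith("data:")) event.data.push(line.slice(5).trimStart());
        else if (line.startsWith("event:")) event.event = line.slice(6).trimStart();
        // Lines starting with ":" are comments (often used as heartbeats).
      }
      event.data = event.data.join("\n");
      events.push(event);
    }
    return events;
  };
}
```

With `fetch` you'd pump chunks in via `response.body.getReader()` and a `TextDecoder`, calling `feed(decoder.decode(value, { stream: true }))` for each read.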
1. <https://developer.mozilla.org/en-US/docs/Web/API/EventSource>
With current day APIs, including streaming response bodies in the fetch API, SSE would probably not have been standardized as a separate browser API.
You have to send "Content-Type: text/event-stream" just to make them work.
And you keep the connection alive by sending "Connection: keep-alive" as well.
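That's really all there is to the wire format; the server side is those headers plus plain text frames. A minimal sketch (the `sseFrame` helper name is my own):

```javascript
// Serialize one event in text/event-stream framing. The response carrying
// these frames needs "Content-Type: text/event-stream" and typically
// "Cache-Control: no-cache"; on HTTP/1.1 the connection stays open.
function sseFrame(data, { event, id } = {}) {
  let out = "";
  if (id !== undefined) out += `id: ${id}\n`;
  if (event !== undefined) out += `event: ${event}\n`;
  // Multi-line payloads become multiple data: lines.
  for (const line of String(data).split("\n")) out += `data: ${line}\n`;
  return out + "\n"; // blank line terminates the event
}
```

In a Node handler you'd do `res.writeHead(200, { "Content-Type": "text/event-stream", "Cache-Control": "no-cache" })` and then `res.write(sseFrame(payload))` whenever there's something to push.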
I've never had any issues using SSEs.
You can also implement websockets in 5 lines (less, really 1-3 for a basic implementation) without socket.io. Why are you still using it?
Read my comment below about that.
You simply get stuff like auto-reconnect and graceful failover to long polling for free when using socket.io
> SSE is subject to limitation with regards to the maximum number of open connections. This can be especially painful when opening various tabs as the limit is per browser and set to a very low number (6).
https://ably.com/blog/websockets-vs-sse
SharedWorker could be one way to solve this, but lack of Safari support is a blocker, as usual. https://developer.mozilla.org/en-US/docs/Web/API/SharedWorke...
also, for websockets, there are various libs that handle auto-reconnects
https://github.com/github/stable-socket
https://github.com/joewalnes/reconnecting-websocket
https://dev.to/jeroendk/how-to-implement-a-random-exponentia...
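Rolling the backoff yourself is also only a few lines; a sketch of the randomized ("full jitter") variant, with arbitrary constants:

```javascript
// Randomized exponential backoff for reconnect attempts:
// attempt 0 -> up to 1s, attempt 1 -> up to 2s, ... capped at maxMs.
// Full jitter (multiplying by a random factor) avoids thundering-herd
// reconnects when a server restart drops every client at once.
function backoffDelay(attempt, { baseMs = 1000, maxMs = 30000, rand = Math.random } = {}) {
  const ceiling = Math.min(maxMs, baseMs * 2 ** attempt);
  return rand() * ceiling;
}
```

Typical usage: bump `attempt` on every failed connect, sleep `backoffDelay(attempt)` before retrying, and reset `attempt` to 0 once a connection succeeds.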
If you’re still using HTTP/1.1, then yes, this would be a problem.
maybe it was the inability to broadcast to multiple open SSE sockets from nodejs.
i should revisit.
https://medium.com/blogging-greymatter-io/server-sent-events...
Well, it's a non-problem: if you need more bandwidth than one socket in each direction can provide, you have much bigger problems than the connection limit, which you can just ignore.
in most cases this is not a concern, but in some cases it is.
First reason was that it was an array of connections you loop through to broadcast some data. We had around 2000 active connections and needed less than 1000 ms latency. With WebSocket, even though we faced connection drops, clients received data on time; with SSE it took many seconds to reach some clients. Since the data was time-critical, WebSocket seemed much easier to scale for our purposes. Another issue was that SSE is more like an idea you build on top of HTTP APIs, so it doesn't have as much support around it as WS. Things like rooms, client IDs etc. needed to be managed manually, which was quite a big task by itself. A few other minor reasons combined made us switch back to WS.
I think SSE will suit much better for connections where bulk broadcast is rare, like shared docs editing, or showing stuff like "1234 users are watching this product". And keep in mind that all this is coming from a mediocre full stack developer with 3 YOE only, so take it with a grain of salt.
I haven't observed any latency or scaling issues with SSE - on the contrary: in my ASP.NET Core projects, running behind IIS (with QUIC enabled), I get better scaling and throughput with SSE compared to raw WebSockets (and still-better when compared to SignalR), though latency is already minimal so I don't think that can be improved upon.
That said, I do prefer using the existing pre-built SignalR libraries (both server-side and client-side: browser and native executables) because the library's design takes away all the drudgery.
Sounds like the implementation you were using was introducing the latency.
One example where i found it to be not the perfect solution was with a web turn-based game.
The SSE was perfect to update gamestate to all clients, but to have great latency from the players point of view whenever the player had to do something, it was via a normal ajax-http call.
Eventually I had to switch to uglier websockets and keep connection open.
HTTP keep-alive wasn't that reliable.
With the downsides of HTTP/1.1 being used with SSE, websockets actually made a lot of sense, but in many ways they were a kludge that was only needed until HTTP/2 came along. As you said, communicating back to the server in response to SSE wasn’t great with HTTP/1.1. That’s before mentioning the limited number of TCP connections that a browser will allow for any site, so you couldn’t use SSE on too many tabs without running out of connections altogether, breaking things.
Very interesting! I honestly didn't know that, or even think about it like that! #EveryDayYouLearn :)
SSE streams are multiplexed into an HTTP/2 connection, so they can suffer from congestion caused by unrelated requests.
In contrast, HTTP/2 does not support websockets, so each websocket connection always gets its own TCP connection. Wasteful, but it ensures that no head-of-line blocking can occur.
So it might be that switching from SSE to websockets gave better latency behaviour, even though it had nothing to do with the actual technologies.
Of course, this issue should be solved anyway with HTTP3.
No new connection and no low-level connection (TCP, TLS) handshakes, but the server still has to parse and validate the http headers, route the request, and you'd probably still have to authenticate each request somehow (some auth cookie probably), which actually may start using a non-trivial amount of compute when you have tons of client->server messages per client and tons of clients.
(There's one person in this thread who is just ridiculously opposed to HTTP/2, but... HTTP/2 has serious benefits. It wasn't developed in a vacuum by people who had no idea what they were doing, and it wasn't developed aimlessly or without real world testing. It is used by pretty much all major websites, and they absolutely wouldn't use it if HTTP/1.1 was better... those major websites exist to serve their customers, not to conspiratorially push an agenda of broken technologies that make the customer experience worse.)
The SSE connection limit is a nasty surprise once you run into it, it should have been mentioned.
True but the limit for websockets these days is in the hundreds, as opposed to 6 for regular HTTP requests.
There are some hacks to work around it though.
It's possible to detect that, and fall back to long polling. Send an event immediately after opening a new connection, and see if it arrives at the client within a short timeout. If it doesn't, make your server close the connection after every message sent (connection close will make AV let the response through). The client will reconnect automatically.
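That detection step can be sketched as a race between the server's probe event and a timeout (the function name and the timeout value here are made up):

```javascript
// Decide between SSE and long polling: open the stream, have the server
// send a probe event immediately, and see whether it arrives in time.
// If a buffering proxy or AV sits in the middle, the probe never shows up
// and we fall back to long polling.
function pickTransport(firstEventPromise, timeoutMs = 3000) {
  return Promise.race([
    firstEventPromise.then(() => "sse"),
    new Promise((resolve) => setTimeout(resolve, timeoutMs, "long-polling")),
  ]);
}
```

`firstEventPromise` would resolve from the `EventSource`'s first `onmessage`; on the "long-polling" outcome you close the stream and switch the server to close-after-every-message mode as described above.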
Or run:
while(true) alert("antivirus software is worse than malware")

My experience, now a bit dated, is that long polling is the only thing that will work 100% of the time.
At NodeBB, we ended up relying on websockets for almost everything, which was a mistake. We were using it for simple call-and-response actions, where a proper RESTful API would've been a better (more scalable, better supported, etc.) solution.
In the end, we migrated a large part of our existing socket.io implementation to use plain REST. SSE sounds like the second part of that solution, so we can ditch socket.io completely if we really wanted to.
Very cool!
Would you please elaborate on the challenges/disadvantages you've encountered in comparison to REST/HTTP?
As it turns out, while almost anyone can fire off a POST request, not many people know how to wire up a socket.io client.
The one thing I wish they supported was a binary event data type (mixed in with text events), effectively being able to send in my case image data as an event. The only way to do it currently is as a Base64 string.
$ ls -l PXL_20210926_231226615.*
-rw-rw-r-- 1 derek derek 8322217 Feb 12 09:20 PXL_20210926_231226615.base64
-rw-rw-r-- 1 derek derek 6296892 Feb 12 09:21 PXL_20210926_231226615.base64.gz
-rw-rw-r-- 1 derek derek 6160600 Oct 3 15:31 PXL_20210926_231226615.jpg
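That overhead in the listing above is inherent to Base64, which encodes every 3 input bytes as 4 ASCII characters, and it's easy to predict:

```javascript
// Predicted Base64 size: 4 output characters per 3 input bytes,
// with the final partial group padded out to a full 4 characters.
function base64Size(rawBytes) {
  return Math.ceil(rawBytes / 3) * 4;
}
```

For the 6,160,600-byte jpg above this predicts 8,214,136 bytes; the actual 8,322,217-byte file is presumably slightly larger because the encoder inserts line breaks.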
As an aside, Django with Gevent/Gunicorn does SSE well from our experience.
Ultimately what I did was run an SSE request and long polling image request in parallel, but that wasn’t ideal as I had to coordinate that on the backend.
Essentially just new EventSource(), text/event-stream header, and keep conn open. Zero dependencies in browser and nodejs. Needs no separate auth.
[0]: https://www.eclipse.org/paho/index.php?page=clients/js/index...
I made use of that in Lunar (https://lunar.fyi/#sensor) to be able to adjust monitor brightness based on ambient light readings from an external wireless sensor.
At first it felt weird that I have to wait for responses instead of polling with requests myself, but the ESP is not a very powerful chip and making one HTTP request every second would have been too much.
SSE also allows the sensor to compare previous readings and only send data when something changed, which removes some of the complexity with debouncing in the app code.
Personally I think it's a great solution for longer running tasks like "Export your data to CSV" when the client just needs to get an update that it's done and here's the url to download it.
https://github.com/tinspin/rupy/wiki/Comet-Stream
Old page, search for "event-stream"... Comet-stream is a collection of techniques of which SSE is one.
My experience is that SSE goes through anti-viruses better!
Hmm, another commenter says the opposite:
I also had no problems with HAProxy, it worked with websockets without any issues or extra handling.
So many alternatives to Hotwire want to use WebSockets for everything, even for serving HTML from a page transition that's not broadcast to anyone. I share the same sentiment as the author in that WebSockets have real pitfalls and I'd go even further and say unless used tastefully and sparingly they break the whole ethos of the web.
HTTP is a rock solid protocol and super optimized / well known and easy to scale since it's stateless. I hate the idea of going to a site where after it loads, every little component of the page is updated live under my feet. The web is about giving users control. I think the idea of push based updates like showing notifications and other minor updates are great when used in moderation but SSE can do this. I don't like the direction of some frameworks around wanting to broadcast everything and use WebSockets to serve HTML to 1 client.
I hope in the future Hotwire Turbo alternatives seriously consider using HTTP and SSE as an official transport layer.
[1]: https://twitter.com/dhh/status/1346095619597889536?lang=en
Anyone knows the rationale behind this limitation?
No support for compression
No support for HTTP/2 multiplexing
Potential issues with proxies
No protection from Cross-Site Hijacking
---
Is that true? The web never ceases to amaze.
I don't see why WebSockets should benefit from HTTP. Besides the handshake to setup the bidirectional channel, they're a separate protocol. I'll agree that servers should think twice about using them: they necessitate a lack of statelessness & HTTP has plenty of benefits for most web usecases
Still, this is a good article. SSE looks interesting. I host an online card game openEtG, which is far enough from real time that SSE could potentially be a way to reduce having a connection to every user on the site
1) More complex and binary, so you cannot debug them as easily, especially in production and especially if you use HTTPS.
2) The implementations don't parallelize the processing. With Comet-Stream + SSE you just need to find an application server that has concurrency and you are set to scale across all the machine's cores.
3) WebSockets still have more problems with Firewalls.
What I mean by that is client sends request, server responds in up to 2 minutes with result or a try again flag. Either way client resends request and then uses response data if provided.
Comet-stream and SSE will save you a lot of bandwidth and CPU!!!
Since IE7 is no longer used we can bury long-polling for good.
What worries me though is the trend of dismissal of newer technologies as being useless or bad and the resistance to change.
SSE runs over HTTP/3 just as well as any other HTTP feature, and WebTransport is built on HTTP/3 to give you much finer grained control of the HTTP/3 streams. If your application doesn't benefit significantly from that control, then you're just adding needless complexity.
They'll try to read the entire stream to completion and will hang forever.
1) Push a large amount of data on the pull (the comet-stream SSE never ending request) response to trigger the middle thing to flush the data.
2) Using SSE instead of just Comet-Stream since they will see the header and realize this is going to be real-time data.
We had a 99.6% success rate on the connection from 350,000 players all over the world (even satellite connections in the Pacific and modems in Siberia), which is a world record for any service.
You can likely configure your user agent to ignore site-specified fonts.
Upon further inspection, it looks like the actual code on the page is `!==` and `=>` but the font ("Fira Code") seems to be somehow converting those sequences of characters into a single symbol, which is actually still the same number of characters but joined to appear as a single one. I had no idea fonts could do that.
What are the benefits of SSE vs long polling?
The underlying mechanism is effectively the same: a long-running HTTP response stream. However, long-polling is commonly implemented as "silence" until an event comes in, followed by a new request to wait for the next event, whereas SSE sends you multiple events per request.
HAProxy supports RFC 8441 automatically. It's possible to disable it, because support in clients tends to be buggy-ish: https://cbonte.github.io/haproxy-dconv/2.4/configuration.htm...
Generally I can second recommendation of using SSE / long running response streams over WebSockets for the same reasons as the article.
Good performance, easy to use, easy to integrate.
Can you recommend some resources for learning SSE in depth?
The way I solve it is to send "noop" messages at regular intervals so that the socket write will return -1 and then I know something is off and reconnect.
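In SSE terms the cheapest noop is a comment frame (a line starting with `:`), which `EventSource` silently ignores. A Node-style sketch of that server side, where the failure surfaces in the write callback rather than as a -1 return (interval and wiring are illustrative):

```javascript
// Periodic no-op keepalive: SSE comment lines (leading ":") are ignored
// by clients, but attempting the write surfaces a dead socket so the
// server can drop the client and free its slot.
function startHeartbeat(res, onDead, intervalMs = 15000) {
  const timer = setInterval(() => {
    res.write(": noop\n\n", (err) => {
      if (err) {
        // Write failed: the peer is gone, stop the heartbeat.
        clearInterval(timer);
        onDead(err);
      }
    });
  }, intervalMs);
  return () => clearInterval(timer); // call to stop on normal close
}
```

As a bonus, the same frames double as the client-side heartbeat signal discussed elsewhere in this thread.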
* Start with SSE
* If you need to send binary data, use long polling or WebSockets
* If you need fast bidi streaming, use WebSockets
* If you need backpressure and multiplexing for WebSockets, use RSocket or omnistreams[1] (one of my projects).
* Make sure you account for SSE browser connection limits, preferably by minimizing the number of streams needed, or by using HTTP/2 (mind head-of-line blocking) or splitting your HTTP/1.1 backend across multiple domains and doing round-robin on the frontend.
[0]: https://rsocket.io/
That is wrong. Edit: Actually it seems correct (a JavaScript API limitation, not an SSE problem), but it's a non-problem if you pass that data in a query parameter instead and read it on the server.
However you are correct that if you’re not using JavaScript and connecting directly to the SSE endpoint via something else besides a browser client, nothing is preventing anyone from using custom headers.
[1] https://developer.mozilla.org/en-US/docs/Web/API/EventSource...