Sorry about the website performance issues; I've disabled the animation, which should help.
Why would I choose this instead? That isn't clear to me yet.
Pusher solves the problem of scaling in a different way, which is by giving you an HTTP endpoint to send events to. It’s better for large broadcast groups (e.g. sending people sports scores) but doesn’t run any compute for you.
Currently most people are paying the $25 flat rate; for that we bump the per-backend memory limit to 2GB, give them a soft limit of 20 concurrent backends, allow those backends to run for up to 24h, and enable eager image pushing for faster start times. The pricing will become more sophisticated over time as we deal with a wider variety of usage patterns.
The bring-your-own-compute model is a bit clearer, because it's simpler for us to model on the cost side. It has a $25/month base fee plus $10 per “server month” of compute connected to the control plane. With bring-your-own-compute there is no cost or limit on the backends that run, since they run on the customer's own hardware.
Accidentally closing a browser tab seems like it would end the session and wipe out my whole Jupyter session, variables and all.
When backends are spawned, they remain alive for a period of time without a connection (5 minutes by default). So if you accidentally close a tab but reopen it within 5 minutes, the kernel is still there.
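To make the behavior concrete, here's a minimal sketch of that kind of grace-period logic (this is an illustration of the idea, not Jamsocket's actual implementation; the class and method names are hypothetical):

```typescript
// Hypothetical sketch of a connection grace period: a backend is only
// considered expired once it has had zero connections for longer than
// the grace period. Timestamps are passed in as milliseconds.
class BackendLifecycle {
  private connections = 0;
  private lastDisconnectAt: number | null = null; // null while clients are connected

  constructor(private gracePeriodMs: number = 5 * 60 * 1000) {}

  connect(): void {
    this.connections += 1;
    this.lastDisconnectAt = null; // any live connection cancels the countdown
  }

  disconnect(now: number): void {
    this.connections -= 1;
    if (this.connections === 0) {
      this.lastDisconnectAt = now; // start the grace-period countdown
    }
  }

  shouldShutDown(now: number): boolean {
    return (
      this.connections === 0 &&
      this.lastDisconnectAt !== null &&
      now - this.lastDisconnectAt > this.gracePeriodMs
    );
  }
}
```

So closing a tab starts the countdown, and reconnecting within the window cancels it, which is why the kernel state survives.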
I've seen a lot of bad engineering go into solving that problem, this seems like a better way.
WebSockets also don’t guarantee that messages are delivered (exactly once) across network hiccups. And browsers limit how many can be open at once per domain across all tabs (IIRC). In the end, I think it makes sense for them to be the main transport for a real-time app, but they are just one part of creating a conceptually simple model for real-time communication between client and server.
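One common pattern apps layer on top of WebSockets to get at-least-once delivery across reconnects is to number outgoing messages, buffer them until the server acknowledges them, and replay anything unacked after a reconnect. A rough sketch (names here are illustrative, not any particular library's API):

```typescript
// Number outgoing messages, buffer until acked, replay after reconnect.
type Outgoing = { seq: number; payload: string };

class ReliableSender {
  private nextSeq = 0;
  private unacked: Outgoing[] = [];

  // Called for every message the app sends; `transmit` is whatever
  // actually writes to the socket (e.g. ws.send).
  send(payload: string, transmit: (msg: Outgoing) => void): Outgoing {
    const msg = { seq: this.nextSeq++, payload };
    this.unacked.push(msg);
    transmit(msg);
    return msg;
  }

  // Called when the server acks everything up to and including `seq`.
  ack(seq: number): void {
    this.unacked = this.unacked.filter((m) => m.seq > seq);
  }

  // Called after a reconnect: replay anything the server never acked.
  // Returns the number of messages replayed.
  resend(transmit: (msg: Outgoing) => void): number {
    for (const m of this.unacked) transmit(m);
    return this.unacked.length;
  }
}
```

The server deduplicates by sequence number, which upgrades at-least-once into effectively-once processing.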
Does Jamsocket have or use a client socket library?
We don't provide a client socket library, we just expose an HTTPS endpoint that also supports WebSockets. So you can use something like socket.io to provide fallback options, reconnects, etc. Some customers don't even use WebSockets at all and just hit HTTP endpoints.
Sometimes I explain it to people from the Erlang/Elixir world as “GenServer-as-a-service”.
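For those unfamiliar with the analogy: a GenServer is a long-lived process that owns its own state and handles messages one at a time, either synchronously (call) or fire-and-forget (cast). A toy sketch of that mental model (illustrative only, not an actual API):

```typescript
// Minimal GenServer-style stateful process: state lives inside the
// process and is only touched through call/cast-style messages.
class CounterServer {
  private state = { count: 0 };

  // Like GenServer.call/2: a synchronous request that returns a reply.
  call(): number {
    return this.state.count;
  }

  // Like GenServer.cast/2: a fire-and-forget message that updates state.
  cast(msg: { add: number }): void {
    this.state = { count: this.state.count + msg.add };
  }
}
```

The point of the analogy is that each backend, like a GenServer, is a stateful process you can address and send messages to, rather than a stateless function invocation.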
By contrast, Jamsocket gives you a place to deploy your own WebSocket server, which means you can write your own data layer. This gives you the control to do more advanced things, like enforcing invariants in the data structure.
I think of it as akin to using Firebase vs. using Postgres -- there is a place for both, and Liveblocks is a solid product, it just depends how much control you want vs. how much you want to lean on a managed service.
With the production plan we increase the per-backend memory limit to 2GB, raise the concurrent-backend limit to 20 (it's a soft limit), extend the maximum backend runtime to 24h, and eagerly push images rather than lazily pulling them, for faster start times. If you're open to chatting about your use case, feel free to reach out to the email on my HN profile.
We tested on everything we could get our hands on, but didn't have any older MacBooks that worked. I'll see what I can do.
Edit: I disabled the animation, I hope that helps.
Might be because I'm running Firefox?
We are working on getting it subsecond for most images, but we're not there yet.
A typical FaaS is stateless between invocations. We have an explicit "spawn" step because each spawn produces a new server process with its own state and own DNS hostname.
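Here's a sketch of what that spawn model implies: each spawn yields a distinct server identity with its own hostname and isolated state, rather than a stateless invocation. (This is a hypothetical illustration of the concept; the names and hostname scheme are made up, not the real API.)

```typescript
// Each spawn produces a new backend with its own routable hostname and
// its own state that persists across requests to it.
type Backend = { id: string; hostname: string; state: Map<string, unknown> };

class Spawner {
  private counter = 0;

  spawn(service: string): Backend {
    const id = `${service}-${this.counter++}`;
    return {
      id,
      // Every backend gets a hostname of its own, so clients can keep
      // addressing the same process.
      hostname: `${id}.backends.example.com`,
      // ...and per-backend state, unlike a stateless FaaS invocation.
      state: new Map(),
    };
  }
}
```

Two spawns of the same service are two independent processes: they get different hostnames, and writing state into one never shows up in the other.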