Are people using it for reliability, though? Are they running Redis in a mode that persists a journal to disk? If not, then if Redis crashes for any reason, you're in the same situation.
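(For reference, the journal mode in question is Redis's append-only file. A minimal redis.conf fragment enabling it might look like this; `everysec` is the common middle-ground setting, not the only option:)

```conf
# Log every write to an append-only file so queued data survives a crash
appendonly yes
# Fsync the journal once per second: bounded data loss, modest overhead
appendfsync everysec
```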
And Mastodon apparently uses a queue even for things like sending new-user registration emails. Why not just send the email from the request handler? Then if there's an error, you can tell the user in the response, instead of saying "okay, you should get an email" and having it go into the ether. I was under the impression this had something to do with not wanting to tie up the HTTP worker, so it can quickly get back to serving requests, but if the server can process requests concurrently, that's a non-issue.
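To make the inline alternative concrete, here's a minimal sketch (Python, all names hypothetical, not Mastodon's actual code) of a registration handler that sends the email synchronously, so a failure surfaces in the HTTP response instead of disappearing into a job queue:

```python
def send_welcome_email(address):
    # Stand-in for a real SMTP call; raises on failure.
    if "@" not in address:
        raise ValueError(f"bad address: {address}")
    return "sent"

def register(address):
    # Do the registration work, then send the email inline.
    # Any failure is reported in this response, not lost in a queue.
    try:
        send_welcome_email(address)
    except ValueError as e:
        return {"status": 400, "error": str(e)}
    return {"status": 201, "body": "registered; email sent"}
```

The user either gets a success that actually means the email went out, or an error they can act on immediately.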
Similarly, they have an ingest queue for updates pushed by other federated servers. But if processing is fast, why wouldn't they just handle the updates in the HTTP handler? You don't need a reliable queue there: if you crash mid-request, the other side never gets a success response, and it knows to retry.
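The retry-on-failure argument can be sketched like so (Python, hypothetical names; a toy stand-in for an inbox endpoint, not any real federation code). The key point is that the handler only acknowledges after the work is done:

```python
def handle_inbox(activity, apply_update):
    # Process the federated update inside the handler, before replying.
    # If we crash or fail here, the sender never sees a 2xx and retries,
    # so no durable local queue is needed.
    try:
        apply_update(activity)
    except Exception:
        return 500  # sender will retry delivery later
    return 202      # acknowledged only after the update was applied
```

The delivery retry loop on the sending server effectively *is* the reliable queue, so duplicating it locally buys little.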