We're writing our consumers in Go using an impressively well-written AMQP library (https://github.com/streadway/amqp) and some custom framework code. The framework code takes care of retries and acking, so the consumers are very simple (in: an envelope, out: fail/done/retry-later). Each worker runs as its own binary. I'm currently adding standardized variable exports for monitoring and ephemeral queue-based reply capability.
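To make that contract concrete, here's a minimal sketch of what such a consumer interface could look like. All of the names here (Outcome, Envelope, Consumer) are illustrative, not the actual framework's API:

```go
package main

import "fmt"

// Outcome is what a consumer returns for each delivered message.
type Outcome int

const (
	Done       Outcome = iota // ack the message
	Fail                      // reject, don't retry
	RetryLater                // re-publish to a delayed queue
)

// Envelope wraps the raw message body with job metadata.
type Envelope struct {
	Body         []byte
	AttemptCount int
}

// Consumer is the entire contract a worker binary has to implement;
// the framework handles acking and re-queueing based on the Outcome.
type Consumer func(e Envelope) Outcome

func main() {
	// A trivial consumer: keep retrying, but give up after 3 attempts.
	var c Consumer = func(e Envelope) Outcome {
		if e.AttemptCount >= 3 {
			return Fail
		}
		return RetryLater
	}
	fmt.Println(c(Envelope{AttemptCount: 1}) == RetryLater) // prints true
}
```

The appeal of this shape is that a worker contains zero AMQP plumbing; it's just a function from envelope to outcome.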
On the PHP side, I found none of the PHP AMQP libraries to be worth using. They all have compilation problems or bugs or seem to be unmaintained. Instead, I'm using RabbitMQ's STOMP plugin w/default login and then wrote a simple TCP client in PHP using persistent connections. The client supports timeouts and multiple backends.
So far, this is working really well. Anybody doing their own RPC implementations using HTTP should take a look at using something like RabbitMQ. It solves a host of real-world problems and introduces much flexibility into your architecture.
I take care of this one https://github.com/videlalvaro/php-amqplib and it's very well maintained and used by many companies in production.
Also, it's a pure PHP library, so there's no need to compile anything. It's been installed 72,000+ times already: https://packagist.org/packages/videlalvaro/php-amqplib
For the sole purposes of producing messages, STOMP and a few dozen lines of PHP seems more attractive to me than using the full library (plain text protocol, easy persistent connections, simple retry behaviors, and no surprises). I'll use your library when we add PHP consumers, though, because consumption is harder to get right.
Thanks.
Totally agree with your last paragraph, introducing a message queue has solved a lot of problems for us.
RabbitMQ has two features that make this easy. The first is "message-ttl", which tells RabbitMQ to discard messages from a queue after a specified number of milliseconds. The second is dead lettering: messages that are discarded from a queue can be routed automatically to a dead letter exchange, and from there to whatever queues are bound to it.
When we have a job that we wish to "retry later", the framework re-queues the message in a secondary queue with a name derived from the original name. For example, if the original queue was "prod-emailer", the derived queue name might be "prod-emailer-1m", indicating that the contents of this queue are messages originally bound for prod-emailer but delayed by 1 minute.
This delayed queue is configured with a x-dead-letter-exchange of the original exchange, x-dead-letter-routing-key of the original routing key, and x-message-ttl of 60,000. With this configuration, RabbitMQ handles the timeout automatically. When the message expires from the -1m queue, RabbitMQ sends it back to the exchange and it gets routed to the intended queue by the pre-existing bindings.
The framework expects all messages to be in an "envelope" of JSON, which lets us annotate the jobs. When we mark a job for retry, we also increment an "attempt-count" attribute in the JSON. The workers can then implement their own "retry N times" policies.
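As a sketch (the field names here are illustrative, not our actual schema), the envelope and the retry bump look roughly like:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Envelope annotates the job payload with retry bookkeeping.
type Envelope struct {
	AttemptCount int             `json:"attempt-count"`
	Job          json.RawMessage `json:"job"` // opaque worker payload
}

// bumpAttempt increments attempt-count before the message is
// re-published to the delayed queue; the worker itself decides
// when attempt-count is high enough to give up.
func bumpAttempt(raw []byte) ([]byte, int, error) {
	var e Envelope
	if err := json.Unmarshal(raw, &e); err != nil {
		return nil, 0, err
	}
	e.AttemptCount++
	out, err := json.Marshal(e)
	return out, e.AttemptCount, err
}

func main() {
	msg := []byte(`{"attempt-count":1,"job":{"to":"a@example.com"}}`)
	out, n, _ := bumpAttempt(msg)
	fmt.Println(n, string(out))
}
```

Keeping the count in the message body (rather than broker state) means any worker in any language can enforce its own retry limit.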
I haven't thought about how this would work if we were using topic exchanges. We are only using direct exchanges at the moment.
From the repo readme: "Promiscuous is a publisher-subscriber framework for easily replicating data across your Ruby applications."