Like everything, it depends on the application.
To do this right, you use row-level locking, e.g. SELECT ... FOR UPDATE SKIP LOCKED [1], and hopefully you're already using idle_in_transaction_session_timeout to deal with total consumer failures. A properly designed queue in Postgres runs more or less in parallel, and supports really fantastic features like atomic row locks across all resources needed to serve the queue request.
If you need extremely long consumer timeouts, it's also totally fine to track state on the job row itself in addition to the row-level locks.
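A minimal sketch of that pattern (the table and column names here are illustrative, not from any particular library): each consumer claims one row under FOR UPDATE SKIP LOCKED, so parallel workers never block on each other's in-flight jobs, and a status column carries durable job state for the long-timeout case. If a consumer dies mid-transaction, idle_in_transaction_session_timeout eventually kills the session and releases the row lock.

```
-- Hypothetical jobs table; adapt to taste.
CREATE TABLE jobs (
  id         bigserial PRIMARY KEY,
  payload    jsonb NOT NULL,
  status     text  NOT NULL DEFAULT 'pending',
  claimed_at timestamptz
);

-- Consumer loop body: claim one unclaimed job, skipping
-- rows currently locked by other workers.
BEGIN;

SELECT id, payload
  FROM jobs
 WHERE status = 'pending'
 ORDER BY id
 LIMIT 1
   FOR UPDATE SKIP LOCKED;

-- ...do the work, then record completion on the row itself...
UPDATE jobs SET status = 'done' WHERE id = $1;

COMMIT;
```

Holding the row lock for the duration of the transaction is what makes the claim atomic; the status update is what survives if you need job state to outlive any single session.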
[0] - https://www.crunchydata.com/blog/message-queuing-using-nativ...

[1] - https://www.2ndquadrant.com/en/blog/what-is-select-skip-lock...
The point about deployment pipelines was just that some shops explicitly load test against a multiple of prod-like data to find that sort of issue before it reaches prod. Running those tests for a non-negligible amount of time matters, though, to catch qualitative shifts in RDBMS behavior as it approaches a steady (or runaway) state after any software change.