But, going the other way, I worked for over a decade on Goldman Sachs' SecDB system. It's a quirky steampunk alternative future that branched from our light cone around 1995. There's a globally distributed, eventually consistent NoSQL database tightly integrated with a data-flow, gradually-typed scripting language (and a very 1990s-feeling 16-color IDE). I'm sure in the late 1990s/early 2000s (before globally distributed NoSQL was popular and before gradual/dynamic typing had a resurgence) it was more like discovered alien technology than steampunk alternative future. (Also, with source code being executed from globally distributed immutable database snapshots, deployment is much nicer than anything else I've used to date. After release testing, set a database key to point to the latest snapshot, and you're deployed.)
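The snapshot-pointer deployment can be sketched in a few lines. This is a hypothetical illustration, not SecDB's actual API; the `db` dict, key names, and functions are all made up to show the shape of the idea: releasing means flipping one pointer, and executors resolve source through it.

```python
# Hypothetical sketch: "deployment" as a single pointer update.
# The db dict stands in for the globally replicated key-value store;
# all names here are illustrative, not SecDB's real interface.
db = {}

def release(snapshot_id: str) -> None:
    """Point the production code key at an immutable source snapshot."""
    db["/prod/source_snapshot"] = snapshot_id

def load_source(path: str) -> str:
    """Executors resolve source code through the snapshot pointer."""
    snapshot = db["/prod/source_snapshot"]
    return f"{snapshot}:{path}"  # i.e., fetch path from that immutable snapshot

release("snap-001")
assert load_source("risk/curve.py") == "snap-001:risk/curve.py"
```

Rolling back is the same one-key write, pointing at the previous snapshot.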
There's a service that watches the transaction log of your regional replica so that you can make long-poll HTTP requests that return when any change matching your filter is committed. (Edit: usually the HTTP result handler is used to invalidate specific memoized results in the data flow graph, letting lazy re-evaluation re-fetch the database records as needed.)
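The invalidation pattern described in that edit can be sketched like this. Everything here (`MemoNode`, the `db` dict) is an invented stand-in, not the real system: a memoized node caches a database read, the long-poll handler invalidates the specific node whose key changed, and lazy re-evaluation re-fetches on the next access.

```python
# Illustrative sketch of targeted invalidation driven by a change watcher.
# Names are hypothetical; the real system watches a replica's transaction log.
class MemoNode:
    def __init__(self, key, fetch):
        self.key, self.fetch = key, fetch
        self._value, self._valid = None, False

    def value(self):
        if not self._valid:              # lazy re-evaluation
            self._value = self.fetch(self.key)
            self._valid = True
        return self._value

    def invalidate(self):                # called by the long-poll handler
        self._valid = False

db = {"rates/usd": 5.25}                 # stand-in for the database
node = MemoNode("rates/usd", db.get)

assert node.value() == 5.25
db["rates/usd"] = 5.00                   # a matching change is committed...
assert node.value() == 5.25              # ...but the memoized value is stale
node.invalidate()                        # handler fires for the changed key
assert node.value() == 5.00              # re-evaluation re-fetches the record
```

The point is that only the affected nodes are invalidated; everything else in the graph keeps its memoized results.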
It makes a lot of sense for a financial risk system, where you end up calculating millions of slight variations on a scenario. The data flow model with aggressive memoization makes this sort of thing much cheaper.
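Here's a toy illustration of why memoization pays off for scenario grids (the numbers and function names are made up): a hundred spread scenarios all share the same discount curve, so the expensive intermediate is computed once rather than per scenario.

```python
from functools import lru_cache

calls = {"discount": 0}  # count how often the expensive step actually runs

@lru_cache(maxsize=None)
def discount_factor(tenor_years: int, rate_bp: int) -> float:
    calls["discount"] += 1
    return (1 + rate_bp / 10_000) ** -tenor_years

def portfolio_value(rate_bp: int, spread_bp: int) -> float:
    # The discount curve depends only on rate_bp, so spread-only
    # scenario variations reuse every memoized discount_factor.
    curve = sum(discount_factor(t, rate_bp) for t in range(1, 31))
    return curve * (1 - spread_bp / 10_000)

portfolio_value(500, 0)                  # base scenario
for spread in range(1, 101):             # 100 spread-shock scenarios
    portfolio_value(500, spread)

assert calls["discount"] == 30           # curve built once, not 101 times
```

With millions of slight variations, the shared sub-graph dominates, and the marginal cost of each extra scenario is only the part that actually changed.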
However, I saw plenty of systems where you'd attempt to write your request to the next key matching some regex (retrying with the next key if it already existed), and the request would contain some parameters plus the database key and/or filesystem path where the results should be written.
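That queue-over-database antipattern looks roughly like this sketch (the key scheme and `atomic_create` helper are invented for illustration; a real database would make create-if-absent a conditional write):

```python
# Sketch of the antipattern: enqueue by racing to claim the next free key.
db = {}  # stand-in for the shared database

def atomic_create(key: str, value) -> bool:
    """Create-if-absent; stands in for a conditional write primitive."""
    if key in db:
        return False
    db[key] = value
    return True

def enqueue_request(params: dict, result_key: str) -> str:
    """Write the request under the next /queue/req-NNN key, retrying on conflict."""
    n = 0
    while True:
        key = f"/queue/req-{n:03d}"
        if atomic_create(key, {"params": params, "result_key": result_key}):
            return key
        n += 1  # another writer claimed it; try the next key

k1 = enqueue_request({"scenario": "base"}, "/results/base")
k2 = enqueue_request({"scenario": "shock"}, "/results/shock")
assert (k1, k2) == ("/queue/req-000", "/queue/req-001")
```

Note that every enqueue under contention re-scans from zero, and consumers have to poll key ranges: exactly the work a real message queue would do for you.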
Under-experience with databases easily results in rewriting a database using a message queue/bus. Under-experience with message queues/busses easily results in rewriting a message queue/bus using a database.