This seems sort of okay if writes are self-delimiting and never corrupt, and synchronization can always be recovered at a fragment boundary.
I suppose it’s neat that one can write JSONL and get actual JSONL in the blobs. But this seems quite brittle if multiple writers write to one journal and one malfunctions (aside from possibly failing to write a delimiter, there’s no way to tell who wrote a record, and using only a single writer per journal seems to defeat the purpose). And getting, say, Parquet output doesn’t seem like it will happen in any sensible way.
> But this seems quite brittle if multiple writers write to one journal and one malfunctions (aside from possibly failing to write a delimiter, there’s no way to tell who wrote a record, and using only a single writer per journal seems to defeat the purpose).
Yes, writers are responsible for only ever writing complete delimited blocks of messages, in whatever framing the application wants to use.
Gazette promises to provide a consistent total order over a bunch of raced writes, to roll back broken writes (partial content followed by a connection reset, for example), to checksum content, and a host of other things. There's also a low-level "registers" concept which can be used to cooperatively fence off a capability to write to a journal from other writers.
But garbage in => garbage out, and if an application correctly writes bad data, then you'll have bad data in your journal. This is no different from any other file format under the sun.
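To make the contract concrete: here's a minimal sketch (plain JSONL framing assumed, not Gazette's API) of writers appending only complete delimited records, with a reader that resynchronizes at the next delimiter after a bad record:

```python
import json

def write_records(buf, records):
    # Writer appends only complete, newline-delimited records:
    # a record is either fully present in the journal or absent.
    for rec in records:
        buf.append(json.dumps(rec) + "\n")

def read_records(stream):
    # Reader resynchronizes at the next newline after a bad record,
    # so one corrupt write does not make later records unreadable.
    out = []
    for line in stream.split("\n"):
        if not line:
            continue
        try:
            out.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # skip garbage, resume at the next delimiter
    return out
```

A write that dies mid-record poisons only that record; the reader picks up again at the following newline.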
> there’s no way to tell who wrote a record
To address this comment specifically: while brokers are byte-oriented, applications and consumers are typically message oriented, and the responsibility for carrying metadata like "who wrote this message?" shifts to the application's chosen data representation instead of being a core broker concern.
Gazette has a consumer framework that layers atop the broker, and it uses UUIDs which carry producer and sequencing metadata in order to provide exactly-once message semantics atop an at-least-once byte stream: https://gazette.readthedocs.io/en/latest/architecture-exactl...
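Roughly, the idea is something like this sketch (an illustration of carrying producer identity and sequencing in the message itself, not Gazette's actual UUID scheme): each message names its producer and a monotonic sequence number, and a reader deduplicates replays from the at-least-once byte stream.

```python
import json
import uuid

PRODUCER_ID = str(uuid.uuid4())  # hypothetical per-writer identity

def envelope(seq, payload):
    # Each message carries its producer and a monotonic sequence
    # number, so readers can attribute records to a writer and
    # detect duplicates.
    return json.dumps({"producer": PRODUCER_ID, "seq": seq, "data": payload})

def dedupe(messages):
    # At-least-once delivery becomes exactly-once processing by
    # tracking the highest sequence number seen per producer.
    seen = {}
    out = []
    for raw in messages:
        m = json.loads(raw)
        if m["seq"] > seen.get(m["producer"], -1):
            seen[m["producer"]] = m["seq"]
            out.append(m["data"])
    return out
```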
Hi!
> if an application correctly writes bad data, then you'll have bad data in your journal. This is no different from any other file format under the sun.
In a journal that delimits itself, a bad write corrupts only that write (and anything depending on it) — it doesn’t make the next message unreadable. I’m not sure how I feel about this.
I maintain a journal-ish thing for internal use, and it’s old and crufty and has all manner of design decisions that, in retrospect, are wrong. But it does strictly separate writes from different sources, and each message has a well defined length.
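For comparison, the "each message has a well defined length" property usually means length-prefix framing, something like this sketch (a generic illustration, not the internal format described above):

```python
import struct

def frame(payload: bytes) -> bytes:
    # Length-prefix framing: a 4-byte big-endian length, then the payload.
    return struct.pack(">I", len(payload)) + payload

def unframe(buf: bytes):
    # Every message has a well-defined length; a truncated tail is
    # detected and dropped rather than misread as data.
    out, i = [], 0
    while i + 4 <= len(buf):
        (n,) = struct.unpack_from(">I", buf, i)
        if i + 4 + n > len(buf):
            break  # incomplete trailing record
        out.append(buf[i + 4 : i + 4 + n])
        i += 4 + n
    return out
```

The trade-off versus delimiter-based framing: a corrupted length field desynchronizes everything after it, whereas a delimiter lets you rescan for the next boundary.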
Also, mine supports compressed files as its source of truth, which is critical for my use case. It looks like Gazette has a way to post process data before it turns into a final fragment — nifty. I wonder whether anyone has rigged it up to produce compressed Parquet files.
The choice of message framing is left to the writers/consumers, so there's also nothing preventing you from using a message framing that you like better. Similarly, there's nothing preventing you from adding metadata that identifies the writer. Having this flexibility can be seen as either a benefit or a pain. If you see it as a pain and want something that's more high-level but less flexible, then you can check out Estuary Flow, which builds on Gazette journals to provide higher-level "Collections" that support many more features.
Basically, a pipe is just a collection of WITH statements with some template processing.
It's not heavily used yet, but Rust has a bunch of fairly high-visibility io_uring efforts. The situation feels similar to HTTP/3, where the problem is figuring out which one to pick. https://github.com/tokio-rs/tokio-uring https://github.com/bytedance/monoio https://github.com/DataDog/glommio
Alas, libuv (which powers Node.js) shipped io_uring support but later disabled it. The episode seems to have significantly worn out the original author to boot. https://github.com/libuv/libuv/pull/4421#issuecomment-222586...
Why do I need io_uring? Because it sounds awful and unhackerly to settle for living in a lesser, worse world.
Any plans to support websocket?
https://gazette.readthedocs.io/en/latest/brokers-tutorial-in...
I'm working on a streaming audio thing where keeping latency low is a priority. I actually think I'll try Gazette. I just saw it now, and it was one of those moments where I'd gone to Hacker News to waste time, and instead found quite exactly what I've been wanting in so many ways.
I'll use it for Ogg/Opus media streams, transcription results, chat events, LLM inferences...
I really like the byte-indexed append-only blob paradigm backed by object storage. It feels kind of like Unix as a distributed streaming system.
Other streaming data gadgets like Kafka always feel a bit uncomfortable and annoying to me with their idiosyncratic record formats and topic hierarchies and whatnot... I always wanted something more low level and obvious...
This has happened so many times for me that I don't consider the time "wasted". I try to separate the a) "this is interesting personally" and b) "this is interesting professionally" threads, and keep a bunch of open tabs for (a) that I can "read later".
But the items in (b) I read "now" and consider that to be work, not pleasure.