That’s not my experience. In fact I’d say fat events add coupling because they create an invisible (from the emitter) dependency on the event body, which becomes ossified.
So I’d say the opposite: thin events reduce coupling. Sure, the receiver might call an API and that creates coupling with the API. But receivers are also free to call or not call any other API they want. What if they don’t care about the body of the object?
So I’m on team thin. Every time I’ve been tempted by the other team, I’ve regretted it. It’s also in my experience a lot more difficult to version events than it is to version APIs, so reducing their surface area also solves other problems.
You make a statement in the first sentence, and in the next sentence produce evidence ... that the statement is wrong. And, YMMV.
It is my experience that thin events add coupling. If service B receives an event, and wants to process it ASAP (i.e. near real time) and so calls back over http to Service A for the details, then
a) There is additional latency for an HTTP call, plus time variance: even if the average latency of an HTTP round-trip is fine, the P99 might be bad.
b) You're asking for occasional "eventual consistency" trouble when A's state lags behind, or has moved on ahead of, the event.
c) Worst of all: when Service A is down or unreachable, Service B is unable to do work. Service B's uptime must be <= Service A's uptime. You have coupled their reliability, and if Service B is identified as mission-critical, then you have the choice of either making Service A equally critical, or decoupling them, e.g. with "fat events".
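The reliability coupling in point c) can be made concrete with a small sketch. This is hypothetical code (the service names, fields, and `fetch_order` callback are all assumptions for illustration): a thin-event consumer must call back to service A for the details, so A's availability bounds B's ability to make progress, while a fat-event consumer can proceed from the event body alone.

```python
# Hypothetical sketch: thin vs fat event handling in service B.

def process(order):
    # placeholder for whatever work service B actually does
    return f"shipped {order['sku']}"

def handle_fat_event(event):
    # the event body carries the full order; no call back to A is needed
    return process(event["order"])

def handle_thin_event(event, fetch_order):
    # the event carries only an id; fetch_order stands in for the HTTP
    # round-trip to service A (extra latency, and an extra failure mode)
    try:
        order = fetch_order(event["order_id"])
    except ConnectionError:
        return "retry-later"  # B cannot make progress while A is down
    return process(order)
```

The fat handler never touches service A, which is exactly the decoupling-of-reliability argument: B keeps working through an outage of A.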
I don't believe that it's accurate to say "receivers are also free to call or not call...": it's not choosing a flavor of ice cream; you make the calls that the work at hand _needs_.
If you find that you never need to call back to service A then yes, "thin events" would suit your case better. That has not been my experience.
It's fair that event data format versioning is a lot of work with fat events - nothing is without downside. But in your case, do you actually have a "dependency on the event body"? All of it? If a thin event is all that you need, then you depend on a couple of IDs in the event body, and not the rest. JSON parsing is very forgiving of added/removed fields; you can ignore the parts of a fat event that you don't care about.
My first sentence was quoting from the article, then I refute the article. Sorry if that wasn’t clear.
Re your point a), yes, I agree that in this case you'd send the contents in the body, but then I'd tend to call it stream processing rather than event processing. I admit this might seem like splitting hairs, but I do feel that there's a difference between events and data distribution, and I personally find the data distribution pattern tends to be a lot more specialised.
Re b), it’s just an assumption that the receiver needs the version of data in the message, rather than the latest version. So I don’t think this is a strong argument for fat events.
Re c), again, it’s an assumption that the receiver needs the exact data provided in the event body; but I’ve found that, except in very simple cases, it’s very difficult to efficiently create event bodies that contain everything that all receivers are going to need. Maybe the receiver needs to collate a bunch more data, in which case the problem persists regardless of fat or thin, or maybe it just clears a local cache, in which case the problem is deferred until the data is needed and you probably have other things to worry about then anyway.
> I don't believe that it's accurate to say "receivers are also free to call or not call..." it's not choosing a flavor of ice-cream, you do the calls that the work at hand _needs_.
Sure, and the calls you make depend on the context, and if there is enough data in the event body to avoid making any calls at all. And I’m saying that in my experience that’s not generally the case. What I’ve seen is that the sender composes some event body and sends it, and the receivers end up needing to call APIs anyway.
In which case, the sender may as well have not gone to the trouble, hence my preference for thin events.
> But in your case, do you have "dependency on the event body" ? All of it?
From a maintenance perspective, the sender doesn’t know what the receivers depend on, so even if all your receivers only depend on the IDs, there is no way to find out. Because of this, it’s really easy to add fields to an event message, but really dangerous to remove them, because you can’t easily tell what receivers depend on the thing you’re removing. This is why I said that fat events create more coupling than thin events.
Of course as with most things there are always exceptions. Maybe I should have said, “I’m on team thin by default. But of course some use cases require fat messages, in which case proceed with great care”.
If you allow A's state to lag behind its own events, then how are you ever going to create a sane system? Surely A has to be at, or ahead of, the state that caused the event to be emitted, or events are pointless.
To be noted that this is the default if B is recovering after an outage.
Personally, I consider events to be insane. "We create an immutable database so that the state of the system is always recoverable." Okay, cool, very functional programming of you. "But then to actually work with the event from the immutable database, you have to query a stateful service." ??? What? And even fat events only go so far to get you out of that. So with a stream of n events, you don't have n states that the application can be in, but n times the product of all possible states of every other service that you query. How does this help?!
An example: I'm using thin events in a master data application integration scenario to send a 'sync this record' type of command message into a queue. The message body does not have the record details, only the basic information to uniquely identify the record. It also doesn't identify the type of change except for a flag to identify deletes. The 'sync' message is generalized to work for all entities and systems, so routing, logging, and other functions preceding the mapping and target operation have no coupling to any system or entity and can expect a fixed message format that will probably never change. Thus versioning isn't a concern.
Choosing team 'thin event' does result in an extra read of the target system, but that is a feature for this scenario and what I want to enforce. I can't assume a target system is in any particular state, and the operation to be performed will be determined from the target system at whatever point in time a message is processed, which could be more than once. If the message ended up in a dead letter queue, it can be reprocessed later without issue. If one production system's data is cloned down to a lower environment, the integrations continue to work even if the source and target environment data is mismatched. No state is stored or depended upon from either system and the design is idempotent (ignoring a target system's business rules that may constrain valid operations over time).
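A minimal sketch of that design, with hypothetical field and function names (the dicts stand in for the source and target systems' stores): the message carries only identity plus a delete flag, and the operation is derived from current state at processing time, so reprocessing is safe.

```python
# Hypothetical sketch of the generalized 'sync this record' thin message.

def make_sync_message(system, entity, record_id, is_delete=False):
    # fixed, entity-agnostic format: identity plus a delete flag, nothing else
    return {"system": system, "entity": entity,
            "id": record_id, "delete": is_delete}

def handle_sync(msg, source, target):
    # source/target are dicts standing in for the two systems
    if msg["delete"]:
        target.pop(msg["id"], None)   # idempotent delete
        return
    record = source.get(msg["id"])    # the extra read, by design
    if record is None:
        return                        # source has moved on; nothing to sync
    target[msg["id"]] = record        # idempotent upsert
```

Because the handler reads the source at processing time rather than trusting a payload, the same message can sit in a dead letter queue and be replayed later without harm.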
In contrast, other scenarios may benefit from or require a fat event. I've never used event sourcing, but as others mention, if current state can be built from all previous events 'rolled forward' or 'replayed', then each event must be a stand-alone immutable record with all information - thin events cannot be used. Or, if a scenario requires high performance we might need to use a fat event to eliminate the extra read, and then compensate for the other consequences that arise.
I think fat vs thin is more about how many other services the event has to travel through, because thin events would multiply reads by a fair factor, with the tradeoff being the performance hit for the queue system to store and ship large events.
But this is not true for events. If you change the body such that you now need to maintain two versions of an event, then you have to publish both events simultaneously, which means double the server-side effort, storage, etc. for each event version. It's pretty inefficient, and painful. You can work out who subscribes to the old event, but there is still a big efficiency hit.
You might be right about many reads per event, in a simplistic sense; if you have a lot of clients, it could be expensive without a server-side cache. But there would typically be a lot of temporal locality in such a system, so it seems like an easy problem to solve for most use cases; you don't have to cache for long, though caches are of course tricky if your use case is not very simple. That said, if there is already an HTTP connection open, then the additional latency and bandwidth hit caused by thin events is going to be minimal in most cases, and probably drowned out entirely if you need to push multiple versions.
As I said in another thread, I should have said that thin is my default. There are cases when fat makes more sense, but normally I’d start with thin and see if I need to flesh it out. Whenever I’ve started fat I’ve ended up reverting.
When I've seen this fat event pattern it's been because different services' responsibilities were not fully separated. And that's tight coupling. Fat events imply tight coupling.
The "thin" pattern described in the article goes like this:
1) service FOO gets an event
2) FOO then has to query BAR (and maybe BAZ and QUUX) to determine the overall state of everything to determine what to do next
And #2 means that all of that supposedly "thin" design is tightly coupled, too.
I've also personally seen thin events that are not the article's thin strawman.
I sometimes wonder if people understand coupling or design.
When the "state" is large, or changes often, you obviously can't send the full state every time - that would be too much for end nodes to process on every event, in both CPU (deserialization) and bandwidth. Deltas are the answer.
Deltas are hard, though, since there is always an inherent race between getting the first full snapshot and subscribing to updates. So for small state that doesn't change often, fat events carrying the full state might be okay.
There is a linear tradeoff on the "data delivery" component:
- worse latency saves cpu and bandwidth (think: batching updates)
- better latency burns more cpu and bandwidth
Finally, the receiver system always requires some domain specific API. In some cases passing delta to application is fine, in some cases passing a full object is better. For example, sometimes you can save a re-draw by just updating some value, in other cases the receiver will need to redraw everything so changing the full object is totally fine.
I would like to see a pub/sub messaging system that solves these issues: one where you can "publish" an object, select a latency goal, "subscribe" to it on the receiver, and allow the system to choose the correct delivery method. For example, the system might choose pull vs push, or an appropriate delta algorithm. As a programmer, I really just want to get access to the "synchronized" object on multiple systems.
- Entire Object.
You send the entire state of the entire object that changed. Irrelevant fields and all.
This makes business logic and migrations easier in dependent services. You can easily roll back to earlier points in time without diffing objects to determine what state changed. You don't have to replay an entire history of events to repopulate caches and databases. You can even send "synthetic" events to reset the state of everything that is listening from a central point of control.
I've dealt with all three types of system, and this is by far the easiest one to work with.
Since the deltas include a version identifier for the version they should be applied on top of, you can always safely start by requesting the deltas, then ask for the object. Buffer the deltas until your full copy is received, then discard deltas for versions before it, and apply the rest thereafter to keep it up to date.
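The bootstrap described above can be sketched as follows. This is a hedged illustration, assuming each delta is tagged with the version it applies on top of and that the buffered deltas arrive in stream order (the data shapes are assumptions, not anything from a real library):

```python
# Hypothetical sketch: snapshot + buffered-delta bootstrap.

def bootstrap(snapshot, buffered_deltas):
    # snapshot: (version, state dict)
    # buffered_deltas: list of (base_version, change dict) in stream order
    version, state = snapshot
    for base, change in buffered_deltas:
        if base < version:
            continue                 # predates our snapshot: discard
        if base != version:
            raise RuntimeError("gap in delta stream: a delta was lost")
        state = {**state, **change}  # apply the delta on top of our copy
        version = base + 1
    return version, state
```

Subscribing before fetching the snapshot is what closes the race: any update that lands between the two steps is sitting in the buffer, and the version check makes applying it (or discarding it) unambiguous.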
Is that really different from a fat event?
for example:
if one receiver wants to know if you have read a book, then there is no reason to make a call to the service.
but if a service wants to know the last book you read, and doesn't trust the events to be in order, then it would make sense to just call the service.
It would make more sense to me if the events had an increasing sequence number, version number or accurate timestamp, so that if I record that "'sithlord' last read 'The Godfather' at event '123456'" I can record that, and ignore any event related to "sithlord last read" with event < 123456.
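That rule is tiny to implement. A hedged sketch, with hypothetical names (a dict stands in for whatever store holds the per-user high-water mark):

```python
# Hypothetical sketch: ignore events older than the highest one applied.

def apply_if_fresh(store, user, seq, book):
    # store maps user -> (seq, last book read)
    prev = store.get(user)
    if prev is not None and seq <= prev[0]:
        return False              # stale or duplicate event: ignore it
    store[user] = (seq, book)
    return True
```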
This is not a new problem, there are existing solutions to it.
Hey, just remember: both is always an option if your consumers disagree. A thin stream for the consumers who don't trust the fat data, a fat stream for the event log and other consumers that prefer it.
Thin events resulted in a DDoS of our service a few times, because handlers would call our APIs too frequently to retrieve object state (which was partially mitigated by having separate machines serve incoming traffic and process events).
(A trick we used which worked for both fat and thin events was to add versioning to objects to avoid unnecessary processing).
We also used delta events, but they had the same issues as thin events, because handlers usually have to retrieve the full object state anyway to do meaningful processing (not always; it depends on the business logic and the architecture).
There are so many ways to shoot yourself in the foot with all three approaches and I still hesitate a lot when choosing what kind of events to use for the next project.
If the former, there is inherently tight coupling between sender and receiver, and the sender should send all necessary context to simplify the system design.
If the latter, then we're talking about a decoupled system, where the sender cannot make assumptions about what info the receiver does or doesn't need to take further action. A thin event is called for, to keep the contract simple.
One of my frustrations with the event-driven trend is that people don't always seem to think through what they're designing. It's easy to end up with a much more complex system than a transactional architecture.
Generally, I favor modeling as much of my system as possible as pipelines, and use pub-subs sparingly, as places where you have fan out to parallel pipelines.
Raw events are like GOTOs. They are extremely powerful, but also very difficult to reason about.
It is easier to send an event 'user account changed' than to analyze in detail what exactly changed, which also allows you to decouple the event logic from everything else.
Of course not every system benefits from such solutions, but sometimes simplicity wins.
Then the payload is guaranteed to be small but still able to handle complex operations.
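A minimal sketch of that 'user account changed' approach, with hypothetical names (a dict for the database, a callback for the publisher): the emitter announces that something changed without diffing fields, so the payload stays small no matter how complex the change was.

```python
# Hypothetical sketch: emit a coarse "changed" notification, no field diff.

def save_account(account, db, publish):
    db[account["id"]] = account
    # thin notification: type + id only; consumers fetch whatever they need
    publish({"type": "user.account.changed", "id": account["id"]})
```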
I would also push back on the claim that it has more moving parts. You'll often need to pull information anyway, and getting pushed information as well can duplicate code on both the service side and the client side. In practice, thin events are easier to get right, despite the extra API requests.
I also think the case where there's some kind of outage, but the information in the event is still enough to do the work, is fairly rare. I'd guess it's rarer than outages that also disable event triggering anyway.
Nothing is worse than an event driven system polluted with action commands.
PurchaseRequested { userId = ... }
PaymentRequestSentToStripe { userId = ... }
It really helps with debugging, especially if the user gets into some failed state (and the events tell you that they accidentally requested twice). Whether you're mixing or not is hard to tell if you're fudging the wording.
What's the point of the PaymentRequestSentToStripe event? Does it trigger a Stripe API request, or is it emitted after an API request to Stripe is made?