That is really beside the point. Logging and tracing have always fundamentally been event sourcing, but that has never forced anyone to onboard onto Kafka, of all event streaming/messaging platforms.
This kind of suggestion sounds an awful lot like resume-driven development instead of actually putting together a logging service.
The first step in building a reliable logging system is setting up high-write-throughput, highly available, FIFO-ish durable storage. Once you have that, everything else gets a lot easier.
* Once the log is committed to the durable queue, that's it: the application can move on, secure in the knowledge that the log isn't going to get lost.
* Multiple consumer groups can process the logs for different purposes. The usual setup is one group persisting the logs to a searchable index and one group doing real-time alerting.
* Everything downstream from Kafka can be far less reliable, because a downstream outage just means the queue backs up.
* You can fake more throughput than your downstream processors actually have, because the shortfall just manifests as a lagging offset.
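The decoupling those bullets describe can be sketched with a toy in-memory log. To be clear, this is an illustration of the idea, not the Kafka client API; every name here (`MiniLog`, `poll`, `lag`) is invented for the sketch, and a real deployment would use a durable broker rather than a Python list.

```python
from collections import defaultdict

class MiniLog:
    """Toy append-only log with per-group offsets (invented names,
    not the Kafka API) to illustrate producer/consumer decoupling."""

    def __init__(self):
        self.entries = []                # stands in for durable FIFO storage
        self.offsets = defaultdict(int)  # committed read offset per consumer group

    def append(self, record):
        # The producer's responsibility ends here: once appended,
        # the record is "committed" and the application moves on.
        self.entries.append(record)

    def poll(self, group, max_records=10):
        # Each consumer group reads independently from its own offset,
        # so the indexer and the alerter never block each other.
        start = self.offsets[group]
        batch = self.entries[start:start + max_records]
        self.offsets[group] += len(batch)
        return batch

    def lag(self, group):
        # A slow downstream processor doesn't lose data; it just
        # shows up as a growing gap between the head and its offset.
        return len(self.entries) - self.offsets[group]

log = MiniLog()
for i in range(100):
    log.append(f"log line {i}")

index_batch = log.poll("search-indexer", max_records=100)  # fast consumer keeps up
alert_batch = log.poll("alerting", max_records=10)         # slow consumer falls behind

print(len(index_batch), log.lag("search-indexer"))  # 100 0
print(len(alert_batch), log.lag("alerting"))        # 10 90
```

The point of the sketch: the producer never cares how fast either group drains, and the slow group's backlog is visible as lag rather than as lost logs or backpressure on the application.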
You sound like you've been using an entirely different project named Kafka, because the Kafka everyone uses is renowned among message brokers for its complexity and operational overhead.