Fundamentally, Postgres stores rows as tuples in 8 KB heap pages, where each tuple carries transaction metadata like `xmin` and `xmax` used by MVCC. This means a transactional update usually touches only a single page, plus any TOASTed columns. ClickHouse, meanwhile, stores data by column, so a transactional update that changes multiple fields of a row necessarily touches a separate on-disk structure for each affected column.
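To make the MVCC metadata concrete: Postgres exposes `xmin` and `xmax` as hidden system columns on every table. A sketch, assuming a hypothetical `accounts` table:

```sql
-- xmin: the transaction that inserted this tuple version
-- xmax: the transaction that deleted or superseded it (0 if still live)
SELECT xmin, xmax, ctid, * FROM accounts WHERE id = 1;

-- An UPDATE sets xmax on the old tuple and writes a new tuple
-- version (with a fresh xmin) into a heap page.
UPDATE accounts SET balance = balance + 10 WHERE id = 1;
```

When the page has free space and no indexed column changed, the new version lands in the same page (a heap-only-tuple update), which is why an update is usually a single-page write.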
Personally, I architect systems to use Postgres for the write-heavy parts of a workload, and ClickHouse for the write-once parts (time series, analytics, logs, etc.). ClickHouse is also the best tool I've ever found for compressing enormous datasets, and is very useful even as a simple data warehouse.
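As a sketch of the ClickHouse side, a MergeTree table can declare per-column compression codecs; the table and column names here are illustrative:

```sql
CREATE TABLE events
(
    ts      DateTime CODEC(Delta, ZSTD),   -- delta-encode timestamps, then ZSTD
    user_id UInt64   CODEC(ZSTD),
    metric  Float64  CODEC(Gorilla)        -- good for slowly-varying floats
)
ENGINE = MergeTree
ORDER BY (user_id, ts);
```

Because the `ORDER BY` key keeps similar values physically adjacent within each column file, the codecs see long runs of near-identical data, which is a large part of why ClickHouse compresses so well.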