There will likely be a good OLAP solution (possibly implemented as an extension) in Postgres in the next year or so. A few companies are working on it (Hydra, Parade[0], Tembo, etc.).
This looks like a repackaging of DataFusion as a PG extension?..
ParadeDB integrates industry standards like Arrow, Parquet, and DataFusion to offer columnar storage + vectorized processing. Hydra is building on top of Citus Columnar.
You can read about our approach here: https://blog.paradedb.com/pages/introducing_analytics
* Timeouts (only 30s???) unless I used the CLI client
* Cancelling rows - just kill me; so many bugs, and FINAL/PREWHERE are massive foot-guns
* Clusters just feel annoying and fragile; don't forget "ON CLUSTER" or you'll have a bad time
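For anyone who hasn't hit the ON CLUSTER foot-gun: DDL in ClickHouse only applies to the node you're connected to unless you say otherwise. A minimal sketch (table and cluster names here are hypothetical):

```sql
-- Runs on every node in the cluster, not just the one you connected to
CREATE TABLE events ON CLUSTER my_cluster
(
    ts      DateTime,
    user_id UInt64,
    event   String
)
ENGINE = MergeTree
ORDER BY (user_id, ts);

-- Forgetting ON CLUSTER here would leave the other nodes with the old schema
ALTER TABLE events ON CLUSTER my_cluster ADD COLUMN country String;
```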
Again, I feel like we must be doing something wrong but we are paying an arm and a leg for that "privilege".
I'm sure CH shines for insert-only workloads but that doesn't cover all our needs.
Almost all clients (client libraries) allow a configurable timeout. In the server settings there is a max query time setting which can be adjusted if necessary: https://clickhouse.com/docs/en/operations/settings/query-com...
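For example, raising it per session (the setting name is from the ClickHouse docs; the table name is hypothetical):

```sql
-- Override the query time limit for this session only
SET max_execution_time = 300;  -- seconds

SELECT count() FROM big_table;  -- may now run up to 5 minutes
```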
> However, using FINAL is sometimes necessary in order to produce accurate results
Welp.
It’s a table design optimised for specific workloads, and the docs and design detail those tradeoffs.
We use it at work for workloads that can tolerate “retreading” over stale data, because it means they can efficiently write to the db without round tripping, or locking and row updates, and without the table growing massive. It works fantastically in our use case.
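For context, the tradeoff being described looks roughly like this with ReplacingMergeTree (the schema is a hypothetical sketch): deduplication happens lazily at merge time, and FINAL forces it at read time instead.

```sql
-- Rows sharing the same ORDER BY key are collapsed, keeping the one with
-- the highest `updated` -- but only whenever a background merge runs
CREATE TABLE user_state
(
    user_id UInt64,
    status  String,
    updated DateTime
)
ENGINE = ReplacingMergeTree(updated)
ORDER BY user_id;

-- Without FINAL you may read not-yet-merged duplicates (stale data);
-- with FINAL the dedup is done at query time, at a performance cost
SELECT * FROM user_state FINAL WHERE user_id = 42;
```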
I don't disagree; I feel like we might be using it wrong. We were trying to replace ES with it, but it just doesn't feel like it fits our use case.
For example, I've set it up, along with many more limitations, for my public playground https://play.clickhouse.com/, and it lets me, at least, make it public and not worry much.
It could also be the configuration of a proxy, if you connect through one. ClickHouse has a built-in HTTP API, so you can query it directly from the browser or put it behind Cloudflare, etc. Where do you host ClickHouse?
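A quick sketch of that HTTP interface (assumes a local server on the default HTTP port 8123):

```shell
# The request body is the SQL statement; the result comes back as plain text
curl 'http://localhost:8123/' --data-binary 'SELECT version()'
```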
If your analysis in Postgres was based on multiple tables and required a lot of JOIN operations, I don't think ClickHouse is a good choice. In such cases, you often need to denormalize multiple data tables into one large table in advance, which means complex ETL and maintenance costs.
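Concretely, "denormalize in advance" usually means an ETL step that pre-joins everything into one wide table; a hypothetical sketch (table and column names invented):

```sql
-- One-off backfill; in practice this becomes a recurring ETL job or
-- materialized view that has to be kept in sync with the source tables
CREATE TABLE orders_wide
ENGINE = MergeTree
ORDER BY order_id AS
SELECT
    o.order_id,
    o.amount,
    c.name    AS customer_name,
    c.country AS customer_country
FROM orders AS o
JOIN customers AS c ON o.customer_id = c.customer_id;
```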
For these more common scenarios, I think StarRocks (www.StarRocks.io) is a better choice. It's a Linux Foundation open-source project with single-table query speeds comparable to ClickHouse (you can check ClickBench), unmatched multi-table join query speeds, and the ability to query open data lakes directly.
What else could you possibly need? Backing up is built into it with S3 support: https://clickhouse.com/docs/en/operations/backup#configuring...
Upgrades are a breeze: https://clickhouse.com/docs/en/operations/update
People insist that OMG MAINTENANCE I NEED TO PAY THOUSANDS FOR MANAGED is better, when in reality, it is not.
I have been working on Scratchdata [1], which makes it easy to try out a column database to optimize aggregation queries (avg, sum, max). We have helped people [2] take their Postgres with 1 billion rows of information (1.5 TB) and significantly reduce their real-time data analysis query time. Because their data was stored more efficiently, they saved on their storage bill.
You can send data as a curl request and it will get batch-processed and flattened into ClickHouse:
curl -X POST "http://app.scratchdata.com/api/data/insert/your_table?api_ke..." --data '{"user": "alice", "event": "click"}'
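For the curious, the flattening step is conceptually simple; here's a minimal Python sketch (not Scratchdata's actual implementation):

```python
# Flatten a nested JSON event into flat column names, e.g.
# {"user": "alice", "meta": {"ip": "1.2.3.4"}} -> {"user": "alice", "meta_ip": "1.2.3.4"}
def flatten(event: dict, prefix: str = "") -> dict:
    out = {}
    for key, value in event.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, name + "_"))  # recurse into nested objects
        else:
            out[name] = value  # leaf value becomes one column
    return out
```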
The founder, Jay, is super nice and just wants to help people save time and money. If you give us a ring, he or I will personally help you [3].
[1] https://www.scratchdb.com/ [2] https://www.scratchdb.com/blog/embeddables/ [3] https://q29ksuefpvm.typeform.com/to/baKR3j0p?typeform-source...
Now, the Postgres schema wasn't ideal, and we could have saved ~3x on it, with corresponding speed increases for queries, with a refactor similar to the ClickHouse schema, but that wasn't really enough to move the needle to near real-time queries.
Ultimately, the entire ClickHouse DB was smaller than the original Postgres primary key index. The index was too big to fit in memory on an affordable machine, so it's pretty obvious where the performance gain is coming from.
I guess maybe the cost?
Generally speaking, I've found it manageable if you make good use of partitioning and do incremental aggregation (we use dbt, though you have to do some macro gymnastics to make the partition-key filter eligible for pruning, due to restrictions on the use of dynamic values: https://docs.getdbt.com/docs/build/incremental-models).
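The "macro gymnastics" look roughly like this in a dbt incremental model on BigQuery (model and column names are hypothetical; the point is that the partition filter has to look static to the planner):

```sql
{{ config(
    materialized='incremental',
    partition_by={'field': 'event_date', 'data_type': 'date'}
) }}

select
    event_date,
    count(*) as events
from {{ ref('raw_events') }}
{% if is_incremental() %}
  -- a literal-looking predicate so BigQuery can prune partitions;
  -- a dynamic subquery against {{ this }} here would defeat pruning
  where event_date >= date_sub(current_date(), interval 3 day)
{% endif %}
group by 1
```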
It's also important to monitor your cost and watch for the point where switching from the per-TB-queried pricing model to slots makes sense.
Simultaneously this change may not make sense for Lago as an open-source project self-hosted by a single tenant.
But that may also mean that it effectively makes sense for Lago as a business... to make it harder to self host.
I don't at all fault Lago for making decisions to prioritize their multi-tenant cloud offering. That's probably just the nature of running open-source SaaS these days.
I'm struggling with pg write performance ATM and want some tips.
It is extremely cost effective when you can scale a different workload without migrating.
While Postgres works fine (even if it is slower, it actually returns results)
For a lot of what people may want to do, they'd probably notice very little difference between the three.
PG can handle a billion rows easily.
I super love PG, but PG is still too far away from that.
If so, is it a GDPR-compliant storage solution? I'm asking since GDPR compliance may require data deletion (or at least anonymization).
This, in turn, creates read+write load. Modern OLAP DBs often support it via mitigating strategies that minimise the amount of extra work incurred: mark tainted rows, exclude them from queries, and clean up asynchronously, etc.
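In ClickHouse terms, the mark/exclude/clean-up pattern can be sketched like this (hypothetical schema with an `is_deleted` flag column):

```sql
-- "Delete" by marking; this is an asynchronous mutation, so the write
-- returns quickly and the data parts are rewritten in the background
ALTER TABLE events UPDATE is_deleted = 1 WHERE user_id = 42;

-- Reads exclude tainted rows even before the mutation finishes
SELECT count() FROM events WHERE is_deleted = 0;
```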
> There will likely be a good OLAP solution (possibly implemented as an extension) in Postgres in the next year or so. Many companies are working on it (Hydra, Parade[0], etc.)
ParadeDB - AGPL License https://github.com/paradedb/paradedb/blob/dev/LICENSE
Hydra - Apache 2.0 https://github.com/hydradatabase/hydra/blob/main/LICENSE
Also, Hydra seems to be derived from Citus Data's columnar implementation.