The JSON support (SUPER type) is kind of cool, and they are moving towards more “automatic” sorting + partitioning, but it’s just all a bit shit to be honest.
We encountered:

- major bugs with data-sharing
- clusters that keep insisting zstd is the best compression format for all our data, but then never actually use it
- materialised views that often fail to refresh, where understanding why is a nightmare
- terrible performance if your strings are varchar(max) (guess what Glue sets them to…)
- Redshift data often just dying (4-hour downtime recently, no status page), plus some really weird semantics around listing queries
- no async queries before the Data API, and its EventBridge integration straight up doesn't work
- nightmare bugs in the Java connection library that don't show up using psql
- a tiny set of types (no arrays, no UUIDs)
- unkillable queries
- AQUA actually causing everything to slow down hugely
- critical release notes posted only in a fucking random forum
- etc etc
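On the async point: the Data API did finally make fire-and-forget queries possible. A rough sketch of the pattern with boto3 (cluster/database/user names here are placeholders, not anything real):

```python
import time

def run_async(sql, cluster="my-cluster", database="dev", user="awsuser"):
    """Submit a statement via the Redshift Data API without holding a
    JDBC/psql connection open. Identifiers are illustrative."""
    import boto3  # AWS SDK -- not stdlib, imported lazily on purpose
    client = boto3.client("redshift-data")
    stmt = client.execute_statement(
        ClusterIdentifier=cluster, Database=database, DbUser=user, Sql=sql
    )
    return client, stmt["Id"]

def wait_for(client, stmt_id, poll=2.0):
    """Poll describe_statement until the query reaches a terminal state."""
    while True:
        desc = client.describe_statement(Id=stmt_id)
        if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
            return desc["Status"]
        time.sleep(poll)
```

You poll rather than getting a callback, which is exactly why the broken EventBridge integration stings.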
Snowflake has apparently sorted this, as well as including ingestion tools (snowpipe) that you’d otherwise have to stitch together with AWS Glue or something (a cursed service if ever there was one).
That being said, in some cases Redshift absolutely flies. But the real world isn’t filled with ideal schemas and natural sort keys. It’s messy. And Snowflake deals with messy better.
Snowflake gives you visibility into clustering [1], and in the query profile view you can see how pruning is working (or not working).
Can you give an example of what visibility you would like to see in terms of partitioning?
1. https://docs.snowflake.com/en/sql-reference/functions/system...
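For anyone who hasn't used [1]: it's a SQL function you call against a table, and it returns JSON describing how well the micro-partitions line up with your clustering keys. A sketch (table and column names are made up):

```python
def clustering_info_sql(table, cluster_keys):
    """Build the SYSTEM$CLUSTERING_INFORMATION call for a table; Snowflake
    returns JSON describing micro-partition depth/overlap for those keys."""
    keys = ", ".join(cluster_keys)
    return f"SELECT SYSTEM$CLUSTERING_INFORMATION('{table}', '({keys})')"

def fetch_clustering_info(conn, table, cluster_keys):
    # conn is assumed to be a snowflake.connector connection (credentials
    # not shown); the result is a single row with a single JSON column.
    cur = conn.cursor()
    cur.execute(clustering_info_sql(table, cluster_keys))
    return cur.fetchone()[0]
```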
Edit for future readers: the original comment was "I haven’t really been impressed with Redshift, it seems like too little and too late".
basically distributing compute down to the actual storage nodes
What they emit is the dance routine of a sugar coated cheerleader squad.
I am of the view everything which is not a strength is obfuscated.
I have zero faith, confidence, and trust in all information AWS emits.
I approach press releases and the docs on the basis that they cover up the actual implementation, and so my task is to find out what is actually going on under the hood, so I can actually make sense of what's been provided and operate it correctly (or avoid it completely, as it may be!)
I believe the real advantage AWS has here is cost. Snowflake has positioned itself as price-competitive with Redshift, but this is primarily due to Snowflake's ability to scale on demand, whereas prior Redshift versions required you to size for peak usage (RA3 helped with this). In my experience Snowflake is an order of magnitude more expensive if you compare similar workloads and do not account for idle time. We will need to see the performance of a "Redshift Processing Unit" to be sure of the advantage, but even so AWS will be able to provide significant downward cost pressure through this offering.
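The idle-time point is the whole argument, so here it is as a toy model. The rates below are hypothetical, not real pricing for either product; the point is only the shape of the comparison:

```python
def monthly_cost(hourly_rate, active_hours_per_day, billed_when_idle):
    """Toy model: an always-on cluster bills 24h/day regardless of use;
    an auto-suspending warehouse bills only while active. 30-day month."""
    hours_per_day = 24 if billed_when_idle else active_hours_per_day
    return hourly_rate * hours_per_day * 30

# Hypothetical rates: a cluster sized for peak at $16/hr, always on, vs.
# a warehouse at double the hourly rate but active only 4 hours a day.
peak_sized_cluster = monthly_cost(16, 4, billed_when_idle=True)    # 11520
on_demand_warehouse = monthly_cost(32, 4, billed_when_idle=False)  # 3840
```

Compare hour-for-hour and the warehouse looks twice as expensive; account for idle time and it's a third of the bill.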
Cost is why I'm most bullish about Databricks's FOSS https://delta.io
My experience with support/account managers is that they always tell you "yes, Redshift can do this", and the only way to actually get a "no" out of them is to already know Redshift cannot do something, and to explain to them why.
They won't deny reality, but you would never have got that answer from them in any other way.
I suspect the problem is the training AWS gives its staff. The material they are taught is relentlessly positive, and I suspect AWS staff actually have no idea what Redshift is no good for.
(Indeed, if you read the official docs for RS, which I strongly advise you never to do, you will come out the other end under the impression there is literally nothing Redshift cannot do; the docs describe everything using positive terms only.)
The advantage is the flexibility to easily change compute resources. A disadvantage is that your data now lives in S3 or something very like it, which I think alters the cluster's write-performance characteristics; I've not yet looked into this, but it's on the list.
You absolutely should beware of falling into the trap of imagining that serverless simply gives you flexible compute and that's the only change to behaviour.
AWS in their press releases and docs are relentlessly positive - anything which is not a strength is obfuscated - so only actual experimentation and investigation throws light on what you're really getting.
Technically, Athena is based on modified Presto while Redshift is (very) heavily modified Postgres.
Athena = Lambda + S3 (what I would call true serverless)
Redshift Serverless = Auto AWS Managed EC2 instances with local storage + S3
Although I could be wrong as I just had a quick 5 minute look at it...
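To make the "true serverless" distinction concrete: with Athena there's no cluster to provision at all, you just point a query at S3 and pay per TB scanned. A hedged boto3 sketch (bucket and workgroup names are made up):

```python
def athena_request(sql, output_s3, workgroup="primary"):
    """Assemble the parameters for Athena's StartQueryExecution call."""
    return {
        "QueryString": sql,
        "WorkGroup": workgroup,
        "ResultConfiguration": {"OutputLocation": output_s3},
    }

def start_athena_query(sql, output_s3, workgroup="primary"):
    """Fire a query with no cluster to manage; results land in S3."""
    import boto3  # AWS SDK -- not stdlib, imported lazily on purpose
    athena = boto3.client("athena")
    resp = athena.start_query_execution(**athena_request(sql, output_s3, workgroup))
    return resp["QueryExecutionId"]
```

With Redshift Serverless you still conceptually have a (managed, auto-scaled) cluster behind the endpoint, which matches the EC2+local-storage description above.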
Having used both, I do think BigQuery is better in a lot of ways (although it's easier to make it a lot more expensive too), but I'm really excited to see Redshift catch up. The serverless option is a really great addition too, since my biggest complaint with Redshift was managing the quantity and type of the underlying instances.
See more details here:
BigQuery is a full database. It is significantly faster than running anything from Athena. The closest comparison on AWS is Redshift.
Load my data where? This is "serverless".
More seriously, "serverless" usually just means you aren't supposed to worry about server/cluster management, not that there are no servers anywhere. So it really means "load your data to Redshift, wherever that lives".