Here’s the Apache Iceberg table format specification:
https://iceberg.apache.org/spec/
As they like to say in patent law, anyone “skilled in the art” of database systems could use this to build and query Iceberg tables without too much difficulty.
This is nominally the Delta Lake equivalent:
https://github.com/delta-io/delta/blob/master/PROTOCOL.md
I defy anyone to even scope out what level of effort would be required to fully implement the current spec, let alone what would be involved in keeping up to date as this beast evolves.
Frankly, the Delta Lake spec reads like a reverse engineering of whatever implementation tradeoffs Databricks is making as they race to build out a lakehouse for every Fortune 1000 company burned by Hadoop (which is to say, most of them).
My point is that I’ve yet to be convinced that buying into Delta Lake is actually buying into an open ecosystem. Would appreciate any reassurance on this front!
Editing to append this GitHub history, which is unfortunately not reassuring:
https://github.com/delta-io/delta/commits/master/PROTOCOL.md
Random features and tweaks just popping up, PR’d by Databricks engineers and promptly approved by Databricks senior engineers…
Delta itself seems fairly open-source: https://github.com/orgs/delta-io/projects/10/views/1 and hopefully someone will implement Liquid Clustering!
Sentiment echoed.
I’m ultra cautious of stuff offered by Databricks in general. I think they’re only nominally open source, and shouldn’t be trusted.
I’ve also used Delta Lake before; there were some really frustrating shortcomings and a lot of “sharp edges” in its usage. We ended up dropping that project entirely, but did investigate Iceberg at the time as well. Iceberg and Hudi had more coherently designed feature sets, but were less supported. Really hoping this changes in the future.
Over the past six months I got the impression that Delta is pulling ahead in the race as Iceberg is struggling to provide tools for people not in the JVM ecosystem. Delta is a lot more accessible in that way.
https://duckdb.org/docs/extensions/iceberg.html
You still need Spark to generate the Iceberg metadata though.
I suppose anything is better than Delta Lake. Especially Iceberg.
Massive corp, with their own opaque interests and endless bodies to throw at problems, has picked a favourite. That favourite being an “open” format controlled by another opaque enterprise company. I’d half expect M$ to just take it wholesale and start modifying it to suit their own ends, until eventually the “open source” component is some skin-deep façade that is completely and utterly dependent on M$ infra.
Feels a bit like, “If Delta Lake did not exist, Microsoft would have to invent it.”
The choice between the two resembles the choice between Parquet and ORC circa 2016. Two formats of broadly the same power, initially biased by a particular query engine, eventually at feature parity and universally supported.
We have a decade of experience with OSS from Databricks, so doubting their "open ecosystem" status seems a little theoretical.
Most people use Hive partitioning convention (i.e. directory names like /key3=000/key2=002/) but Iceberg goes farther than this by exposing even more structure to the query engine.
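For anyone less familiar with the convention, here is a rough sketch (paths and column names invented); engines like DuckDB can map the directory names back into columns, whereas Iceberg records partition values and column-level stats in its own manifest files, so pruning doesn't depend on paths at all:

```python
# Hive-style layout: partition values are encoded in directory names, e.g.
#   events/year=2023/month=09/part-0001.parquet
#   events/year=2023/month=10/part-0002.parquet
import duckdb

con = duckdb.connect()
df = con.execute(
    "SELECT year, month, COUNT(*) AS n "
    "FROM read_parquet('events/*/*/*.parquet', hive_partitioning = true) "
    "GROUP BY year, month"
).fetchdf()
```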
In a traditional DBMS like Postgres, the schema, the query engine and the storage format come as a single package.
But with big data, we're building database components from scratch, and we can mix and match. We can use Iceberg as a metadata format, DuckDB as the query engine, Parquet as the storage format, and S3 as the storage medium.
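A minimal sketch of that mix and match, assuming DuckDB's httpfs and iceberg extensions and an invented bucket path (credentials setup omitted):

```python
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs; LOAD httpfs;")    # S3 access
con.execute("INSTALL iceberg; LOAD iceberg;")  # read Iceberg table metadata

# Iceberg supplies the table metadata, Parquet holds the rows, S3 stores the
# files, and DuckDB does the querying.
result = con.execute(
    "SELECT count(*) FROM iceberg_scan('s3://my-bucket/warehouse/db/events')"
).fetchall()
print(result)
```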
It means that the storage and much of the processing are being standardised, so you can move between databases easily, and almost all tools will eventually be able to work with the same set of files in a transactionally sound way.
For instance, Snowflake could be writing to a file, a data scientist could be querying the data live from a Jupyter notebook, and ClickHouse could be serving user facing analytics against the same data with consistency guarantees.
If the business then decides to switch from Snowflake to Databricks, it isn’t such a big deal.
Right now it isn’t quite as fast to query these formats on S3 as a native ingestion would be, but every database vendor will be forced by the market to optimise for performance such that they tend towards the performance of natively ingested data.
It’s a great win for openness and open source and for businesses to have their data in open and portable formats.
Lakehouse has the same implications. Lots of companies have data lakes and data warehouses and end up copying data between the two. To query the same set of data and have just one system to manage is equally impactful.
It’s a very interesting time to be in the data engineering world.
This assumes that their internal storage format has nothing to do with the decades of engineering infrastructure they built their business model around, and that they would simply give all that up and compete on just their compute layer. Snowflake might as well shut up shop and return billions to the investors. Locking data into their ecosystem is their whole business model.
Is there a good example of an open standard forcing companies to give up their proprietary tech?
> Is there a good example of an open standard forcing companies to give up their proprietary tech?
UNIX -> Linux, BSD
Oracle/Sybase -> MySQL/PostgreSQL
Symbolics/Lucid -> Common Lisp
Altair/Apple/Commodore/Atari -> IBM PC & clones
VMWare -> QEMU
Basically every tech that Google pioneered and then missed out on commercializing. Protobufs -> Avro/Parquet, MapReduce -> Hadoop, Flume -> Spark, Chubby -> Zookeeper, Borg -> Kubernetes, etc.
Disclaimer: I work at Snowflake literally on this with my team. :)
I mean, we can dream right?
There’s a bunch of companies that I don’t believe deserve their status or valuation and Snowflake is one of them.
The best way to build your own non-JVM lakehouse is to use Iceberg for metadata, Parquet for the data, query with DuckDB using Arrow tables (reading Parquet directly into Arrow is very low cost), and then use Arrow->Pandas or Polars (either directly or via a service with Arrow Flight).
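Something like this, as a hedged sketch (file paths and column names are made up):

```python
import duckdb
import polars as pl

con = duckdb.connect()

# DuckDB reads the Parquet files and returns the result as an Arrow table.
arrow_table = con.execute(
    "SELECT customer_id, SUM(amount) AS total "
    "FROM read_parquet('warehouse/sales/*.parquet') "
    "GROUP BY customer_id"
).arrow()

# Near zero-cost hand-off from Arrow into Polars (or to pandas instead).
df = pl.from_arrow(arrow_table)
print(df.head())
```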
If you put Feather in the mix, the whole Python lakehouse stack doesn't currently work.
Querying a lake directly and having transactions, governance, etc. against one set of data (a data lakehouse) can really simplify the stack and take out cost.
A data lake is a collection of different types of structured/unstructured data like CSV, Parquet, text, images, etc. stored in an object store or some such that in principle you're able to query. The theory is that you can just dump stuff into a kitchen drawer (ELT instead of ETL) and be able to do analytics on it later.
But most enterprises already have huge investments in relational databases (SQL Server, Oracle etc.) which are decades-old optimized, typed with schema, structured engines for storing data. If you have a SQL database, chances are you already have data in the right format for analytics and building a data lake is the wrong way to go.
People in tech companies have this wrong impression that enterprises have a lot of big data, but the fact is, most of the valuable data in most companies is less than a few terabytes total. It's mostly ERP data, Excel files, and operational data from various sensors (if that).
To unstructure the (already structured) data just so it can fit into the data lake seemed like the wrong strategy, but I was surprised how much companies like Cloudera and others hyped it up so they could sell technologies like Hive, Spark, Presto, etc. (and streaming tech like Kafka). These are overkill for most enterprises.
A lakehouse is basically an attempt to get most of the DWH benefit by just making these ad hoc interpretations incremental.
It is definitely jarring in my head every time I hear it or read it.
I haven't looked into Iceberg since, but plan to, and I'm really looking forward to seeing this develop. We have the tools and the compute power today to deal with data without legacy tech, and not all data is big data either. Consequently, "data engineering" thankfully resembles regular back-end development more and more, with its regular development practices being put in place.
So, here is to the hope of having a pure Python Iceberg lib some day very soon!
Overall this kind of architecture is just awesome.
You need to define what "latency" means in your case and what counts as "quite high levels". We are talking about analytical data storage; it is designed for efficient batch processing. Finding a single record is not a primary goal of the architecture - you will need some kind of caching/indexing for fast search. Sometimes adding "limit 1" to your single-record search may solve the problem.
Be sure you are using an efficient data storage format such as Parquet, check the size of the files to be sure you don't have the ["small file problem"](https://www.royalcyber.com/blog/data-services/managing-small...), then check whether you are using the relevant BigQuery features. And before and after those checks, run "explain" on your query; if you don't use partition keys or indexed columns, your search results won't be instant in any big data system.
Ex: Recently created a big table (by materializing fact/dimension table joins and COALESCE operations) solely for analysis purposes. It sits "outside" our normal data warehouse setup mentally, but we can still maintain data quality/lineage as it exists inside dbt. Allowed us to do away with Tableau fixed calculations and cut load/group by times for end users ~95%.
I really like this attitude and have started embracing it myself both on paper and in notes on my website.
It seems inevitable, more of a when vs if. Being able to have our cake & eat it too will be very cool :)
Overall, I like the whole concept of the Lakehouse because it can be done cheaply.
Most datalakes turn into swamps pretty quickly, so cheaper is better.
Let it sit unused for a while in S3 and then quietly nuke it without burning money on a big compute environment.
>> Understanding Parquet, Iceberg and Data Lakehouses at Broad
I am not a native speaker myself, but suspected the intention of the author was more or less the same -- thanks for confirming.
There was a paper at VLDB about Delta Lake: https://www.vldb.org/pvldb/vol13/p3411-armbrust.pdf - it describes why it was created, plus details of implementation.
Except when it uses semi-colons. Or pipes. Or something else.
When the delimiter is different, you have a different file format. TSV (tab-separated values) is an example of this.
https://en.m.wikipedia.org/wiki/C0_and_C1_control_codes#Fiel...
Now, with PyIceberg, there is read support in Python. Write support should be merged very soon - https://github.com/apache/iceberg-python/pull/41 So, very soon, you will be able to read/write Iceberg tables in Python. I look forward to doing data transformations in Polars for data of reasonable scale (up to 100GB or so) and writing to Iceberg tables with PyIceberg. No Spark.
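A rough sketch of the read path with PyIceberg as it stands (the catalog name, table name and filter are invented, and the write path is still behind the PR above):

```python
from pyiceberg.catalog import load_catalog
import polars as pl

# Connection details come from PyIceberg's config; "default" is a made-up
# catalog name and "analytics.events" a made-up table.
catalog = load_catalog("default")
table = catalog.load_table("analytics.events")

# Scan with a filter, materialise to Arrow, then hand off to Polars. No Spark.
arrow_table = table.scan(row_filter="event_date >= '2023-01-01'").to_arrow()
df = pl.from_arrow(arrow_table)
```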
A lot of data lakes are managed using Hadoop and Spark so I think it’s just an artefact of that.
In the end I can’t see why you wouldn’t just be able to create and manage Iceberg files directly from a standard Python/JS/Java without that legacy.
Is this a typo: “Hive, Delta Lake and Iceberg all support support of schema registry or metastore.”?
My thinking goes as follows: I'm trying to read chunks from n-dimensional data with a minimum of skips/random reads. For user-facing analytics and drilling down into the data, these chunks tend to be relatively few, and I'd like to have them close to one another. For high-level statistics however, I only care that the data for each chunk of work be contiguous, since I'm going to read all chunks eventually anyways.
You can reach these goals with a partitioning strategy either in HDF or zarr or parquet, but you could also reach it with blob fields in a more traditional DB, be it relational or document based or whatever. Since any storage and memory is linear, I don't care whether a row-major or column-major array is populated from a 1d vector from columnar storage with dimensionality metadata or an explicitly array based storage format; I just trust that a table with good columnar compression doesn't waste too much storage on what is implicit in (dense) array storage.
Often, I've found that even climatological data _as it pertains to a specific analytic scenario_ is actually a sparse subset of an originally dense nd-array, e.g. only looking at data over land. This has led me to advocate for more tabular approaches, but this is very domain specific.
You need to step back and look from a broader perspective to understand this domain.
Talking about arrow/parquet/iceberg is like talking about InnoDB vs MyISAM when you're talking about databases; yes, those are technically storage engines for mysql/mariadb, but no, you probably do not care about them until you need them, and you most certainly do not care about them when you want to understand what a relational DB vs. a NoSQL DB is.
They are technical details.
...
So, if you step back, what you need to read about is STAR SCHEMAS. Here are some links [1], [2].
This is what people used to do before data lakes.
So the tldr: you have a big database which contains condensed and annotated versions of your data, which is easy to query, and structured in a way that is suitable for visualization tools such as PowerBI, Tableau, MicroStrategy (ugh, but people do use it), etc. to use.
This means you can generate reports and insights from your data.
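For anyone who hasn't met the term: a star schema is just a central fact table surrounded by small dimension tables. A toy sketch (all table and column names invented), using DuckDB just to keep it in Python:

```python
import duckdb

con = duckdb.connect()
con.execute("""
    CREATE TABLE dim_customer (customer_id INT, region TEXT);
    CREATE TABLE dim_date     (date_id INT, year INT, month INT);
    CREATE TABLE fact_sales   (customer_id INT, date_id INT, amount DOUBLE);
""")

# The typical BI query: join the fact table to its dimensions and aggregate.
report = con.execute("""
    SELECT d.year, c.region, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_customer c USING (customer_id)
    JOIN dim_date d     USING (date_id)
    GROUP BY d.year, c.region
    ORDER BY d.year, revenue DESC
""").fetchdf()
```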
Great.
...the problem is that generating this structured data from absolutely massive amounts of unstructured data involves a truly colossal amount of engineering work; and it's never realtime.
That's because the process of turning raw data into a star schema was traditionally done via ETL tools that were slow and terrible. 'Were'. These tools are still slow and terrible.
Basically, the output you get is very valuable, but getting it is very difficult, very expensive and both of those problems scale as the data size scales.
So...
Datalakes.
Datalakes are the solution to this problem; you don't transform the data. You just ingest it and store it, basically raw, and on the fly, when you need the data for something, you can process it.
The idea was something like a dependency graph; what if, instead of processing all your data every day/hour/whatever, you defined what data you needed, and then when you need it, you rebuild just that part of the database.
Certainly you don't get the nice star schema, but... you can handle a lot of data, and what you need to do to process it 'ad hoc' is pretty trivial mostly, so you don't need a huge engineering effort to support it; you just need some smart table formats, a lot of storage and on-demand compute.
...Great?
No. Totally rubbish.
Turns out this is a stupid idea, and what you get is a lot of data you can't get any insights from.
So along comes the 'nextgen' batch of BI companies like Databricks, and they invent this idea of a 'lakehouse' [3], [4].
What is it? Take a wild guess. I'll give you a hint: having no tables was a stupid idea.
Yes! Correct, they've invented a layer that sits on top of a data lake that presents a 'virtual database' with ACID transactions that you then build a star schema in/on.
Since the underlying implementation is (magic here, etc. etc. technical details) this approach supports output in the form we originally had (structured data suitable for analytics tools), but it has some nice features like streaming, etc. that make it capable of handling very large volumes of data; but it's not a 'real' database, so it does have some limitations which are difficult to resolve (like security and RBAC).
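To make the 'virtual database with ACID transactions' part concrete, here's a hedged sketch using the delta-rs Python bindings (one of the non-Spark implementations; the path and data are invented):

```python
import pandas as pd
from deltalake import DeltaTable, write_deltalake

# Each write is an atomic commit appended to the table's transaction log,
# so readers see either the whole batch or none of it.
batch = pd.DataFrame({"order_id": [1, 2], "amount": [9.99, 24.50]})
write_deltalake("./lake/orders", batch, mode="append")

# Readers always get a consistent snapshot of the table.
orders = DeltaTable("./lake/orders").to_pandas()
```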
...
Of course, the promise, that you just pour all your data in and 'magic!' you have insights, is still just as much nonsense as it ever was.
If you use any of these tools now, you'll see that they require you to transform your data; usually as some kind of batch process.
If you closed your eyes and said "ETL?", you'd win a cookie.
All a 'lake house' is, is a traditional BI data warehouse built on a different type of database.
Almost without exception, everything else is marketing fluff.
* exception: Kafka and streaming are actually fundamentally different for real time aggregated metrics, but it's also fabulously difficult to do well, so most people still don't, as far as I'm aware.
...and I'll go out on a limb here and say really, you probably do not care if your implementation uses delta tables or iceberg; that's an implementation detail.
I guarantee that correctly understanding your domain data and modelling a form of it suitable for reporting and insights is more important and more valuable than what storage engine you use.
[1] - https://learn.microsoft.com/en-us/power-bi/guidance/star-sch... [2] - https://www.kimballgroup.com/data-warehouse-business-intelli...
[3] - https://www.snowflake.com/guides/what-data-lakehouse [4] - https://www.databricks.com/glossary/data-lakehouse
Depending on your use case this may or may not be a problem. For most companies I'd wager that it is a bigger problem than it first appears.
Isn't this kind of obsolete in 2023 with LLMs?
Sure, AI is currently slow, and very expensive, but eventually the idea of needing to query a massive unstructured data source is something that will go the way of the dodo bird when you have a technology that can magically turn unstructured data into structured data quite efficiently. And in that case, when your data is properly structured, there are database technologies that are 1000x more efficient than all of these MapReduce-esque solutions for BI.
And you don't need that pesky transformation part.
Except you really do, when you get beyond having a source system or two.
If I'm already going through the trouble of doing ELT/ETL and making a clean copy of the raw data, why would I do that in cloud storage and not in an actual database?
I don't echo your dismissal of the idea because a whole lot of people seem to be excited about it. But I personally feel like I'm missing the use case compared to the lake + warehouse setup.
Is it about distributing responsibility across teams? Reducing storage cost? Open source good vibes?
Maybe a legitimate use case is being able to use the same data source for multiple query engine frontends? That is, you can use both Spark and Snowflake on the same physical data files.
I'd be interested to hear about this from someone who's using or planning to use a lakehouse.
* Storing large amounts of data, like petabytes, in any database is phenomenally expensive, just for the storage alone.
* For some kinds of data, like image data, databases are generally the wrong tool.
* The consumers of these kinds of systems may have really dynamic workloads. Imagine ML jobs that kick off 1K machines simultaneously to hammer your DB and read from it as fast as possible. Cloud-managed object stores have solved this scaling issue already. If you can get infrastructure you manage out of the way, you get to leverage that work. If your DB is in the middle, you're on call for it.
Well, depends on your requirements. You can definitely go point-to-point straight into another DB.
One reason to keep data in object storage is it gives you a sort of “db independent” storage layer. At a previous $work, we had a tiered system: data would come in from source systems (primary application DBs, marketing systems, etc.), and would be serialised verbatim in structured format in S3 (layer 1). Data eng systems would then process that data: refining it, enriching it, ensuring types and schemas, etc., which would be serialised into the next tier (layer 2). At this level they’d be nice to use, so the data analysts would operate against this data in their Spark notebooks.
BI and reporting, and other applications could either use data from this layer directly, or if they had special requirements, or performed computationally difficult enough tasks, we would add another layer (layer 3) for specialised workloads and presentation layers. Layer 2 and 3 data may also be synced into data warehouses like ClickHouse.
This gave us complete lineage of data (no more mystery tables, no more “where did you get this data from”, etc), and the storage itself is reasonably cheap. Many services can query these storage layers directly, so setting up views, or projections into different layouts - even for huge quantities of data- becomes feasible and achievable with no more engineering effort than a query.
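As a rough illustration of that tiering (bucket names and prefixes all invented):

```python
# Layer 1: raw, verbatim copies of source-system data, serialised as-is.
RAW = "s3://acme-data/raw/crm/orders/ingest_date=2023-09-01/"

# Layer 2: cleaned, typed, schema-enforced data that analysts query directly.
REFINED = "s3://acme-data/refined/orders/"

# Layer 3: purpose-built projections for specialised workloads and presentation.
SERVING = "s3://acme-data/serving/finance_daily_revenue/"
```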
Was it a lot? Yep. Would I recommend or do it everywhere? Absolutely, 100% no I would not. Was it a good fit for that org? Yeah, arguably better than they could utilise, but for them, other approaches were anaemic and fragile at best.
Could it be done simpler now? Yep, but it got the job done then haha.
Edit to add: it was also language agnostic, which was a huge win and is an understated part of these new parquet-based solutions: you’re no longer limited to “fragile python app” or “spark cluster” to interact with your data. Rust, C#/F#, various FE tools for JS/TS (cube, etc). This is a huge win because you’re no longer tied to keeping around an aging Spark/Hadoop cluster that has gradually encrusted more garbage into it until it’s this massive, ultra-fragile time bomb nobody dares touch that powers mass amounts of back-office business needs.
I didn't miss it; it's irrelevant.
It makes almost no difference in practice between a competent implementation in one and a competent implementation in the other.
It makes absolutely no difference that they are open source.
Understanding the details of each of the individual components will give you no meaningful insight into how to build a lakehouse.
...because, when you slap all those parts together, in whatever configuration you've picked what you end up with is a database.
A big, powerful cloud database.
Well, you have a database now, and you still have zero insights and zero idea how to get any of them; that's because you didn't understand that you need to build some kind of data warehouse on top of that database. You need to load the data. You need to transform the data. You need to visualize the data and build reports on it. If you're good, you probably need to preprocess the data to use as training inputs.
I'll say it more clearly and explicitly one. more. time:
- Having a database != having a data warehouse.
- Having a big cloud database built out of cloud storage, table formats, metadata engines and query engines != a lakehouse.
Having an empty database is of no value to anyone, no matter how good it is.
All of those parts, all of those things are only the first step. It's like installing postgres. Right, good job. We're done here? Reports? Oh, you can probably import something or something or I know, powerBI is good, let's install that. It'll tell you you have no data... but... we've got the infra now right? Basically done.
It's just step 1.
We interface with BigQuery (via Airflow) mostly, and except for one very annoying situation it's a big improvement in terms of speed (parsing floats after querying the DB is NEVER a good option).
---
In case anyone's wondering, it's basically storing and loading native numpy arrays in BigQuery via the python client(s).
You have a bunch of options (assume you have one or more cols with float32 numpy arrays):
- dataframe -> to_parquet -> upload to GCS -> GCSToBigQueryOperator (https://airflow.apache.org/docs/apache-airflow-providers-goo...)
-> instead of storing as `FLOAT, REPEATED`, it will be stored as a STRUCT with a structure of `list>item` OR `list>element` (pyarrow==11 or pyarrow==13). This requires manual parsing from the 'json structure' you get when querying the DB back to np.array -> slow, and basically you are using CSVs again.
-> Read more: https://stackoverflow.com/questions/68303327/unnecessary-list-item-nesting-in-bigquery-schemas-from-pyarrow-upload-dataframe
-> set the schema before uploading? Nope, all values will be uploaded as null in BQ.
- dataframe -> bigquery.Client -> upload the dataframe from python - very slow; you need to batch your data (imagine 24h vs 5 minutes kind of slow as dataframe sizes increase, plus the necessity to keep all data in memory or batch it, with an extra save/load of each batch before uploading)
- arrays are stored properly
- solution: you must do 2 things, one on the pyarrow side and one on the BigQuery side - `df.to_parquet(..., use_compliant_nested_type=True)` (in pyarrow==14 it's True by default, but airflow needs pyarrow==11, where it's False by default)
- use `enable_list_inference=True` (link: https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-parquet#list_logical_type)
- when both of these are true (i.e. save parquet files [to GCS] using the first flag and load parquet files [from GCS to BQ] using the other flag), arrays can be stored as (FLOAT, REPEATED) and queried as numpy arrays out of the box without any manual management (sketched below).
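Putting the whole working path together as a hedged sketch (bucket, dataset and table names invented; the upload of the file to GCS is elided):

```python
import numpy as np
import pandas as pd
from google.cloud import bigquery
from google.cloud.bigquery.format_options import ParquetOptions

df = pd.DataFrame({
    "id": [1, 2],
    "embedding": [np.float32([0.1, 0.2]), np.float32([0.3, 0.4])],
})

# pyarrow side: write nested lists in the Parquet-compliant layout
# (default from pyarrow 14, must be set explicitly on pyarrow 11-13).
df.to_parquet("batch.parquet", use_compliant_nested_type=True)
# ... upload batch.parquet to gs://my-bucket/batch.parquet ...

# BigQuery side: enable list inference so the column lands as FLOAT, REPEATED
# instead of a STRUCT wrapper.
client = bigquery.Client()
job_config = bigquery.LoadJobConfig(source_format=bigquery.SourceFormat.PARQUET)
job_config.parquet_options = ParquetOptions()
job_config.parquet_options.enable_list_inference = True
client.load_table_from_uri(
    "gs://my-bucket/batch.parquet", "my_dataset.my_table", job_config=job_config
).result()
```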
This took me like 1 week of debugging and reading source code, obscure SO comments, GH issues, etc.

The CSV format (or lack of one) is such a mess. You don't appreciate how much until you have to write a CSV parser and do real world support for it. Ugh.