We store Splitgraph "image" (schema snapshot) metadata in PostgreSQL itself, and each image has a timestamp, so you can create a PG index on that column to quickly find the image that was valid at a given time.
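To illustrate the "image valid at a certain time" lookup, here is a toy sketch in Python (names are hypothetical; in PostgreSQL this would be a single indexed B-tree lookup): given images sorted by timestamp, the valid image is the latest one whose timestamp is at or before the requested time.

```python
import bisect
from datetime import datetime

# Hypothetical list of (image_id, timestamp) pairs, sorted by timestamp.
images = [
    ("img_a", datetime(2020, 1, 1)),
    ("img_b", datetime(2020, 2, 1)),
    ("img_c", datetime(2020, 3, 1)),
]
timestamps = [ts for _, ts in images]

def image_at(when):
    """Return the latest image whose timestamp is <= `when`, or None."""
    i = bisect.bisect_right(timestamps, when)
    return images[i - 1][0] if i else None

print(image_at(datetime(2020, 2, 15)))  # img_b
```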
Each image consists of tables, and each table is a set of possibly overlapping objects, or "chunks". If two chunks contain a row with the same PK, the row from the later chunk takes precedence. Within these constraints, you can store table versions however you want -- e.g. as one big "base" chunk plus multiple deltas (least storage, slowest querying) or as multiple big chunks (faster querying, more storage).
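A minimal sketch of the precedence rule, assuming (purely for illustration) that a chunk is a dict mapping PK to row: chunks are applied in order, and for rows sharing a PK, the later chunk wins.

```python
def materialize(chunks):
    """Merge an ordered list of chunks ({pk: row} dicts) into one table."""
    table = {}
    for chunk in chunks:
        table.update(chunk)  # later chunks overwrite earlier rows per PK
    return table

base = {1: ("alice", 30), 2: ("bob", 25)}
delta = {2: ("bob", 26), 3: ("carol", 41)}  # updates pk=2, inserts pk=3
print(materialize([base, delta]))
# {1: ('alice', 30), 2: ('bob', 26), 3: ('carol', 41)}
```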
You can query tables in two ways. First, you can perform a "checkout". Like Git, this replays changes to a table in the staging area and turns it into a normal PostgreSQL table with audit triggers attached. You get the same read performance as stock PostgreSQL (and can create whatever PG indexes you want to speed it up). Write performance is about 2x slower than normal PostgreSQL, since every change has to be mirrored by the audit trigger. When you "commit" the table (a Splitgraph commit, not a Postgres commit), we grab those changes and package them into a new chunk. The tradeoff is that you pay the initial checkout cost upfront.
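The commit step can be sketched roughly like this (a toy model, not Splitgraph's actual code: the audit-log format and the pk -> None convention for deletes are assumptions for illustration) -- the changes recorded by the audit trigger since checkout get folded into one new delta chunk.

```python
def commit(audit_log):
    """Package an audit log of (op, pk, row) entries into a delta chunk.

    Assumed representation: inserts/updates map pk -> row,
    deletions map pk -> None.
    """
    delta = {}
    for op, pk, row in audit_log:
        delta[pk] = None if op == "DELETE" else row
    return delta

log = [
    ("INSERT", 3, ("carol", 41)),
    ("UPDATE", 2, ("bob", 26)),
    ("DELETE", 1, None),
]
print(commit(log))
# {3: ('carol', 41), 2: ('bob', 26), 1: None}
```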
You can also query tables without checking them out (we call this "layered querying" [2]). We implemented it as a read-only foreign data wrapper, so all PG clients still support it. In layered querying, we find the chunks the query requires (using bloom filters and other metadata), direct the query to those chunks and assemble the result. The cool thing is that you don't need the whole table history on your machine: you can store some chunks on S3 and Splitgraph will download them behind the scenes as required, without interrupting the client. Especially for large tables, this can be faster than PostgreSQL itself, since we are backed by a columnar store [3].
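Here is a toy illustration of bloom-filter chunk pruning (all names and parameters are made up for the example, not Splitgraph's implementation): each chunk carries a small filter over its PKs, and a point query only needs to touch chunks whose filter says the PK *might* be present.

```python
import hashlib

class Bloom:
    """Toy bloom filter: k hash positions over a small bit array."""

    def __init__(self, size=256, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, 0

    def _positions(self, key):
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        # False means definitely absent; True means "maybe" (false
        # positives are possible, false negatives are not).
        return all(self.bits & (1 << p) for p in self._positions(key))

# One filter per chunk; skip chunks whose filter rules the PK out.
chunk_filters = {}
for name, pks in [("chunk_a", [1, 2, 3]), ("chunk_b", [100, 101])]:
    bf = Bloom()
    for pk in pks:
        bf.add(pk)
    chunk_filters[name] = bf

to_scan = [n for n, bf in chunk_filters.items() if bf.might_contain(101)]
print(to_scan)  # chunk_b for sure; chunk_a only on a false positive
```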
[1] https://www.splitgraph.com/docs/getting-started/frequently-a...
[2] https://www.splitgraph.com/docs/large-datasets/layered-query...