My concern with DB startups is always the business model. There’s a massive tension between open source and a sustainable business, and it’s especially acute with deeply technical products. Either the project is too open and dies because AWS eats its lunch, or it’s too closed to be useful for self-hosting, too complex to self-host, or priced too high for small players. What’s the strategy to build and grow the business in the short to medium term?
* Professional support, including on-prem hosting when applicable
* Additional features that enterprises care about (encrypted databases, SSO)
* Compliance documentation/certifications
DuckDB Labs: The core contributors. Instead of developing features that will be behind a paywall, they provide support and consulting.
DuckDB Foundation: A non-profit that ensures DuckDB remains MIT licensed.
This can work because SQLite is deliberately a very small yet very high impact project.
Very few projects (unfortunately) can boast that.
We use Hasura as the read engine. That updates a graph of MobX objects that drive the UI. We apply updates directly to those objects so the UI updates immediately. The mutations are posted back to a Python API that applies them to the db.
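A minimal sketch of that optimistic-update flow (names like `TicketStore` and the mutation shape are hypothetical; the real app uses MobX observables and posts to a Python API):

```typescript
type Ticket = { id: string; title: string; status: string };

// Stand-in for the MobX object graph: updates applied here
// re-render the UI immediately, before the server confirms.
class TicketStore {
  tickets = new Map<string, Ticket>();

  constructor(private post: (mutation: object) => Promise<void>) {}

  async updateStatus(id: string, status: string) {
    const ticket = this.tickets.get(id);
    if (!ticket) return;
    ticket.status = status;                        // 1. optimistic local apply
    await this.post({ op: "update", id, status }); // 2. persist via the API
  }
}
```

The point is the ordering: the local object is mutated first, so the UI never waits on the round trip.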
I’ve looked at Electric because we’ve had to recreate some of what they do to interface with Hasura. At the moment it’s a non-starter because we use Postgres views to shape the data for the frontend.
This seems like a good pattern, but of lower value for a SaaS app with many customers storing private data in your service. This is because the cache hit-rate for any particular company's data would be low. Is this an accurate assessment, or did I misunderstand something?
It’s worth noting that Electric is still efficient on read even if you miss the CDN cache. The shape log is a sequential read off disk.
Say Jira used Electric: would you be able to put all tickets for a project behind a CDN cache key? You'd need a CDN that can run auth logic, such as verifying a JWT, to ensure you don't leak data to unauthorized users, right?
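Roughly, the edge check would look like this (hypothetical claim names; signature verification is omitted here, but a real edge worker must verify the signature with a JOSE/JWT library before trusting any claims):

```typescript
// Only serve the cached shape for a project if the caller's JWT
// actually grants access to that project. Assumes a hypothetical
// "projects" claim listing the project IDs the user can read.
function canReadProject(jwt: string, projectId: string): boolean {
  const parts = jwt.split(".");
  if (parts.length !== 3) return false;
  const payload = JSON.parse(
    Buffer.from(parts[1], "base64url").toString("utf8")
  );
  return Array.isArray(payload.projects) && payload.projects.includes(projectId);
}
```

If the check passes, the CDN can serve the shared cached response for that project's shape; otherwise it returns 403 without touching the cache.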
Show HN: ElectricSQL, Postgres to SQLite active-active sync for local-first apps - https://news.ycombinator.com/item?id=37584049 - Sept 2023 (171 comments)
It's dramatically faster to read from a local copy of data vs. sending the query to Postgres and can eliminate a lot of network and db load as well.
I work as part of the team building Ably LiveSync, a competitor in this space. Our postgres connector[0] works in the opposite way by taking rows from an 'outbox' table and fanning them out to subscribed clients over websockets in realtime.
Here's the thing: for any meaningfully complicated data in a relational database, the data is likely to be normalised across tables with relations, and there are going to be joins. So I'm really curious how people are making single-table sync work. (Maybe this is where Electric imagines the future of shapes, solving for joins.)
In LiveSync we do it the other way around: instead of having a live-query-style subscription to a table (like ElectricSQL), we listen for someone writing a row to the 'outbox' table, and that row is automatically sent to subscribers. This means you write your denormalised data directly to the outbox and it's sent on to clients, rather than writing your normalised data to tables but being limited by single-table sync. Or worse, your clients having to subscribe to multiple sync streams and trying to stitch the data back together. We opted to have the write side insert the denormalised data, rather than having the read side stitch normalised data back together.
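The outbox flow, roughly (an in-memory sketch with hypothetical names; in the real connector the outbox is a Postgres table and fan-out happens over websockets):

```typescript
type OutboxRow = { channel: string; payload: object };

// The write path inserts an already-denormalised row into the outbox,
// and the connector fans it out to every subscriber on that channel.
class Outbox {
  private subscribers = new Map<string, ((payload: object) => void)[]>();

  subscribe(channel: string, handler: (payload: object) => void) {
    const list = this.subscribers.get(channel) ?? [];
    list.push(handler);
    this.subscribers.set(channel, list);
  }

  // Conceptually: INSERT INTO outbox (channel, payload) VALUES (...)
  insert(row: OutboxRow) {
    for (const handler of this.subscribers.get(row.channel) ?? []) {
      handler(row.payload);
    }
  }
}
```

Because the row is denormalised at write time, the client receives one self-contained message per change and never has to join streams.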
Electric will at some point try to solve multi-table sync, which isn't easy given how the Postgres replication protocol works. It's also not easy to imagine how shapes (which are currently defined on the fly in the query string) will adapt to multiple tables. There's going to be a tradeoff between a complex query string that stitches the normalised tables back together, and a 'shape' becoming an entity in Electric that defines how to stitch them back together (which you'd have to CRUD-manage, and update in Electric every time your schema changed).
You're right that "single-table sync" does have its limitations. At PowerSync we effectively support one level of "joins", and even then it's often not enough for more complex schemas. An older version of ElectricSQL did also actually have multi-table shape sync support, but I believe doing that at scale proved to be difficult.
One solution to this is often denormalizing data - either adding more denormalized columns in the existing table, or creating new tables dedicated to sync data. Conceptually, keeping these tables up to date is not that different from writing updates to an outbox table.
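A sketch of that denormalise-on-write idea (the schema and names here are hypothetical): on every write to the normalised tables, the write path also upserts a pre-joined row into a table that exists purely for sync.

```typescript
type TicketRow = { id: string; projectId: string; title: string };
type ProjectRow = { id: string; name: string };

// Build the pre-joined row for the dedicated sync table. Conceptually
// this is like writing to an outbox, except the row is upserted and
// kept current rather than appended as an event.
function denormalise(ticket: TicketRow, project: ProjectRow) {
  return {
    ticketId: ticket.id,
    title: ticket.title,
    projectName: project.name, // duplicated so the client needs no join
  };
}
```

Single-table sync on the denormalised table then gives clients everything they need in one stream, at the cost of keeping the duplicate columns up to date.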
I'm also interested in seeing what Zero comes up with in the space. They seem to have solved doing multi-table query sync, but it remains to be seen how well that works in practice.
PGlite is coming — we have a new WASI build that is the basis for native mobile support. (It’s working in dev, but still needs some more polishing and bindings).
I fondly remember the days of Meteor (before the pivot to Apollo), where you'd give up SQL and in return be able to give every user a real-time live-updating data model, kept in sync with a secure subset of the central MongoDB database. Now, you don't need to give up the SQL part, nor are you locked into an entire ecosystem. We're going to see really cool things built on this.
The part where you can use Drizzle client-side was really what interested me; I don't want to bother learning another new query language apart from SQL.
Live demo: http://linearlite.examples.electric-sql.com
Code: https://github.com/electric-sql/electric/tree/main/examples/...
Caught a few typos on your site as well fyi, https://triplechecker.com/s/153718/electric-sql.com
I've been researching 'local first' solutions like Electric recently and have tried out PowerSync, Triplit and Instant so far. All three of these solve for both reading and writing to databases, with offline support.
Wondering if you have plans to support writes too.
There's a variety of valid patterns for writes & we don't want to be prescriptive about how you do them. We aim instead to help you easily use Electric with your preferred/required write pattern.
It's possible we'll write our own write library but it's equally likely we'll continue to find it better to partner with other projects building excellent libraries (like we are with https://livestore.dev/).
Before these sorts of projects, you'd have to roll your own custom sync engine which I've found to be surprisingly difficult when you factor in multiple devices.