I have 2 paid Scaler DBs on PlanetScale and I have no intention of moving elsewhere, but it does kill me that they both sit almost unused ~10 months out of the year (I have bursty traffic, and only during the in-person events the software is built for). At ~$348/yr per DB it's still a steal compared to managing it all myself, but I look at my usage (even during my "busy" months) and I barely make a dent in the usage tier I'm on. In fact, I think you could add up my usage for the lifetime of my account (both DBs) and it wouldn't amount to 1 month of the usage tier.
Again, I'm not complaining, and the cost is manageable, but I did create and sell some new software in the last year that I built on DynamoDB (in part to learn, in part due to costs). My software that uses PS is single-tenant, so I need 1 DB per client, which is on me; if I were able to rewrite it to be multi-tenant, then I'd only have to pay $348 total per year instead of per client.
All in all I have had nothing but good experiences with PlanetScale, from the product itself to the support staff. I love the migrations and the rollback support; it feels natural when you start using it, and dealing with migrations in other DBs feels like a huge pain once you've done it in PlanetScale.
Would you happen to know roughly how many simultaneous connections $348/yr would buy me on PlanetScale?
One of my clients hovers around 50 simultaneous connections to their main MySQL server on a normal day, but they have bursts of 3k simultaneous connections for an entire day, twice per year.
Their workload is about 5% writes (INSERTs, UPDATEs) and 95% reads (SELECTs).
disclosure: not an employee of or investor in Supabase, but I sure am a fan.
Edit: Looking again, I think the instance used for comparison is `db.r6gd.xlarge` from the Multi-AZ deployments (two standbys) list. That is $1.445/hr, or about $1,054/month. The difference could be storage and I/O.
However, PS Scaler Pro storage is $1.50/GB, which is quite a lot. General purpose storage in AWS is only $0.115/GB. The comparison table uses only 10 GB, but if the DB size is 1 TB, wouldn't RDS be a lot cheaper?
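A quick sanity check of that 1 TB scenario (a Python sketch; the plan names and per-GB prices are just the ones quoted in this thread, so treat them as approximate and point-in-time):

```python
# Back-of-envelope storage cost comparison; ignores compute, IOPS
# and backup charges. Prices are $/GB-month as quoted in the thread.
PRICES_PER_GB = {
    "PlanetScale Scaler Pro": 1.50,
    "RDS general purpose": 0.115,
}

def monthly_cost(db_size_gb: float, price_per_gb: float) -> float:
    """Monthly storage-only cost for a database of the given size."""
    return db_size_gb * price_per_gb

# At the comparison table's 10 GB the gap is pocket change,
# but at 1 TB storage alone starts to dominate:
for name, price in PRICES_PER_GB.items():
    print(name, monthly_cost(1000, price))
# PlanetScale Scaler Pro 1500.0
# RDS general purpose 115.0
```

Even granting that compute usually dominates the bill, a roughly $1,385/month storage gap at 1 TB is hard to ignore.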
Please correct me if I got something wrong, I'm sure there's stuff I'm missing.
The value I found is being able to do multi-region read replicas with no compute overhead for lower-traffic geos.
I like the idea of PS and have toyed around with the idea of migrating to it but there are some glaring issues I don’t want to deal with:
- no native way to export backups and avoid vendor lock-in (or you pay for the row reads to generate regular backups yourself)
- contradictory cost model. Their pricing page reads “Every time a query retrieves a row from the database, it is counted as a row read. Every time a row is written to the database, it is counted as a row written.” while their docs state “Rows read is a measure of the work that the database engine does, not a measure of the number of rows returned. Every row read from a table during execution adds to the rows read count, regardless of how many rows are returned.”
It’s very common for scale-out architectures to read more data than is ultimately returned, because data is pulled from individual shards and then some centralized filtering / post-processing is applied in an API middleware layer.
Trying to fix that by pushing more of the query execution down to the shards is sometimes, but not always, feasible or practical.
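To make the rows read vs. rows returned distinction concrete, here's a hypothetical example (table, column, and index names are made up):

```sql
-- Hypothetical table: 1M rows, no index on `status`.
-- The query returns at most 10 rows, but the engine must scan
-- every row to evaluate the WHERE clause, so under the quoted
-- definition it would count ~1,000,000 rows read, not 10.
SELECT id, total
FROM orders
WHERE status = 'refunded'
LIMIT 10;

-- With an index, the engine only reads the matching rows, so the
-- rows read count drops to roughly the number of matches.
CREATE INDEX idx_orders_status ON orders (status);
```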
GP storage is pretty slow, relatively speaking. I hope PS isn't giving us that out of the box and provisions more IOPS, etc.
AWS bills for absolutely everything, as PS mentioned in the article. That can include temporary files and the like, which can take up a large chunk of your storage.
> because it doesn't specify the exact instance type used on the AWS side
That is problematic in a way, because CPU generations matter, and even if PS is using the "best" instances currently, how do we know they'd keep that up? Prices could swing the other way quite easily.
Per-GB storage prices for comparison:

- PlanetScale: $2.50 on Scaler, $1.50 on Scaler Pro
- RDS: $0.11 for general purpose ($0.125 for provisioned, plus the IOPS you use; double that for Multi-AZ)
- Supabase: $0.125
- Firebase: $0.1725
- DynamoDB: $0.25
- MongoDB Atlas serverless: $0.25
- CockroachDB serverless: $0.50
- FaunaDB: $1.00
- Neon: says $0.000164 per GiB, but something seems off; it's not on the same scale, so I'm guessing there's a catch here
Yes, there's no comparison of IOPS or anything else, including what counts as storage. They all count slightly differently, so unless you account for it all, it's a bit moot.
AWS alone has different tiers of storage, from HDD to NVMe SSD. The pricing varies greatly.
RDS will literally count anything, including temporary files, as storage usage. PlanetScale claims not to.
While you're not necessarily missing something, it's worth pointing out that storage costs usually don't dominate the bill; compute costs are usually higher. But with the PlanetScale storage price being more than 10x that of the others, it's definitely something to keep in mind, and it can dominate the bill.
I don't believe that will ever happen, due to the underlying Vitess tech [0] that PlanetScale uses.
Are there any public discussions of the additional trade-offs Vitess would have to make to enable foreign keys?
Neon is a bit different from PlanetScale. With PlanetScale, they're running Vitess under the covers. Vitess is a proxy that sits between you and the MySQL database. It means they can do fancy things like rewrite queries on the way to the database or route them differently. For example, a "SELECT" query could be sent to a replica instead of the primary. Likewise, if you've sharded/partitioned your data by customer_id, Vitess can see the query says "WHERE customer_id = 5" and send it to the correct server. This proxy also means that Vitess can seamlessly manage some things for you, like failing over to a new database or bringing a new replica online.
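For a sense of how Vitess knows where to route, here's a minimal VSchema sketch (a hypothetical keyspace; the `orders` table and `customer_id` column are illustrative) that shards rows by a hash of customer_id:

```json
{
  "sharded": true,
  "vindexes": {
    "hash": { "type": "hash" }
  },
  "tables": {
    "orders": {
      "column_vindexes": [
        { "column": "customer_id", "name": "hash" }
      ]
    }
  }
}
```

With something like this in place, the proxy (vtgate) can send `WHERE customer_id = 5` to a single shard instead of scattering the query to all of them.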
Neon decouples storage from the database processing. While Vitess still has MySQL storing the data, Neon changes the storage layer so that it can separate data processing from data storage. This is somewhat like Amazon's Aurora.
If you're interested in other PostgreSQL compatible options, CockroachDB is PostgreSQL compatible, but is more of a fully distributed database from the ground up rather than new layers like Neon or PlanetScale.
At a glance, their Data Processing Addendum[0] seems to address GDPR, along with country-specific regulations like Swiss DPA and UK GDPR