The Insights tab also surfaced missing indexes, which we added and which sped things up further. Early days, but so far so good.
Sure, you can click around to figure it out, but this always annoys me. It's as if everyone is supposed to already know what your product is and does, and all your service names. Put it front and center at the top!
> Our mission is simple: bring you the fastest and most reliable databases with the best developer experience. We have done this for 5 years now with our managed Vitess product, allowing companies like Cursor, Intercom, and Block to scale beyond previous limits.
> We are so excited to bring this to Postgres. Our proprietary operator allows us to bring the maturity of PlanetScale and the performance of Metal to an even wider audience. We bring you the best of Postgres and the best of PlanetScale in one product.
Seriously??
> What is PlanetScale for Postgres?
> Our mission is simple: bring you the fastest and most reliable databases with the best developer experience. We have done this for 5 years now with our managed Vitess product, allowing companies like Cursor, Intercom, and Block to scale beyond previous limits.
> PlanetScale is the world’s fastest relational database platform. We offer PostgreSQL and Vitess databases that run on NVMe-backed nodes to bring you scale, performance, reliability, and cost-efficiencies — without sacrificing developer experience.
> PlanetScale is a relational database platform that brings you scale, performance, and reliability — without sacrificing developer experience.
> We offer both Vitess and PostgreSQL clusters, powered by locally-attached NVMe drives that deliver unlimited IOPS and ultra-low latency.
> PlanetScale Metal is the fastest way to run databases in AWS or GCP. With blazing fast NVMe drives, you can unlock unlimited IOPS, ultra-low latencies, and the highest throughput for your workloads.
> The world’s fastest and most scalable cloud databases PlanetScale brings you the fastest databases available in the cloud. Both our Postgres and Vitess databases deliver exceptional speed and reliability, with Vitess adding ultra scalability through horizontal sharding.
> Our blazing fast NVMe drives unlock unlimited IOPS, bringing data center performance to the cloud. We offer a range of deployment options to cover all of your security and compliance requirements — including bring your own cloud with PlanetScale Managed.
Ironically, the _how_ is a major topic of the very page you started on (the blog).
Have some agency.
If you are interested in their new technology that extends hosted Postgres, check out Neki: https://www.neki.dev/
> To create a Postgres database, sign up or log in to your PlanetScale account, create a new database, and select Postgres.
It does mention the sign-up option, but doesn't really give me much context about pricing or what the product is. I know a bit, but I get confused by the different database offerings, so it seems like a missed opportunity to give me two more sentences of context and some basic pricing: what's the easiest way for me to try this if I'm curious?
On the pricing page I can start selecting regions and moving sliders to build a plan from $39/month and up, but I couldn't easily find an answer to whether there's a free trial or a cheaper way to 'give it a spin' without committing.
It's designed for businesses that need to haul ass
> It's designed for businesses that need to haul ass
Could you elaborate what you meant by this for my education?
Also totally OK if PlanetScale doesn't do this and $39/month _is_ the best way to try them out; I just think it would be good for them to make explicit in the article what I should do if I think I might want it but want to try it first.
* Do you support something like Aurora Fast Cloning (whether a true CoW fast clone or detaching a replica _without_ promoting it into its own cluster / branch with its own replicas, incurring cost)?
* Can PlanetScale Postgres set `max_standby_streaming_delay` to an indefinite amount?
* The equivalent of Aurora blue/green would be to make a branch and then switch branches, right?
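For reference on the second bullet: on self-hosted Postgres, the "indefinite" behavior is a one-line standby setting. Whether a managed host exposes it is exactly the question (the values below are the vanilla-Postgres knob, not anything PlanetScale has documented):

```ini
# postgresql.conf on a standby (vanilla Postgres):
# -1 means "wait forever" — queries on the replica are never cancelled
# because of conflicting WAL replay, at the cost of potentially
# unbounded replication lag on that standby.
max_standby_streaming_delay = -1
```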
We have not made `max_standby_streaming_delay` configurable yet. What's your use case?
I don't fully parse your question about blue/green. Can you expand on it, please? Is this for online upgrades?
For context we are on Aurora Postgres right now, with several read replicas.
I did an interview all about PlanetScale Metal a couple of months ago: <https://www.youtube.com/watch?v=3r9PsVwGkg4>
For example, MySQL was easier to get running and connect to. These cloud offerings (Planetscale, Supabase, Neon, even RDS) have solved that. MySQL was faster for read heavy loads. Also solved by the cloud vendors.
MySQL pros:
The MySQL docs on how the default storage engine, InnoDB, locks rows to support transaction isolation levels are fantastic. [1] They can help you architect your system to avoid lock contention, or understand why existing queries may be contending for locks. As far as I know, Postgres does not have docs like that.
MySQL uses direct I/O, bypassing the OS page cache in favor of its own buffer pool [2]. Postgres doesn't use direct I/O, so the OS page cache duplicates pages (the "double buffering" problem), which makes it harder to estimate how large a dataset you can keep in memory. They are working on it, though. [3]
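The practical upshot shows in how the two are conventionally tuned (values below are illustrative for a hypothetical 16 GB box, not recommendations):

```ini
# MySQL (my.cnf): InnoDB bypasses the OS page cache, so the buffer
# pool is roughly the whole memory budget for data:
[mysqld]
innodb_flush_method = O_DIRECT
innodb_buffer_pool_size = 12G

# Postgres (postgresql.conf): shared_buffers shares memory with the
# kernel page cache, so it is conventionally set well below total RAM
# and the kernel caches the rest — the "double buffering" above:
# shared_buffers = 4GB
# effective_cache_size = 12GB
```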
If you delete a row in MySQL and then insert another, MySQL will look through the page for empty slots and insert there, which keeps your pages compact. Postgres will always insert at the bottom of the page. If you have a workload that deletes often, Postgres will not use memory as efficiently because the pages are fragmented, and you will have to run the VACUUM command to compact them. [4]
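You can see the delete-then-VACUUM dynamic in miniature with SQLite (a different engine, but the same idea of dead space lingering until a vacuum reclaims it; this is an illustration, not a Postgres benchmark):

```python
import sqlite3

# Fill a table, delete everything, and watch free pages accumulate
# until VACUUM rebuilds the file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("a" * 500,) for _ in range(5000)])
conn.execute("DELETE FROM t")
conn.commit()

before = conn.execute("PRAGMA freelist_count").fetchone()[0]  # free pages left behind
conn.execute("VACUUM")
after = conn.execute("PRAGMA freelist_count").fetchone()[0]   # reclaimed

print(before, after)  # before > 0, after == 0
```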
Vitess supports MySQL[5] and not Postgres. Vitess is a system for sharding MySQL that, as I understand it, is much more mature than the sharding options for Postgres. Obviously this GA announcement may change that.
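For anyone unfamiliar with what sharding buys you: the core idea is just routing each row to one of several backing databases by hashing its sharding key. This is a toy sketch of that idea only; Vitess's vindexes are far more sophisticated and use different hash functions:

```python
import hashlib

def shard_for(user_id: int, num_shards: int = 4) -> int:
    """Toy hash-sharding router: map a sharding key to a shard index.
    (Illustrative only — not Vitess's actual algorithm.)"""
    digest = hashlib.md5(str(user_id).encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# The same key always routes to the same shard, so single-shard queries
# stay cheap; cross-shard queries require scatter-gather.
assert shard_for(42) == shard_for(42)
assert all(0 <= shard_for(i) < 4 for i in range(1000))
```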
Uber switched from MySQL to Postgres only to switch back. It's a bit old but it's worth a read. [6]
Postgres pros:
Postgres supports 3rd party extensions which allow you to add features like columnar storage, geo-spatial data types, vector database search, proxies etc.[7]
You are more likely to find developers who have worked with Postgres.[8]
Many modern distributed database offerings target Postgres compatibility rather than MySQL compatibility (YugabyteDB[9], AWS Aurora DSQL[10], pgfdb[11]).
My take:
I would highly recommend you read the docs on InnoDB locking then pick Postgres.
[1] https://dev.mysql.com/doc/refman/8.4/en/innodb-locking.html
[2] https://dev.mysql.com/doc/refman/8.4/en/memory-use.html
[3] https://pganalyze.com/blog/postgres-18-async-io
[4] https://www.percona.com/blog/postgresql-vacuuming-to-optimiz...
[6] https://www.uber.com/blog/postgres-to-mysql-migration/
[7] https://www.tigerdata.com/blog/top-8-postgresql-extensions
[8] https://survey.stackoverflow.co/2024/technology#1-databases
I've done extensive work on improving the Postgres B-Tree code, over quite a number of releases. I'm not aware of any problems with high-insert workloads in particular. I have personally fixed a number of subtle issues that could lead to lower space utilization with such workloads [1][2] in the past, though.
If there's a remaining problem in this area, then I'd very much like to know about it.
[1] https://www.youtube.com/watch?v=p5RaATILoiE [2] https://speakerdeck.com/peterg/nbtree-arch-pgcon
Postgres is involved somehow. I get that.
the very first line:
> The world’s fastest and most scalable cloud databases
the second line:
> PlanetScale brings you the fastest databases available in the cloud. Both our Postgres and Vitess databases deliver exceptional speed and reliability, with Vitess adding ultra scalability through horizontal sharding.
i know exactly what they do. zero fluff. and, i'm now interested.
> Our mission is simple: bring you the fastest and most reliable databases with the best developer experience.
The product we are GA'ing today has the option of PlanetScale Metal, which is extremely fast and scales write QPS further than any of the other single-primary Postgres hosts.
https://planetscale.com/benchmarks/aurora
Seems a bit better, but they benchmarked on a fairly small database (500 GB on a db.r8g.xlarge).
We're presently in a migration for our larger instances on Heroku, but were able to test on a new product (fairly high writes/IOPs) and it's been nice to have more control vs. Heroku (specifically, ability to just buy more IOPs or storage).
Had one incident during the beta, which we believed we had caused ourselves, but within 5 minutes of pinging them they had multiple engineers on it to debug and resolve it quickly. For me, that's the main thing I care about with managed DB services, as most of the tech is commoditized at this point.
Just wish the migration path from Heroku was a tad easier (Heroku blocks logical replication on all instances) but pushing through anyway because I want to use the metal offering.
So yes, the data per-node is ephemeral, but it is redundant and durable for the whole cluster.
"We guarantee durability via replication". I've started noticing this pattern more, where distributed systems provide durability by replicating data rather than writing it to disk, achieving the best of both worlds. I'm curious:
1. Is there a name for this technique?
2. How do you calculate your availability? This blog post[1] has some rough details but I'd love to see the math.
3. I'm guessing a key part of this is putting the replicas in different AZs and assuming failures aren't correlated so you can multiply the probabilities directly. How do you validate that failures across AZs are statistically independent?
Thanks!
[1] https://planetscale.com/blog/planetscale-metal-theres-no-rep...
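For question 2, the naive back-of-the-envelope under the independence assumption from question 3 looks like this (the numbers are illustrative assumptions of mine, not PlanetScale's actual model):

```python
# Durability math for a 3-node cluster, one node per AZ: data is lost
# only if all three nodes fail at once. Multiplying probabilities
# directly assumes AZ failures are statistically independent — which is
# exactly the assumption question 3 is poking at.
p_node_down = 0.001            # assumed per-node unavailability (99.9%)
p_all_down = p_node_down ** 3  # independence assumption across AZs
availability = 1 - p_all_down

print(f"P(all three down) = {p_all_down:.1e}")
print(f"availability      = {availability:.9f}")
```

Correlated failures (a regional outage, a bad rollout hitting every AZ) break the multiplication, which is why the independence question matters.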
Reboots typically don't do anything special otherwise, unless they also trigger a host migration. GCP's live migration docs do mention some support, though.
GCP mentions data persists across reboots here https://cloud.google.com/compute/docs/disks/local-ssd#data_p...
Note that stop/terminate via cloud APIs usually releases host capacity to other customers and would trigger a data wipe; a guest-initiated reboot typically will not.
1. https://planetscale.com/pricing?architecture=x86-64&cluster=...
1. You say "ephemeral", but my understanding is that NVMe is non-volatile, so upon crash and restart we should be able to recover the state of the drive. Is it ephemeral because of how EC2 works, where you might not get that same physical box and drive back?
2. Can you explain what "Semi-synchronous replication" is? Your docs say "This ensures every write has reached stable storage in two availability zones before it’s acknowledged to the client." but I would call that synchronous since the write is blocked until it is replicated.
Thanks!
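On question 2, my reading of the terminology (a sketch of the general concept, not PlanetScale's implementation) is that the "semi" refers to how many replicas must have the write before the client's commit is acknowledged:

```python
def acknowledge_write(replica_acks: list[bool], mode: str) -> bool:
    """Can the client's commit be acknowledged yet?
    replica_acks[i] is True once replica i has the WAL record on
    stable storage."""
    if mode == "async":      # ack immediately; replicas lag behind
        return True
    if mode == "semi-sync":  # ack once at least ONE replica has it
        return any(replica_acks)
    if mode == "sync":       # ack only after EVERY replica has it
        return all(replica_acks)
    raise ValueError(mode)

# One of two replicas has the write: already durable in two AZs,
# so semi-sync acknowledges while fully-sync would still block.
assert acknowledge_write([True, False], "semi-sync") is True
assert acknowledge_write([True, False], "sync") is False
```

So the write is synchronous with respect to one replica (hence the durability claim about two AZs) but asynchronous with respect to the rest, which is where the "semi" comes from.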
I read the comments, and in one of them, comparing Supabase vs. PlanetScale Postgres, they mention that maybe you can start with a product like Supabase and move to PlanetScale once your project grows enough to justify that decision.
How would a migration from Supabase to PlanetScale even go, and at what scale would something like that actually be better, I wonder.
Great product though, and I hope PlanetScale's team doesn't get bored listening to all the requests for a free tier like mine; maybe I'm just addicted to those sweet freebies!
I'll try to create a product one day that needs a Supabase -> PlanetScale migration, so I'll know I've made it lol (jk)
Have a nice day
Ah, I overlooked the first sentence; I read only the headings, navigation, and footer:
> is now generally available and out of private preview