Now that it's GA... it looks like that hasn't changed. Is the classic Python App Engine standard becoming a second-class citizen? Or was there some reason why this wasn't considered GA-worthy for Postgres?
Trying to understand whether, going forward, Google is pushing everyone to the flexible environment - I would really have expected connectivity between these two products.
You can connect to postgres from app engine standard... as long as it's Java. See this doc https://cloud.google.com/appengine/docs/standard/java/cloud-...
And no, appengine standard is not a second class citizen. Hand-wave-ily, the connectivity path that flex uses works for postgres with minimal changes, but unfortunately some additional work is required to get appengine standard working with postgres for the other languages. :(
This seems like a major omission, and AWS has had this for ages.
https://cloud.google.com/sql/docs/postgres/connect-external-...
> You can grant any application access to a Cloud SQL instance by authorizing the IP addresses that the application uses to connect.
> You can not specify a private network (for example, 10.x.x.x) as an authorized network.
> PostgreSQL instances support only IPv4 addresses. They are automatically configured with a static IP address.
Related to postgres: we have many, many concurrent connections, but a load satisfied by an n1-standard-4 at the moment. Do you recommend a connection pooler or something else to help us get down to the 100-200 connections we'd need to be at to use Cloud SQL?
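A connection pooler in transaction mode is the usual answer here. As a rough sketch (the database name, host IP, and pool sizes below are placeholders, not recommendations), a PgBouncer config might look like:

```ini
; pgbouncer.ini -- illustrative values only; tune for your workload
[databases]
; "appdb" is a placeholder name; host points at the Cloud SQL IP
appdb = host=10.0.0.5 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling lets thousands of client connections share
; a small number of server connections
pool_mode = transaction
; cap the connections actually opened against the database
default_pool_size = 20
max_client_conn = 2000
```

Your apps then connect to port 6432 instead of Postgres directly. Note that transaction pooling breaks session-level features (server-side prepared statements, advisory locks, SET), so check your driver's settings first.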
- Google Cloud SQL (PostgreSQL)
- Citus Cloud (AWS Only)
- Citus (managed ourselves) on GCP
While things are good for the most part, a couple of serious problems related to connectivity have us completely boggled. We're connecting from Google Kubernetes Engine, which seems like it should be a standard combination, but we run into constant problems that we've dumped many hours into debugging.
We still haven't figured this problem out. I've found Google's docs to be very weak. A lot of the troubleshooting tips are not very helpful (and can consist of unhelpfully broad strokes like "be sure to use indexes!"). Because Google Cloud is not as popular as AWS, there is less community guidance from others, and what guidance does exist is often in forum threads that feel less than reputable. There's a big push to get you to talk to sales reps who are not technically knowledgeable and just try to upsell.
Very frustrating. Unclear if moving back to AWS, or hosting our own Postgres, would help.
Highly recommended if you want a fast and full-featured managed DB service.
Minor: less downtime for maintenance, point-in-time restore
The rest is summed up here: https://news.ycombinator.com/item?id=16872723
I started off with heroku, and they don't support the same set of extensions:
https://cloud.google.com/sql/docs/postgres/extensions
https://devcenter.heroku.com/articles/heroku-postgres-extens...
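If you're evaluating a move between providers, one quick sanity check is to list what your current database actually uses and compare that against the provider's supported-extensions page. A minimal sketch using the standard Postgres catalogs (run from psql against the existing database):

```sql
-- Extensions currently installed in this database
SELECT extname, extversion FROM pg_extension ORDER BY extname;

-- Extensions available to install on this server
SELECT name, default_version FROM pg_available_extensions ORDER BY name;
```

Anything in the first list that's missing from the target provider's supported list is a migration blocker worth knowing about up front.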
Thanks a lot for the information
Can you point me to the complaints? I will take a look.
Looks like a preset list of extensions. I'd assume custom extensions would be very difficult to support in managed postgres.
Systems are built for extension; not allowing it deprives them of an essential quality.
[1] https://cloudplatform.googleblog.com/2017/11/Cloud-SQL-for-P...
Intuition tells me that you might get better performance if you let the DB itself do the replication, but I can't really justify that without a real review of what happens.
The postgres docs (https://www.postgresql.org/docs/10/static/different-replicat...) say that the WAL solution has no "Master server overhead" in contrast to the File System Replication solution, but it's not explained and I'm not sure what is meant by that.
I guess with a block-device-based solution, recovery takes longer, because failover means you have to actually mount the block device (since no two machines can mount it read-write at the same time) and then start the DB (or, in a more basic implementation, just boot the entire second machine as part of failover), while with WAL streaming both postgres instances would already be running. So failover would be faster with WAL streaming?
It would be great if somebody from GCP could elaborate on what the tradeoffs are here, how long failover takes, and whether we can expect similar performance and behaviour as with WAL shipping.
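For reference, the WAL-streaming setup being compared against is roughly this (a minimal sketch for PostgreSQL 10, with a placeholder hostname and user; a managed service would wire this up for you):

```
# postgresql.conf on the primary
wal_level = replica
max_wal_senders = 3

# recovery.conf on the standby (PostgreSQL 10; these settings moved
# into postgresql.conf as of PostgreSQL 12)
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
```

Because the standby is already running and continuously replaying WAL, failover is essentially just promoting it (`pg_ctl promote`), rather than mounting a disk, doing crash recovery, and cold-starting the server.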
[Update] That said, from what I understand, they have a roadmap for maintaining read replicas and queued writes. Not sure what the date on it is, though.
A major client of mine migrated to AWS because of this and other issues.
To be fair, we wouldn't use GCP for anything but virtual servers and storage replication... I have no desire to tie us to Google's infrastructure any more than necessary.
Were your master and standby in the same availability zone? Can't you set different maintenance windows? WTF?
https://cloud.google.com/sql/faq#maintenancerestart
According to the link above, it looks like you can stagger your maintenance windows.
Right now basically the options are Aurora, Citus, and running CockroachDB yourself.
Cloud SQL has only been available in regions with at least three zones (since we believe that is the minimum to make sure we can maintain HA in the event of a single zone failure). asia-southeast1 currently only has two zones, when a third zone is launched, Cloud SQL will become available in that region.
When you say you couldn't get the root certs to work... what do you mean?
Cloud SQL automatically generates server certificates, and we offer UI+API for creating additional client certificates. The two should not share a root CA.
Yes, you can use both standard and SSD persistent disks. If you create a larger instance with more vCPUs and a big enough disk, you can achieve greater than 240 MB/s; see the docs:
https://cloud.google.com/compute/docs/disks/performance#ssd-...
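The scaling works roughly like this: per-GB throughput accumulates until it hits a per-VM cap that rises with vCPU count. A small sketch of that model (the constants below are illustrative placeholders, not GCP's published numbers; check the linked performance page for current values):

```python
def pd_ssd_read_throughput(disk_gb: int, vcpus: int,
                           per_gb_mbps: float = 0.48,
                           small_vm_cap: float = 240.0,
                           large_vm_cap: float = 800.0) -> float:
    """Model PD-SSD read throughput in MB/s: per-GB scaling up to a
    per-VM cap that depends on vCPU count. Constants are illustrative."""
    cap = large_vm_cap if vcpus >= 16 else small_vm_cap
    return min(disk_gb * per_gb_mbps, cap)

# A big disk on a small VM is capped by the VM, not the disk:
print(pd_ssd_read_throughput(2000, 4))    # 240.0
# The same disk on a 16-vCPU VM gets the higher cap:
print(pd_ssd_read_throughput(2000, 16))   # 800.0
```

So to exceed 240 MB/s you need both: enough disk capacity to earn the throughput and enough vCPUs to raise the cap.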
A db.t2.small instance should compare to a db-pg-g1-small instance. Pricing is around $90 on AWS vs $93 on GCP.
I’m an AWS consultant so I could have messed up on the GCP instance type.