It’s so surprising that AWS and Azure don’t have an equivalent to GCP cloud run.
Being able to run a container on demand with a custom image with whatever language and dependencies you want is amazing.
I’ve spun up headless browsers for snapshotting and rendering a page like a real browser would. Cloud Run’s scale-from-zero is absolutely magical.
This is what serverless ought to be. Cloud Run + Firestore makes it possible to build pretty scalable apps with little effort that are pretty cheap to run.
I dream of a day when the underlying “cloud” is completely commoditized and interoperable, and your code will run wherever is cheapest, and the cost actually becomes less than self hosting.
For an app with low usage, you start getting a price that can compete with Heroku.
I don't think it's that simple to replace one service with another. It requires a lot of very delicate dealings with external customers. And one cannot (easily) apply the typical "Google deprecation" to a Cloud service offering, either.
So it's complicated. But your impression is not too far from the inclination.
One would assume the transition from App Engine to Cloud Run would be driven by establishing a pricing differential that incentivises moving off App Engine.
With the FastCGI paradigm, you have to spin up a separate process per simultaneous request.
Web server implementations may vary, but Apache at least allows sending many concurrent requests to a single FastCGI backend process: http://httpd.apache.org/docs/trunk/mod/mod_proxy_fcgi.html
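A minimal sketch of that setup, assuming a hypothetical FastCGI backend listening on localhost port 9000 (the path and port are made up for illustration):

```
# Forward all requests under /app/ to a single long-lived FastCGI
# backend process, which handles many concurrent requests itself
# (requires mod_proxy and mod_proxy_fcgi to be loaded)
ProxyPass "/app/" "fcgi://localhost:9000/"
```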
```
package main

import (
	"fmt"
	"log"
	"net/http"
)

// init runs once per container instance, so the DB connection set up
// here is reused across all requests served by that instance.
func init() { setupDBConnection(dbName) }

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hi there, I love %s!", r.URL.Path[1:])
}

func main() {
	fmt.Println("Processing your serverless request")
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```
This code snippet is an example of "Run bootstrapping logic once, and reuse it across Min Instances".
Assuming the bootstrapping logic refers to the `init()` function that connects to the DB, does Cloud Run look for `init()` in a Go app to invoke?
I can't seem to follow the code's logic and its connection to the stated purpose.
Did I miss anything?
With Google Cloud Run it's effectively running your container. You can use whatever language you want, whatever apt dependencies you need, whatever Linux flavor (Ubuntu/Alpine/BusyBox), etc.
In that sense cloud run is the more generic serverless platform. The devs can indeed say “it works on my machine, so let’s ship my machine and scale it up as needed and hibernate when not in use”
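For illustration, a minimal Dockerfile along those lines (image names and tags are hypothetical); the only real constraint from Cloud Run is that the server listen on the port given in the PORT environment variable:

```
# Any base image works; Alpine keeps the final container small
FROM golang:1.21-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /server .

FROM alpine:3.19
COPY --from=build /server /server
# Cloud Run injects PORT; the server must listen on it
CMD ["/server"]
```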
The big question is how far removed we can be from actual value creation before the wheels stop spinning...
FWIW, I do wish Google added some way to more explicitly prevent abuse with the containers. I had to take down my Cloud Run containers because people found ways to automate access to my services without rate limits, and Google's solution to that was to use a proxy which sorta defeats the point of using Cloud Run.
There's also a standard quota limit for replicas: 1,000 per Service.
The GCP equivalent to Lambda is Cloud Functions.
As for Cloud Functions and Cloud Run, the way you deploy is completely different. Cloud Run is strictly meant for containers.
> The former is a better fit for your entire app stack (think Heroku) while the latter is for ad-hoc.
There's no product reason why App Engine couldn't start deploying custom containers.
> As for Cloud Functions and Cloud Run, the way you deploy is completely different. Cloud Run is strictly meant for containers.
AWS Lambda (arguably the "inspiration" for Cloud Functions) just announced support for custom containers. Cloud Functions could too. And then you essentially have Cloud Run.
This isn't true. App Engine Flexible is in fact very similar to Cloud Run, and my understanding is that it runs on the same infrastructure. In fact, App Engine Flexible "Custom Runtime", which lets you load a docker container in App Engine, is very similar to Cloud Run.
FWIW, though, as a user of App Engine Flexible for Node, I'm very glad this Cloud Run option now exists.
Whether that's your "whole stack" or something "ad-hoc" is a totally arbitrary distinction. Your "whole stack" will almost certainly involve external concerns, and your "ad-hoc" concerns will almost certainly grow in complexity until they converge on the same spot.
AppEngine is also running containers via gvisor.
The distinction is being driven by PMs and marketers but not by the needs of customers. What end-users need is one well-supported tool, not 8 separate tools with uncertainty about which one will receive future support and matrices and flowcharts to decide between them.
Oh and if you used AppEngine Standard, you can't move to Flex, as Flex came with a whole new set of libraries. And the Flex-to-Cloud Run migration wasn't exactly seamless either.
Along the way they changed multiple internals with deprecations that forced code base changes, such as Memcache, the Image API, and Monitoring price changes.
I can't imagine any enterprise that's used the ecosystem continuing with it. The sheer number of architecture-impacting changes leads to huge operational overhead just to keep applications running on this infrastructure.
The technology of a vendor is way less important than its culture. So far GCloud seems to understand that. Maybe the App Engine team is outside that org.
Cloud Run for Anthos is a different product, which is the OSS Knative code running on GKE.
Yes: https://knative.dev/docs/serving/autoscaling/scale-bounds/#u...
I also wrote about it: https://livebook.manning.com/book/knative-in-action/chapter-...
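Roughly, with Knative Serving you set scale bounds via annotations on the revision template. This is a sketch with made-up service and image names, and the annotation keys have varied across Knative versions:

```
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service            # hypothetical name
spec:
  template:
    metadata:
      annotations:
        # lower bound keeps warm instances; upper bound caps autoscaling
        autoscaling.knative.dev/minScale: "1"
        autoscaling.knative.dev/maxScale: "10"
    spec:
      containers:
        - image: gcr.io/my-project/my-image   # hypothetical image
```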
Although maybe there is a way I don't know of.
Knative in your own cluster supports it, but I want scale-to-zero instead of paying for a cluster.
They say it supports sockets, but not bi-directionally: you can't send messages back up the socket, which we're using for GQL mutations.