That said, where I work (MaaS Global) we have a production PostgreSQL database hosted on AWS Relational Database Service (RDS):
https://aws.amazon.com/rds/postgresql/
We connect to the AWS RDS instance in our lambda functions using a JavaScript query builder called knex.js, with environment variables storing the DB credentials:
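A minimal sketch of what that setup tends to look like (the environment variable names here are illustrative, not necessarily the ones MaaS Global uses):

```javascript
// Build a knex config from environment variables.
// Variable names (DB_HOST etc.) are assumptions for illustration.
function dbConfig(env) {
  return {
    client: 'pg', // PostgreSQL on RDS
    connection: {
      host: env.DB_HOST,
      user: env.DB_USER,
      password: env.DB_PASSWORD,
      database: env.DB_NAME,
    },
    // Small pool: each Lambda container handles one request at a time
    pool: { min: 0, max: 1 },
  };
}

// In the Lambda itself you'd then do something like:
// const knex = require('knex')(dbConfig(process.env));
```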
It's been a problem for years and there's been no sign of a solution. Example article from last month: https://medium.freecodecamp.org/lambda-vpc-cold-starts-a-lat...
These are the sorts of problems that turn people off from using serverless architectures.
For the DB connection you put the Lambda in the same VPC that the RDS instance lives in. Then you open the connection pool and reuse it if it's still active. Not that a new connection is much overhead compared to using an established socket.
Wonder where all this misinformation is coming from on lambda DB access issues.
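The reuse pattern described above relies on anything declared outside the handler surviving while the container stays warm. A minimal sketch (with `createPool` as a stand-in for something like `require('knex')(config)`):

```javascript
// Cache the pool at module scope: it's created once on a cold start
// and reused on every warm invocation of the same container.
let cachedPool = null;

function getPool(createPool) {
  if (cachedPool === null) {
    cachedPool = createPool(); // only runs on a cold start
  }
  return cachedPool;
}

// Hypothetical Lambda handler using it:
// exports.handler = async (event) => {
//   const db = getPool(() => require('knex')(config));
//   return db.select().from('todos');
// };
```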
1. This doesn't work if you were actually trying to build your API as microservices. You might have 60+ functions, some of which call each other, and keeping them all warm is not really a good option.
2. Keeping a minimum number of instances warm fails to account for half the point of using serverless architectures: being able to scale. Sure, if you have little to no traffic, you can keep a couple instances warm and be up, but if your app needs to scale to 5 or 10 or more instances to handle bursts of traffic, the users who hit that cold start end up dealing with an extremely bad experience.
More importantly, as Lambda gets more popular, uptime pingers get less and less useful because of the tragedy of the commons. The reason cold starts exist at all is that AWS rotates out instances to keep up with overall demand on limited resources. If only a few people are sending heartbeats to their instances, their instances stay in rotation because other people's get rotated out instead. If everyone is sending heartbeat requests, some of them will still end up getting rotated out, and therefore everyone will need to increase the frequency of their heartbeat requests to keep their functions warm. It's not a sustainable solution, and I'm baffled that AWS tacitly promotes it as a resolution to a problem they themselves have caused.
It's been years. AWS needs to fix Lambda VPC cold starts.
Here's the data model part of my todo app if you want to see queries in the app: https://github.com/fauna/todomvc-fauna-spa/blob/master/src/T...
Those options work fine if you're OK with using a NoSQL DB. But what if you want to use an actual relational database? For that you pretty much need Lambda in a VPC, and that's not really usable because of the cold start issue.
At some point Amazon will release Aurora Serverless[1], giving a serverless option for an on-demand relational database. Will that somehow work with Lambda without needing a VPC, thereby defeating the cold start issue? What cold start issues will it have itself? I guess we'll wait and see for now.
Tried FaunaDB a few months ago; the latency was beyond 200ms for a simple read, and beyond 600ms for an insert.
Would not recommend it at this point.
And I was complaining about 500ms cold start times on Firebase Functions.
I think I'll stop complaining now.