As a guy who's done a lot of programming and a lot of technical writing, it's clear that this is the result of a TON of work. It is a model of clarity, well-formatted, and explained with just the right level of detail. It is completely pro quality and OP should be super proud of this body of work.
This isn't just warmed-over Amazon docs. It's just what you need when you can't figure out what the docs are saying and you want to get something done now.
For example, you can both create a lambda and EC2 infrastructure using Terraform. However, once you have created a lambda function, little maintenance is required going forward, unlike EC2. EC2 instances require continuous patching of the operating system, managing disk space (/tmp filling up over time), and you have to plan for an EC2 instance to be taken out of service if it starts failing health checks.
I'm poking fun at Amazon's position...
"AWS Lambda automatically runs your code without requiring you to provision or manage infrastructure."
Using a declarative configuration tool to specify the infrastructure one wants to exist is not really "managing infrastructure" as we used to do it.
So 66+12+27+47+40 = 192 lines of Terraform spread across a BUNCH of files.
Compare that to 47 LOC of verbose AWS SAM YAML: https://github.com/kaihendry/count/blob/sam/template.yml
All in one file.
SAM generates CloudFormation templates/stacks, which create your AWS resources. Serverless generates CloudFormation templates/stacks, which create your AWS resources.
Related: I wouldn't recommend deploying raw Lambdas without a framework. The development workflow really sucks without one. Use SAM or Serverless.
Nope.
The reason I'd be reluctant to use it without thoroughly vetting it is that I don't like using any 3rd-party monster suite of modules. I've been bitten by that a few times. You're at the mercy of the 3rd party to continue updating and supporting the modules. On top of that, it may not support a particular use case you want to implement. And it's almost worse if it does, because that likely means it's no longer a simple wrapper around raw Terraform resources, but a huge complex beast.
In this case, I don't think the author necessarily expects the user to use his code directly. It is a set of examples. The modules themselves are very simple (really just one or two raw resources with vars and outputs).
It's a set of custom resources plus some CloudFormation macros that expand to CloudFormation templates.
It's also a bunch of scripts and utilities that help with the packaging and deployment.
At the end of the day you can use it to generate the final CloudFormation template and run that manually, if you want.
But also, maybe even important, it makes things less transparent. I want to be able to refer to the original CloudFormation docs (even when using Terraform) without "translating" every time.
The verbosity is frustrating, but largely ignorable. Having to fall back to string references to AWS resources defined in different sources of truth is a maintenance nightmare, IMO.
Earlier this year I tried to use Terraform for everything, using the principle "everything is a resource" (everything in my case is AWS, Datadog and Snowflake), so I adopted "terraform apply" as the universal deployment interface. For example, if we need an ECS task and a Docker image, build the image from within Terraform (using a null_resource which runs "docker build"). This approach works for everything but Lambda, as Terraform requires a pointer to the source code bundle at the plan stage. After some unsuccessful fights I gave up on that for Lambda, so I build bundles prior to "terraform apply" (using "make build", where the build target does its magic of zipping either a Go binary or Babashka Clojure sources).
That approach scales well for two dozen Lambdas already, and counting. Ping me if you want more details.
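The "build first, then apply" flow described above might look roughly like this (a sketch, not the commenter's actual code; the function name, file paths, and IAM role are all illustrative assumptions):

```hcl
# Assumes `make build` has already produced build/handler.zip
# before `terraform apply` runs.
resource "aws_lambda_function" "indexer" {
  function_name = "indexer"           # illustrative name
  filename      = "build/handler.zip" # built by make, not by Terraform

  # Re-deploys only when the bundle's contents actually change.
  source_code_hash = filebase64sha256("build/handler.zip")

  handler = "main"
  runtime = "go1.x"
  role    = aws_iam_role.lambda.arn # assumed to be defined elsewhere
}
```

Because the zip exists before the plan stage, Terraform never has to build anything itself.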
——
I disagree with this tutorial's tendency to use a Terraform module per AWS service, hiding well-documented AWS resources behind the facade of a module with long, custom parameter names.
My rule of thumb is: only use a module if you want to enforce a very opinionated way to create resources across multiple projects. Most of the time you don't want that.
Regarding Lambda, I also run a "make build" before "terraform apply". After all, Terraform is not a build tool. But I do make the zip with data.archive_file so that I can pass the output_base64sha256 straight to the lambda resource.
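A sketch of that archive_file pattern (names and paths are illustrative assumptions):

```hcl
# Zip the prebuilt binary; Terraform only packages, it doesn't build.
data "archive_file" "lambda_zip" {
  type        = "zip"
  source_file = "build/main"     # binary produced by `make build`
  output_path = "build/main.zip"
}

resource "aws_lambda_function" "fn" {
  function_name = "example" # illustrative name
  filename      = data.archive_file.lambda_zip.output_path

  # Hash comes straight from the archive data source.
  source_code_hash = data.archive_file.lambda_zip.output_base64sha256

  handler = "main"
  runtime = "go1.x"
  role    = aws_iam_role.lambda.arn # assumed elsewhere
}
```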
Your comment made me think of trying to skip bothering with Lambda's hash sums and use custom refresh triggers instead, initiated by a null_resource. Will try it after the holidays.
[0] https://registry.terraform.io/providers/hashicorp/archive/la...
seconded
Deployment of a service is rarely in practice just deployment of new code to an already provisioned lambda - because that lambda can do nothing in isolation. Instead, it tends to be the lambda alongside an SQS queue and a trigger, and an S3 bucket; or an API Gateway that links an authorizer to the Lambda's code. Because of that, evolution and development of the application tends to require evolution and development of those surrounding infrastructure pieces in tandem.
As a result, managing the infrastructure of your serverless service is often most naturally done alongside the application code itself - indeed, the distinction becomes somewhat meaningless. That also means the engineers developing the service require the ability to own and operate the infrastructure as well. That may or may not be well served by Terraform. It's a tool I absolutely love for mutable, stateful infrastructure, but something like the Serverless Framework or AWS SAM can be a much lower-friction and more natural fit for serverless work.
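To illustrate the point about the function and its surrounding pieces evolving in tandem, a minimal sketch of a Lambda wired to an SQS queue (queue name and function reference are illustrative assumptions):

```hcl
# The queue, the function, and the wiring between them live together:
# changing the application usually means changing all three.
resource "aws_sqs_queue" "jobs" {
  name = "jobs" # illustrative name
}

resource "aws_lambda_event_source_mapping" "jobs_to_fn" {
  event_source_arn = aws_sqs_queue.jobs.arn
  function_name    = aws_lambda_function.fn.arn # function assumed elsewhere
  batch_size       = 10
}
```

None of these resources is useful in isolation, which is why deployment tends to mean deploying the whole set.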
I often hear folks complain that Kubernetes is complex, and hard to understand. We've done a lot of work to make the experience of deploying functions simple on K8s, with very little to manage. But it still costs you a K8s cluster - most of our users don't mind that, because they have one anyway.
But, for everyone else we created a new flavour of OpenFaaS called "faasd" which can run very well somewhere like DO or Hetzner for 3 EUR/mo. It doesn't include clustering, but is likely to be suitable for a lot of folks who don't want to get deep into IAM territory. https://github.com/openfaas/faasd
The REST API for OpenFaaS has a terraform provider that works with either version - https://github.com/ewilde/terraform-provider-openfaas
And there's a guide on setting up faasd with TLS on DigitalOcean. It took about 30 seconds to launch, and it makes building functions much simpler than Lambda: "faas-cli up".
https://www.openfaas.com/blog/faasd-tls-terraform/
If I were going to use Lambda, the author's examples are probably what I would turn to, but there are other ways and it needn't be this complex.
And I will continue to be sad that all of the higher order first party tooling is ultimately going to be based on CloudFormation (looking at you, https://aws.amazon.com/proton/).
Ultimately, after having used Terraform to manage function code bundling/deployment and skipping Terraform completely for the lambdas, I think Terraform does best when it manages the infrastructure lifecycle for lambdas and nothing else. You can then rely on more competent tooling for deployment.
If you are all in on AWS it's a no-brainer.
I've not yet had to juggle changes to the live Lambda, but I put in the bits to make it stamp out an alias, so I can start using specific aliases to ensure new versions don't automatically get picked up by downstream invocations. All of that has yet to be put into practice for real.
If you only want to change the Lambda's configuration without touching the code, Terraform will do exactly that, since the source_code_hash hasn't changed.
The opposite is also true.
Can you expand on this?
I look forward to trying this out the next time I want to prototype anything with some lightweight lambdas behind it.
The current team does their deployments in two stages: first they use Chalice to generate the correct Terraform files for the two Lambdas that are deployed (one for the indexer, the other for the service API), then they generate all the other Terraform JSON files using Python. After this is complete, the infra is deployed in one step.
The second team has far fewer errors on deployment, and managing the infra is much easier, especially if you need to nuke it all and rebuild an environment.
I would break out the Terraform backend on a per-environment basis rather than clumping them into one folder/bucket; that can be dangerous if the bucket gets emptied.
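A per-environment backend might look like this (a sketch; bucket, key, region, and lock table names are all illustrative assumptions):

```hcl
# e.g. in environments/prod/backend.tf -- each environment gets its
# own state bucket, so emptying one bucket can't take out the others.
terraform {
  backend "s3" {
    bucket         = "myco-terraform-state-prod" # one bucket per environment
    key            = "service/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks-prod" # state locking
  }
}
```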
I'll just point out that if you're using Python, Chalice is excellent and is able to emit Terraform code for all of its resources (https://aws.github.io/chalice/topics/tf.html).
I'm not sure how it could work for usage-based resources like Lambda/S3, maybe just assuming minimum usage for each resource is good enough to provide a rough monthly estimate? e.g. 1M Lambda requests, 1 GB storage and 1K S3 requests, then let users customize those numbers if they care to find out more?
The Serverless framework in particular will save you A LOT of boilerplate IaC for all the event-driven integrations Lambda currently has to offer, and it manages the full build and packaging cycle with the relevant language-specific plugins, e.g. for Node or Python.
Others are the CDK, SAM or ARC.
Stacks created with those frameworks can certainly use resources that are created by Terraform in other stacks. Serverless, for example, has its own integration with the Parameter Store for encrypted variables, so there's no need to use environment variables in that use case. Terraform would then create the Parameter Store keys, which SLS could reference.
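A sketch of that split, with Terraform owning the Parameter Store key (the parameter path and variable are illustrative assumptions):

```hcl
# Terraform creates and owns the encrypted parameter...
resource "aws_ssm_parameter" "db_password" {
  name  = "/myapp/db_password" # illustrative path
  type  = "SecureString"
  value = var.db_password
}

# ...while a Serverless stack can read it in serverless.yml with the
# SSM variable syntax, e.g.:
#   environment:
#     DB_PASSWORD: ${ssm:/myapp/db_password}
```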
I can't compare to TF as I have no experience with it, but I would assume CDK is better just because you can use a well-established language (instead of some custom language that needs to be learned).
Informative article though.
Not saying it's bad, it just looks very different to those tools I mentioned.