How much time will a developer spend thinking about the right instance size, fine-tuned IAM permissions, naming conventions, or tags? I would wager a substantial amount that most will just copy-paste and be done with it, which eventually leads to the creation of a devops/sre/platform team to do the refactoring and cleanup, and we're back to step one.
Another poster touched on another important point: it's important for this to be changeable independently of the code. The reason for this is actually kind of subtle. Obviously, you don't wanna need a rebuild just to regenerate permissions. But the real reason, imo, is that it should be easy for a human to locate and parse, and also easy for a machine to parse and adjust — a machine that might determine a permission is no longer necessary, or that is trying to build a dependency graph to figure out whom to wake up during an incident. That means it should go into configuration that is versioned and deployed alongside the code, but not in the code.
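For example, such a file might look like this (the schema here is invented purely for illustration, not from any real tool):

```yaml
# access.yaml — versioned and deployed alongside the code, not baked into it
service: image-resizer
allowed:
  - resource: bucket/images
    actions: [read, write]
  - resource: queue/thumbnails
    actions: [send]
```

A human can scan it at a glance, and a tool can diff it to prune stale permissions or walk it to build that dependency graph.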
If you make this hard to understand and change, people will just copy it, and then you're back to square one. Doing the right thing has gotta be the easiest thing to do, because at scale, people are gonna do the easiest thing.
I feel like I'm kinda going on at length about this, so instead I'm gonna leave you with a link to a blog I wrote about the same concepts, if you wanna read more. It's about Kubernetes network policies, but really the same concepts apply to all kinds of access.
https://otterize.com/blog/network-policies-are-not-the-right...
    const imagesBucket = nitric.bucket('images').allow('read', 'write');

In a way, the public cloud and its APIs are what zero trust is to classic corporate moat networks. You used to be safe 'inside' your code, and now you're not. On top of the security issues, you're now also billed in real time for your mistakes.
This is of course where platform engineering (and the gradient of development and operations) comes in: give developers the self-serve resources they need so they can own their stuff. That also means safe-to-consume pre-made sets of cloud resources. That way you're not expecting everyone to bear the same additional cognitive load, while still having major benefits from the resources that are available.
Essentially the same thing the industry has been refining since the time of sending a deck of punchcards to the mainframe operator who will deliver the results of your batch compute once it's done. No cognitive load added, only the job at hand on your mind.
Just design your infrastructure around it. Segregate data using AWS accounts into granular "buckets".
> The point here is that while Terraform and similar technologies have existed for the past decade, providing Ops and Infrastructure teams with the tools they need to cleanly document their infrastructure, application development teams have had access to no such tools in this time. This is what the emergence of IfC is all about.
What does a company structure look like where "application development teams" are not allowed to access the Terraform code? It seems like Nitric is a wrapper around Terraform, which is fine and I can see how that could be useful, but the framing of the article doesn't make any sense.
Nitric may work if you fit within its guardrails or golden path, but the stuff I work on never does.
I feel like simplifying is great for the early stage, but it will be a tedious pain to scale out when you need to.
For example, instead of AWS client libraries, environment variables for ARNs, etc. existing in the code, you instead have one line defining a resource using the SDK. That other code is separated out into the provider, enabling testing in isolation and separation of concerns.
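To make that concrete, here's a minimal sketch of the pattern (the names are hypothetical, not Nitric's actual API): the application declares what it needs against an interface, a provider supplies the concrete implementation, and tests can inject an in-memory stand-in instead of a real cloud client.

```typescript
// Hypothetical sketch: the app depends only on this interface,
// not on any cloud SDK. A deployment provider binds it to a real
// bucket; tests bind it to an in-memory fake.
interface BucketClient {
  read(key: string): string | undefined;
  write(key: string, data: string): void;
}

// Test/local provider: no cloud credentials, no network calls.
class InMemoryBucket implements BucketClient {
  private store = new Map<string, string>();
  read(key: string): string | undefined {
    return this.store.get(key);
  }
  write(key: string, data: string): void {
    this.store.set(key, data);
  }
}

// Application logic, testable in complete isolation.
function saveThumbnail(bucket: BucketClient, id: string, bytes: string): void {
  bucket.write(`thumbnails/${id}`, bytes);
}
```

The separation is the point: swapping the provider changes where the data goes without touching the application function.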
...Don't they have access to the same tools, though? And if they don't, it sounds more like an internal policy issue or operational inefficiency of whatever org they're in? Terraform is Terraform for developers.
Besides, I believe the promoted approach is fundamentally misguided.
Application code has no business getting involved in the infrastructure it's running on. The layers of abstraction should be kept separate.
It's fine if your team manages their own Dockerfiles, k8s schemas, helm charts, terraform/CF templates or whathaveyou. Cohost them in the same git repo even, if you must. Use all the platform-specific APIs and integrations you want.
The application itself should be completely agnostic to all of that, though, and has no business interacting with any of those APIs. You use common interfaces like environment variables, the filesystem, and CLI flags to communicate between upper and lower layers. You do not have your services go and read k8s secrets over the API, interact with the container runtime, or make direct API calls to ECS[0], for example. You keep those layers abstracted away from application code. That has nothing to do with the "who".
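A tiny sketch of that boundary (the variable names are illustrative): the service reads its configuration from the environment and neither knows nor cares whether the value came from a k8s secret, an ECS task definition, or a local .env file.

```typescript
type Env = Record<string, string | undefined>;

// The deploy layer (k8s manifest, ECS task def, .env file) injects values;
// the app only reads the common interface: environment variables.
function loadConfig(env: Env): { imagesBucket: string } {
  const imagesBucket = env.IMAGES_BUCKET; // name is illustrative
  if (!imagesBucket) {
    throw new Error("IMAGES_BUCKET must be set by the deployment layer");
  }
  return { imagesBucket };
}
```

Failing fast on a missing variable keeps misconfiguration visible at startup instead of deep inside a request handler.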
On a language level, there are _very_ good reasons to prefer a declarative DSL over any Turing-complete general-purpose language. Not saying you should use HCL specifically, but scripting your infrastructure provisioning in JS doesn't seem like a step forward compared to ye olde bash scripts...
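To illustrate the contrast: a declarative fragment like this (resource names are illustrative) states desired state only, so a human or tool can audit it without executing anything:

```hcl
# No loops, no branches, no side effects to trace — just desired state.
resource "aws_s3_bucket" "images" {
  bucket = "my-app-images"
  tags = {
    team = "media"
  }
}
```

An imperative script producing the same bucket has to be read end-to-end (or run) before you know what it will actually do.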
[0]: I'm sure you have a valid counter-example. The point still stands in general.
> IaC tools like Pulumi, Terraform, AWS CDK, Ansible and others bring repeatability to infrastructure by giving you scripts that you can use across different environments or projects to provision infrastructure. This code/config is in addition to your application code, typically with a tight brittle coupling between the two.
If there is a tight and brittle coupling, this is not an issue stemming from either the IaC approach or any of those tools. It's an issue of poor design.
What am I missing?
[0]: Props on great examples repos, fwiw: https://github.com/nitrictech/examples https://github.com/tjholm/multiboy
Resources are documented using abstract cloud concepts in code, and then provisioned by a deployment engine. The Nitric deployment engines are actually built on IaC tools such as Pulumi/CDKTF.