Most dependency management tools have some kind of hacky support for using S3 directly.
Full-fledged artifact management tools like Artifactory and Nexus support S3-backed storage.
Interesting to see that the pricing is approximately double that of S3, for what I imagine is not much more than a thin layer on top of it.
There's a lot of necessary complexity in the backing platform. Encrypted package blobs are stored in S3, but there are a bunch of other distributed systems for things like package metadata tracking and indexing, upstream repository management, encryption, auditing, access control, package-manager front-ends, and so on, which are not immediately obvious and add cost. The platform that backs CodeArtifact is far from what I'd call a thin layer on top of S3. There is also a team of humans who operate and expand the platform.
Source: I led the technical design for the product as well as a chunk of the implementation, but left the team around mid-2018.
Honestly, the fact that they only support JavaScript, Python, and Java is pretty bare-bones compared to what the others on the above list support, and, as you say, at a fairly high price.
All I'm seeing this do is provide the proper HTTP endpoints so you don't need the wagon. Is it worth ~2x the price? No, but it's better than the other enterprise-y solutions.
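For Maven, "the wagon" refers to the S3 wagon extension; with a plain HTTPS endpoint you can declare the repository directly in your pom.xml instead. A rough sketch (the id and URL are placeholders, not a real endpoint):

```xml
<!-- Hypothetical pom.xml fragment: point Maven straight at an HTTPS
     repository endpoint, no S3 wagon extension required. -->
<repositories>
  <repository>
    <id>company-releases</id>
    <url>https://my-domain-123456789012.d.codeartifact.us-east-1.amazonaws.com/maven/my-repo/</url>
  </repository>
</repositories>
```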
Haven’t looked carefully, but is there a difference in the guarantees it provides? Might be a performance or SLA difference.
GCP has a similar offering[2]. And GitHub[3].
[1] https://docs.aws.amazon.com/codeartifact/latest/ug/python-co...
It is common for developers to use Git to store source code, in a hosted service like GitHub. It is common to use SSH keys to access Git. Frequently those SSH keys are generated without passphrases. Those are non-expiring credentials stored on disk. If HTTPS is used to access Git, it will likely be with non-expiring credentials.
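For illustration, this is the kind of passphrase-less key many developers generate (the path and comment are placeholders): `-N ""` sets an empty passphrase, so the private key sits unencrypted on disk with no expiry.

```shell
# Remove any leftover demo key so ssh-keygen doesn't prompt to overwrite.
rm -f /tmp/demo_deploy_key /tmp/demo_deploy_key.pub

# Generate an ed25519 key with an empty passphrase (-N "").
ssh-keygen -q -t ed25519 -N "" -f /tmp/demo_deploy_key -C "demo@example.com"

# The private key file IS the non-expiring credential: anyone who can
# read it can authenticate as this user indefinitely.
head -n 1 /tmp/demo_deploy_key
```

Anyone (or any malware) with read access to that file holds a credential that never expires until someone manually removes the public key from the server.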
I'm not saying short-lived credentials are bad, not at all. I'm pointing out how this service differs from similar services, requiring a change in workflow, which might be annoying to some people. Not everyone is operating under the same threat model.
For the specific use case of a developer box and the Docker registry, resetting the credentials every 12 hours doesn't, on its own, offer any more security than not resetting them.
The reason is that when you try to log in to ECR after the credentials expire, the way you re-authenticate is to run a specific aws CLI command that generates a docker login command. After you run that, you're authenticated for another 12 hours.
If your box were compromised, all the attacker would have to do is run that aws command and now they are authenticated.
Also, because of how the aws CLI works, you end up storing your AWS credentials in plain text in ~/.aws/credentials, and they are not rotated unless the developer requests it. Ultimately, those are the real means of Docker registry access.
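With the current AWS CLI (v2) the re-authentication flow looks roughly like this; the account ID and region are placeholders, and the older CLI instead printed a ready-made docker login command via `aws ecr get-login`:

```shell
# Re-authenticate to ECR for another 12 hours. Anyone who can run this on
# the box (e.g. an attacker with the developer's long-lived plain-text
# ~/.aws/credentials) gets a fresh token just as easily.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin \
      123456789012.dkr.ecr.us-east-1.amazonaws.com
```

The 12-hour token buys nothing here because the long-lived AWS credentials that mint it are sitting on the same disk.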
If you care about the security of these artifacts, why is the developer's home directory (or their full disk) not encrypted? If they have access to the repository, they probably have artifacts downloaded to their laptops, so if a laptop is compromised, the artifacts are compromised anyway.
Edit:
I'm not saying temporary credentials are bad. But the reasons you gave seem a little suspect to me. A better reason is that you don't have to worry about invalidating credentials when an employee stops working for you.
Also, the lack of NuGet support is a major issue.
Out of curiosity, what would you want from this service for the "plain binary" use-case when S3 already exists?
Are others not packaging their code in intermediate packages before packing them into containers?
We end up having a lot of libraries that are shared across multiple images.
For people who disagree with this model: where do you think the software comes from when you apt/apk install things inside your Dockerfile?
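The same model extends naturally to internal code. A hypothetical Dockerfile sketch: system packages come from Debian's repository, and a shared internal library comes from a private package index ("acme-common" and the index URL are made-up placeholders):

```dockerfile
FROM python:3.11-slim

# These come from Debian's package repository -- the same idea as an
# internal artifact repository, just operated by someone else.
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# A shared internal library, published once and reused across many images.
RUN pip install --index-url https://pypi.internal.example.com/simple/ \
    acme-common==1.4.0
```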
Whenever I see a product announcement like this missing something I need to use it, I immediately ping our Technical Account Manager to get the vote up for a particular enhancement.
Source: I led the technical design for the product as well as a chunk of the implementation, but left the team mid-2018. I don't have any specific insight into their plans, not that I could really share them even if I did.
The Git server you use supports artifacts already. You could also just put all of your artifacts in an S3 bucket if you needed somewhere to put them, which is exactly what this is, but more expensive. I don't understand when this would save you money or simplify DevOps.
Skim the docs and you will see it is not “just S3”.
A lot of people won’t find this useful but for some it’s a big blessing.