Is anyone else a little annoyed by the messaging here? I read it as: "We think something bad happened to your ultra-secret data, but we don't know, so we're asking teams to spend potentially hours or days fixing things while we aren't really able to tell you if your stuff was actually compromised."
What I find more troubling is: if they don't quite know what happened, or aren't telling us, and we do the work to change everything, how do they know it won't just happen again in the next day or so, with people still accessing our systems? Where are the details?
> At this point, we are confident that there are no unauthorized actors active in our systems.
"Confident" isn't really a good enough word to use here, in my opinion. We've just blocked CircleCI from all our systems for now until we hear more, and will likely start moving to another build system.
I know accidents happen, but this is likely the beginning of the end for our team's relationship with CircleCI. Trust has been broken.
At the risk of sounding pedantic, this is why you have everything as IaC. These kinds of changes should not cost days; it should take merely minutes, or an hour tops, to change all your keys. It should be trivial, for cases just like this.
It's not Circle's fault people didn't do things properly, but I think they owe us a better explanation.
In the end your credentials need to outlive your CI/CD actions.
From experience: be careful and ensure you properly scope your OIDC connection. It's very easy to accidentally allow ANY GitHub repo with the proper OIDC connection bits (SA email, connector pool, etc.) to get an OIDC token, rather than only what you expect, whether that's any repo in your private org or a specific single repository. As always, RTFM.
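To make the scoping pitfall concrete, here's a minimal sketch of the kind of check being described. It assumes GitHub's documented OIDC claim format (issuer `https://token.actions.githubusercontent.com` and a `repository` claim of the form `<org>/<repo>`); the allow-list values are made up for illustration. The point is that trusting the issuer alone is not enough — you must pin the repository (or at least the org) too:

```python
# Hypothetical sketch: validating claims from a GitHub OIDC token before
# trusting it. Claim names follow GitHub's documented token format; the
# allow-list is fabricated for this example.

ALLOWED_REPOS = {"my-org/deploy-repo"}  # pin specific repos, not just the issuer

def is_token_scoped_correctly(claims: dict) -> bool:
    """Reject tokens from any repo other than the ones we expect."""
    if claims.get("iss") != "https://token.actions.githubusercontent.com":
        return False  # wrong issuer entirely
    # "repository" is "<org>/<repo>" in GitHub's OIDC claims
    return claims.get("repository") in ALLOWED_REPOS

# A token minted for an attacker-controlled repo passes the issuer
# check but fails the repository pin:
attacker = {"iss": "https://token.actions.githubusercontent.com",
            "repository": "attacker/evil-repo"}
ours = {"iss": "https://token.actions.githubusercontent.com",
        "repository": "my-org/deploy-repo"}

print(is_token_scoped_correctly(attacker))  # False
print(is_token_scoped_correctly(ours))      # True
```

In GCP Workload Identity Federation the same idea shows up as an attribute condition on the pool; the failure mode described above is leaving that condition broad enough that any repo from the right issuer qualifies.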
Even if the attacker had access to env vars from running jobs (which includes the signed token needed to do an OIDC role assumption), those tokens have a short expiry time. And even if an attacker stole a token and performed a role assumption, that session can only be valid for a maximum of 12 hours in AWS; after that, you know the attacker is out of your account.
It just significantly reduces, practically nullifies, anything that an attacker can do in your AWS account.
>I've been investigating the use of a @ThinkstCanary AWS token that was improperly accessed on December 27th and suspected as much.
[1]: https://circleci.com/blog/ceo-jim-rose-email-to-circleci-emp...
[1] https://circleci.com/blog/an-update-on-circleci-reliability/
To give credence to this, a GitLabber spoke up in that thread, said it was a serious thing, and that they deliberately had no third-party stuff on their site for that reason.
And I just logged into Circle today and used the Safari network inspector to see what JS it loads... and there's still plenty of third-party stuff that I can see:
* Amplitude
* Segment
* cci-growth-utils
* Statuspage
* DataDog
* HotJar
* Pusher
Not sure if this is an issue, but it doesn't make me comfortable.
we need to rotate:
- secrets in context environment variables
- secrets in project environment variables
- project deploy keys
- circleci api tokens
then we have to go back and look at all the audit logs for... basically everything... and try to find something that looks weird. :/

Fun night when you need to reroll your credentials... at least it's nice to have a list in the CircleCI UI, but it sucks when you need to make sure that you have all of the scopes available to you.
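A hedged sketch of how that checklist could be walked programmatically, using CircleCI's public API v2 (endpoint paths as per their docs; `ORG_SLUG` and the token are placeholders, and the actual delete/re-create calls are deliberately left out — this only enumerates what needs rotating):

```python
import json
import urllib.request

API = "https://circleci.com/api/v2"
ORG_SLUG = "gh/my-org"           # placeholder org slug
TOKEN = "CIRCLE_API_TOKEN_HERE"  # placeholder personal API token

def api_url(path: str) -> str:
    """Build a full API v2 URL from a path fragment."""
    return f"{API}/{path.lstrip('/')}"

def get(path: str) -> dict:
    """Authenticated GET against the API (Circle-Token header auth)."""
    req = urllib.request.Request(api_url(path),
                                 headers={"Circle-Token": TOKEN})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def audit_contexts() -> None:
    # GET /context lists an org's contexts; each context's env vars can
    # then be listed. Only variable *names* come back -- values are
    # never returned, which is exactly why you must rotate, not inspect.
    for ctx in get(f"context?owner-slug={ORG_SLUG}")["items"]:
        env_vars = get(f"context/{ctx['id']}/environment-variable")["items"]
        print(ctx["name"], [v["variable"] for v in env_vars])

print(api_url("/context"))  # https://circleci.com/api/v2/context
```

Project env vars, deploy keys, and API tokens each have their own endpoints and would need similar loops; pagination (`page-token`) is also omitted here for brevity.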
On the other hand, I'm becoming increasingly wary of putting all my eggs in the Microsoft basket if we move our source code, build system, and dev environments (Codespaces) to GitHub. Is it just me?
edit: I sincerely think this should be bumped, given how many folks don't seem to be getting the news here in a timely fashion.
If you do your shit right, you can just dump most of your secrets into some Contexts (containers of env variables) and apply them. Then when this stuff rolls around, it's easy to update everything centrally: change the context and everyone sees it. We, alas, can't easily do that, since we have so many differing env var names. New Year, new fun!
But one still has to update their credentials on any downstream service, e.g. third-party API keys. In general, this is highly individual for each service and can mostly only be done manually.
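For reference, attaching a context in CircleCI config looks roughly like this (the context name is hypothetical); every job that references the context picks up new values the moment you update the context centrally, which is what makes the rotation cheap:

```yaml
# .circleci/config.yml (fragment) -- "prod-secrets" is a made-up context name
version: 2.1
workflows:
  deploy:
    jobs:
      - deploy-job:
          context:
            - prod-secrets   # env vars are injected from the shared context
```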
I guess the answer is, why on earth am I still using Circle CI....
Thankfully, all of my secrets/env variables are just dummy data for tests, and I'm already using OIDC.
https://github.com/rupert-madden-abbott/circleci-audit
It can:
* List env vars attached to your repos and contexts
* List SSH keys attached to your repos
* List which repos are configured with Jira (a secret that might need rotating)
CircleCI has also released something similar [0], linked near the bottom of their blog post [1].
[0]: https://github.com/CircleCI-Public/CircleCI-Env-Inspector
[1]: https://circleci.com/blog/january-4-2023-security-alert/
The blog post calls out "environment variables" and "contexts"
"Thank you for contacting CircleCI Support.
This does also apply to SSH Keys, as such we do recommend to rotate SSH Keys as well as to take extra caution.
If you have any other concerns please reach out."
When their computers are compromised, by internal or external crooks, the crooks have full access to your code, and - in some cases - your data. If they wanted, they could inject their own shit into your binaries, totally ruining your reputation.
As a bonus, you get to pay a premium!
I still compile and test my code on my own machines, in my own network. It's much faster than CircleCI, cheaper, and it's ∞ safer.
[0] https://cloud.google.com/blog/products/application-developme...
[1] https://slsa.dev (edit: fixed this link)
I can see why you would use GitHub Actions if you already host your code there, but I don't feel comfortable sharing my signing keys.
[X] In case of a "security incident", lock down my account until I take action.
I understand why they can't do that by default, but it's crazy that every time this happens, I have to scramble to secure my assets when in many cases I'd be perfectly fine with things just shutting down until I have time to take care of them.

Better yet, also give me a button that does this even when there's no official incident reported. That means disabling all access tokens, resetting the password, halting any scheduled jobs, and revoking access for any connected OAuth services until I manually re-enable them.
Sounds like a separate product (something about breaches and blast radii) and not a CircleCI feature.