The key idea is that you build a CLI using our SDK (it's a container), and we make it instantly accessible to your whole team in Slack, while also providing 12-factor principles such as Secrets, Configs, Logs, Events, and Metrics.
We're early on, but we have a vibrant and growing Slack community of developers who want to build their own cloud PaaS on AWS, GCP, Azure, etc., but want an easy-to-use DX so they can distribute their knowledge and processes to others and save themselves tons of time and context switching.
Please AMA, we'd love to hear your thoughts.
Now, aside from the implicit risk this poses, if Slack acts as the "control plane" for DevOps, what's the platform's "data plane"?
Is it right to assume that in order to be independent from Slack we'd need to rely on CTO.ai's own infrastructure and resources? I.e., the data plane where DevOps "effectively happens"?
We let you aggregate events and process metrics, should you want to use our platform as the way you aggregate event driven workflows or calculate delivery metrics.
You can run your entire workload on our serverless environment, or you can push your workloads to your existing CI/CD + cloud compute - it's really up to you!
You don't want two separate runbooks, as they'll never stay in sync. It'll already be enough of an efficiency hit just switching to an alternative interface when Slack is down, let alone switching to different tools/processes.
You have the freedom to build your DevOps workflows as you please, by leveraging our SDKs to interact with the tools and platforms you use daily for DevOps. The Slack interface allows you to customize the inputs and trigger these workflows as you desire, but, once triggered, they do not necessarily depend on Slack being up. Hope that makes sense!
Also, the workflows you automate using our platform work in the CLI as well. So you build once, you get a containerized automation which you can use either locally in your CLI, or in Slack. For running these locally, you have to install our `ops` CLI on your machine (more on this here: https://cto.ai/docs/start-building-ops).
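The build-once, run-anywhere flow described above might look roughly like this from a terminal. This is a hedged sketch, not a verbatim transcript from the docs: the op name `my-deploy` is hypothetical, and the exact subcommands and arguments accepted by your version of the `ops` CLI may differ (see https://cto.ai/docs/start-building-ops for the authoritative reference).

```shell
# Hypothetical walkthrough; "my-deploy" is an invented op name.
ops init my-deploy      # scaffold a new op (a containerized workflow)
cd my-deploy
ops run .               # build and run the op locally from your own terminal
ops publish .           # publish it so the same containerized op is runnable from Slack
```

The point of the flow is the single artifact: the same container you test locally with `ops run` is the one your teammates later trigger from Slack, so there is no second runbook to keep in sync.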
Let me know if that helps clarify your questions or if you have any other questions. Thanks!
The way this works is that you first build a CLI, which is a container, and then you publish it to us so that it also runs in Slack. It's shared code that runs in both environments.
So if Slack is ever down, you still have the CLI.
The big benefit here is that you build once and it runs in both places, which means you don't need a separate runbook or even infra to make your CLI run in Slack - it's 100% serverless.
Among our users, the ops engineers obviously prefer the CLI, but the devs really love Slack. This creates a nice balance in the DX that works for both sides, on top of existing CI/CD.
With CTO.ai your existing CLI "just works" in Slack: you have none of the additional overhead but get all of the benefits of chat, such as distribution, transparency, and collaboration. It's a big paradigm shift.
Many of our users are going all in because of this!
Thanks for your feedback! We'll keep iterating!
Currently, our teams are a basic ACL that lets you manage who in Slack can run a workflow. We also let you associate any of your teams with private channels, so you can control access more granularly using Slack's user membership.
We're looking at adding more RBAC on top of this for enterprise, but for most smaller teams this covers the majority of their use cases. They often create a team for ops and a team for dev, then associate these teams with members and channels as needed.
I certainly have a soft spot for the K8s ones especially as I helped build them :-)
Especially since lately Slack has proven to be so unstable (lots of intermittent downtime). Meaning I would just lock my infra-automation code into CTO.ai's servers for... what?
Care to elaborate for the skeptics reading this?
Investment <> Secure
I watched the video, you guys are centralizing all the team's scripts inside Slack. I can see a lot of Slack addicts loving this!