I researched the topic and found three ways to implement such checks. I'm wondering whether you know of others, or what your experience has been using any of these (or a combination of them):
1) Local(!) Git hooks:
Useful ones are e.g. "pre-commit" or "pre-push". They offer the fastest feedback, but are unreliable, because developers need to install them explicitly. They are also either limited to Bash, or require extra tooling to be present on the developer's machine (e.g. a linter). One could implement them in two ways:
1.a) Use any tooling installed _natively_ on the developer's machine, e.g. a Python script, or pre-commit.org. Pro: hooks terminate quickly; con: requires installing the tools, and version drift may happen (e.g. an outdated Python interpreter)
1.b) Docker-based: package the tools into a Docker image. Pro: allows pinning the tooling and auto-updating it when needed; con: requires a running Docker daemon, and may be slow if a new container has to be created
Slow pre-_commit_ hooks are especially bad for DX, particularly if devs commit often! For pre-_push_ hooks, a slower hook might be less terrible.
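As a concrete sketch of variant 1.a: a minimal native pre-push hook that checks the branch name could look like the following. The allowed branch prefixes and the regex are assumed examples, not a standard:

```shell
#!/bin/sh
# Sketch of a local pre-push hook (.git/hooks/pre-push).
# The naming convention below (feature/bugfix/hotfix prefixes) is an
# assumption -- adapt the regex to your team's scheme.
branch_ok() {
  printf '%s\n' "$1" | grep -Eq '^(feature|bugfix|hotfix)/[a-z0-9-]+$'
}

# In the real hook you would check the current branch and abort the push:
#   branch="$(git symbolic-ref --short HEAD)"
#   branch_ok "$branch" || { echo "branch '$branch' violates the naming convention" >&2; exit 1; }
branch_ok "feature/add-login" && echo "ok"
```

Note that this variant only works if `sh` and `grep` behave identically on every developer's machine, which is exactly the version-drift risk mentioned above.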
===
2) Remote(!) Git hooks, particularly the "pre-receive" hook.
Pro: the push is rejected on the server before it lands, so you avoid having to rewrite already-pushed history if any checks are violated
Cons: complicated to set up on some platforms, or sometimes not possible at all (e.g. on github.com); limited to Bash
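For completeness: a pre-receive hook is fed one `<old-rev> <new-rev> <ref-name>` line per updated ref on stdin, and exiting non-zero rejects the entire push. A minimal sketch (the naming rule is again an assumed example):

```shell
#!/bin/sh
# Sketch of a server-side pre-receive hook. The branch rule is
# illustrative; the check is factored into a function for clarity.
ref_ok() {
  case "$1" in
    refs/tags/*) return 0 ;;  # example policy: let tags through unchecked
  esac
  printf '%s\n' "${1#refs/heads/}" | grep -Eq '^(main|feature/[a-z0-9-]+)$'
}

# Real hook body -- Git pipes "<old-rev> <new-rev> <ref-name>" lines in:
#   status=0
#   while read -r oldrev newrev refname; do
#     ref_ok "$refname" || { echo "rejected: $refname" >&2; status=1; }
#   done
#   exit $status
ref_ok "refs/heads/feature/fix-login" && echo "ok"
```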
===
3) CI jobs: you run the checks as CI jobs once the developer has pushed the code.
Cons: feedback comes quite late compared to local Git hooks; requires a dangerous force-push to rewrite history, e.g. if the checks demand that commit messages be rephrased.
Pros: CI jobs are reliable; you can use any tooling you want (beyond Bash), and there are sometimes ready-made jobs/tasks, e.g. https://commitsar.aevea.ee
====
One can of course combine multiple of the above approaches, but this requires more maintenance. For instance, you could have a local pre-commit or pre-push hook that checks the branch name and commit message format prior to pushing, and additionally have CI jobs that validate the same data. However, it takes maintenance effort to ensure that both approaches stay in sync (e.g. check the same commit message regex) and do not contradict each other.
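One low-tech way to reduce that drift is to keep the pattern in a single versioned file and source it from both the hook and the CI job. A sketch, where the file name and the (Conventional-Commits-like) pattern are assumptions:

```shell
#!/bin/sh
# checks/commit-msg-rules.sh -- hypothetical single source of truth,
# sourced both by the local commit-msg hook and by the CI job.
COMMIT_MSG_PATTERN='^(feat|fix|docs|refactor|chore)(\([a-z0-9-]+\))?: .{1,60}$'

commit_msg_ok() {
  printf '%s\n' "$1" | grep -Eq "$COMMIT_MSG_PATTERN"
}

# The local hook (.git/hooks/commit-msg) would run e.g.:
#   . checks/commit-msg-rules.sh
#   commit_msg_ok "$(head -n1 "$1")" || exit 1
# while the CI job sources the same file to validate the pushed commits.
commit_msg_ok "feat(login): add OAuth flow" && echo "ok"
```

Both sides then disagree only if one of them uses a stale checkout of the repository, which is a much smaller failure mode than two hand-maintained copies of the regex.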
So far, the only advantage of Open GitOps over CI Ops that I understand and agree with is improved security: it's easier and safer to let the Kubernetes cluster access the Git server (e.g. regarding corporate firewall policies/governance, using a read-only access token) than to let the CI/CD system access the K8s API server. With GitOps, far fewer credentials need to be stored in the CI/CD system: only those required for CI, no longer those required for CD. Fair point.
On the flip side (w.r.t. security), you add a new component (the GitOps controller), which is software, and any additional software introduces new security risks.
Here are two advantages promoted by GitOps enthusiasts and companies that I don't really buy:
1) Auditing: the Git log collects all changes made to our environment(s). Thanks to the GitOps controller, everything declared in Git is also what has been deployed.
Counter-argument to 1): The Git log alone is not indicative at all. You can absolutely NOT claim that whatever is in Git is also what is deployed, because some K8s admin can simply log into the cluster and temporarily disable or even uninstall the GitOps operator. This means that auditability is only given when considering both the Git log and the Kubernetes audit logs. So, how is GitOps better than a (CI Ops-style) CD pipeline? Open GitOps enthusiasts bad-mouth CD pipelines because a privileged admin can simply make changes to the cluster in between deployments with no one noticing - except that you DO notice it, in the Kubernetes audit logs.
2) Continuous reconciliation: GitOps controllers ensure that the deployed manifests always match what is declared in Git.
Counter-argument to 2): That's the whole point of a CD pipeline, too. It does not do it continuously (only at the final "helm upgrade" step), but it does not need to. I would generally assume that you limited the access privileges to the K8s cluster to (capable) admins. It is the job of Kubernetes to do the reconciliation for us (e.g. ensuring that all Pods of a Deployment are up). I really don't understand the use case for the META-level reconciliation (of the K8s workloads themselves) that GitOps controllers perform. Is the assumption that you let every hillbilly into the cluster so that they can fiddle with it, and you constantly need a GitOps controller to correct people's accidental mistakes? If so, proper access control would be a much easier fix for this.
Finally, there are more disadvantages I see with Open GitOps that are not present with CI Ops:
1) It is difficult to learn, due to a lack of best practices, e.g. for structuring the Git repositories. You also need to choose a suitable implementation first (is Werf better than Argo, etc.).
2) When using GitOps, CI and CD are done by two separate systems. If a deployment fails, the team needs to know about it, but they won't find this information in their CI system anymore (as would have been the case with CI Ops). You need to configure your monitoring and alerting system such that your team also gets notified when a deployment fails. This may involve e.g. creating alerting rules for the Prometheus metrics emitted by the GitOps controller.
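To illustrate, assuming Argo CD as the controller (which exposes an `argocd_app_info` metric with `sync_status` and `health_status` labels), such alerting rules might look roughly like this sketch; thresholds and label values are assumptions to adapt:

```yaml
# Hypothetical Prometheus alerting rules for an Argo CD-based setup.
groups:
  - name: gitops-deployments
    rules:
      - alert: ArgoAppOutOfSync
        expr: argocd_app_info{sync_status!="Synced"} == 1
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Application {{ $labels.name }} has been out of sync for 15 minutes"
      - alert: ArgoAppUnhealthy
        expr: argocd_app_info{health_status!="Healthy"} == 1
        for: 15m
        labels:
          severity: critical
        annotations:
          summary: "Application {{ $labels.name }} is not healthy"
```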
I'm curious what your stance is on this topic.