Extra fun thing to do: print a message describing why the decision was made and how to resolve the error in the failure message of your test!
[0] - https://docs.bazel.build/versions/main/be/shell.html#sh_test
The canonical example in my mind is any kind of autofix-able linter, where there's some kind of patch (or more nuanced autofix) that the linter can generate on the spot for you. With a sh_test construct (or any other test), you generally find yourself printing out some command that the user can run to fix things, which in a sufficiently large codebase can get really frustrating. (The worst offenders of this that I remember dealing with at Google were when generated code/artifacts had to be checked into the repo instead of being wired into the build system because <insert hermeticity problem>, and a large-scale change you were doing looked at one of them the wrong way...)
(My company - https://trunk.io - is actually building a universal linter as part of our product offering, and we already have a system to write custom linters with varying levels of sophistication that can plug into both your IDE and CI system!)
Been there. It's unfortunate because there are `genrule()`s for this very reason. But, definitely, linting is very important and trunk looks great! That said, linting and config validation are very different use cases. At a previous company I had a test that loaded a set of configs from the repo and made sure all of them were valid against app-level logic. It was crude but helpful. Trunk would be ideal.
Also, question, have you thought of writing your own linter+autoformatter with treesitter? Are you planning to expand into other tooling?
I’ve yet to see a good answer to this problem though as each approach has upsides/downsides and no one ever ends up happy in a large enough team. Good luck.
I use tools like `jq` [1] or `yq` [2] all the time for CI checks. One useful check: we have a configuration file stored as several hundred lines of YAML, and it's nice to keep it in sorted order, so we have a git pre-commit hook that runs the following:
> yq eval --inplace '.my_key|= sort' my_file.yaml
Of course, a pre-commit hook or CI both work. There's pros and cons of both. For our team, the pre-commit hook is a low enough level of effort, and doesn't require a CI check for something that executes in milliseconds.
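A minimal sketch of such a hook, assuming mikefarah's yq v4 and the `my_file.yaml` / `.my_key` names from the command above (this version quietly no-ops if yq or the file is missing; a stricter hook would fail the commit instead):

```shell
#!/bin/sh
# .git/hooks/pre-commit -- sketch only; assumes yq v4 and the
# my_file.yaml / .my_key names from the command above.
set -eu

# Fail open in environments without yq or the file (adjust to taste).
command -v yq >/dev/null 2>&1 || exit 0
[ -f my_file.yaml ] || exit 0

# Sort the list in place, then re-stage the file if sorting changed it,
# so the commit picks up the fix automatically.
yq eval --inplace '.my_key |= sort' my_file.yaml
if ! git diff --quiet -- my_file.yaml; then
    git add my_file.yaml
fi
```

Re-staging the file is what makes this "low effort" in practice: the committer never sees a failure, just a sorted file.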
[0] https://stackoverflow.com/a/1732454
"Too many exclamation marks"
> Three shall be the number thou shalt count, and the number of the counting shall be three. Four shalt thou not count, neither count thou two, excepting that thou then proceed to three. Five is right out.
!!41 === true
!!0 === false
I recommend git grep, which is comparable in speed to ripgrep, since it ignores untracked files and searches the object storage directly. It is also able to run in parallel.
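For instance, a CI step along these lines only scans tracked files (the pattern and pathspec are placeholders):

```shell
#!/bin/sh
# git grep searches only tracked files and can run multi-threaded
# (--threads, available in modern git), which keeps it competitive
# with ripgrep on big repos. Pattern and pathspec are placeholders.
if git grep -n --threads=4 'FIXME' -- '*.sh'; then
    echo "FIXMEs found in tracked shell files" >&2
    exit 1
fi
echo "no FIXMEs in tracked shell files"
```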
For Android specifically: Gradle and Android Studio both support a powerful linting framework (to the point that it can provide auto-fixes in the IDE). It's better to provide in-editor guidance to your contributors before the code hits CI than to have CI nag if they didn't fix the in-editor warnings/errors:
Some examples of custom lint rules[1] and the default rules which Android Studio runs[2]:
[0] https://android.googlesource.com/platform/tools/base/+/32923...
[1] https://github.com/ankidroid/Anki-Android/tree/master/lint-r...
[2] https://github.com/ankidroid/Anki-Android/blob/master/lint-r...
A CI config is just a big job that invokes a bunch of tools like "go build" or "npm test", which is exactly what a shell script is too. I would get rid of the YAML and use shell for the whole thing :)
Shell does have some problems, like needing to be more parallel, incremental, and reproducible, but the YAML-based CI systems all have those problems too.
Related: http://www.oilshell.org/blog/2021/04/build-ci-comments.html#... (and the entire post)
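The YAML-to-shell idea above can be sketched as a single fail-fast script; the step commands here are placeholders for your real build/test invocations:

```shell
#!/bin/sh
# CI pipeline as one fail-fast shell script -- a sketch.
set -eu

run() {
    echo "--- $*"    # announce the step
    "$@"             # run it; set -e aborts the script if it fails
}

run true    # placeholder for e.g.: run go build ./...
run true    # placeholder for e.g.: run npm test
echo "all checks passed"
```

The same script runs identically on a laptop and in CI, which is most of the argument against keeping the logic in YAML.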
No reason to wait for CI. There's also the risk of broken intermediate commits unless CI checks each and every commit in isolation.
It denies any push that doesn't meet the specified requirements with an error message, and the end user has to re-do the commit and push again.
This is likely how your authorization is done today already.
The alternative would be a pre-commit hook, but that runs when crafting the commit, and under the control of the end user. That can make for a better user experience since it runs even earlier in the process but isn't necessarily secure. Of course, one can have both.
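A server-side pre-receive hook along those lines might look like this sketch; the ticket-id policy is an assumption, and a real hook also needs to handle branch creation/deletion (all-zero revs):

```shell
#!/bin/sh
# pre-receive hook sketch: reject pushes whose commit subjects lack a
# ticket id (TICKET-123 convention is hypothetical).
set -eu

check_message() {
    if ! printf '%s' "$1" | grep -Eq '[A-Z]+-[0-9]+'; then
        echo "rejected: commit subject needs a ticket id (e.g. PROJ-42): $1" >&2
        return 1
    fi
}

# Git feeds one "<old> <new> <ref>" line per updated ref on stdin.
while read -r oldrev newrev refname; do
    for rev in $(git rev-list "$oldrev..$newrev"); do
        check_message "$(git log -1 --format=%s "$rev")"
    done
done
```

Because the hook runs on the server, the user can't bypass it the way they can skip a local pre-commit hook.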
Disclaimer: I work on trunk :)
Edit: ah.. forgot the whole point.. In Next Generation Shell (author here) this doesn't happen: `ngs -e '$(grep ...).not()'` does the right thing. grep exit code 0 becomes 1, 1 becomes 0, 2 becomes 240 (exception).
I’m also using grep in CI as a simple but effective check on dependency violations in some codebases.
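As a sketch of that kind of check (the directory layout and pattern are hypothetical), failing the build when a forbidden import shows up:

```shell
#!/bin/sh
# Fail the build if "new" code references the legacy package.
# Directory and pattern are hypothetical placeholders.
if grep -rn 'internal/legacy' src/new/ 2>/dev/null; then
    echo "dependency violation: src/new must not import internal/legacy" >&2
    exit 1
fi
echo "no dependency violations"
```

Note that this simple form has the exit-code caveat discussed elsewhere in the thread: a grep error (missing directory, bad permissions) also lands in the "pass" branch.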
For example, you could configure a Docker Compose health check to curl an endpoint for a 200 in production, but in development override the health check command to call /bin/true so you don't get log spam from the health check while still ensuring it passes, since /bin/true is an extremely minimal command that returns an exit code of 0. This can be controlled by a single environment variable too, with no changes needed to your docker-compose.yml file.
I covered this pattern in a recent DockerCon talk at https://nickjanetakis.com/blog/best-practices-around-product....
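A sketch of that env-var-controlled check; the service name, port, and endpoint are placeholders:

```yaml
# docker-compose.yml fragment -- sketch. In development, export
#   HEALTHCHECK_CMD=/bin/true
# and the check passes silently; left unset, Compose interpolates the
# default and curls the real endpoint.
services:
  web:
    healthcheck:
      test: ["CMD-SHELL", "${HEALTHCHECK_CMD:-curl -f http://localhost:8000/healthz}"]
      interval: 30s
```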
(grep -q notwanted /hay/stack; test $? -eq 1) && echo "No violations found"
But you're right... the simpler format would do the wrong thing if, for example, the file didn't exist, had bad permissions, or was a directory instead of a file.
Consider what would happen if your unit test framework couldn’t compile the test files and failed with an error. Isn’t that also a test failure?
To address this: many CI pipelines allow you to specify which exit codes to succeed and fail on per job, and if yours doesn't, the one-liner will need some boolean logic to check the exit code.
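For example, a small wrapper that treats grep's three exit statuses separately, so a grep error isn't mistaken for a pass (the file and pattern are placeholders):

```shell
#!/bin/sh
# grep exits 0 on match (violation found), 1 on no match (pass), and
# >1 when grep itself failed (missing file, bad permissions, ...).
check_no_violations() {
    status=0
    grep -q 'notwanted' "$1" 2>/dev/null || status=$?
    case $status in
        0) echo "violations found in $1";             return 1 ;;
        1) echo "no violations found";                return 0 ;;
        *) echo "grep failed on $1 (status $status)"; return 2 ;;
    esac
}
```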