This superior (sic) is what a negative productivity employee looks like.
It might also be a communication issue if the team doesn't challenge the status quo, because this situation is terrible:
> For any ‘exception granted’ we would have to book time with them days in advance then white-board the reason why Sonar is wrong
(no evidence for that, maybe they did challenge it and the superior refused, but I just wanted to mention it)
The only workaround I've found is to create a new function, fill it with useless no-op lines, and write a test for that function, just to bump the percentages back up. This is often harder than it sounds, because the linter will block many types of useless no-op code. We then remove the code as part of another ticket.
And Sonar is far from alone in this. JIRA is the most glaring example I can think of. Growing companies implement cargo-culted tools without understanding their needs and requirements, and let themselves drift into templates or "best practices" that are not relevant or beneficial to their own operations, resulting in an accumulation of frustrations whose impact on the work and the teams is acknowledged far too late.
Care needs to go not only into your tools, but into how they are perceived by both your customers and their primary users (who may have very different, even opposed, perspectives on how and why to use them), from pricing, to documentation, to use cases...
This is especially complex when your tool addresses a regulatory requirement, because it's very often received as a constraining/oppressing "solution" rather than an enabling one: it may be comfortable for you as a seller, and comfortable for your customer, but it may also be an anti-selling point to your customer's users, which will affect future consideration when they become purchasing agents themselves.
Some tools bias people into doing bad things. It's not exactly the tool's fault, and they may even have good uses (like Bash linters), but tools guide people, and it's good to remind people not to follow them blindly.
The process of reality getting abstracted to the point where the abstractions stop being representative of reality is extremely common in software engineering.
https://en.wikipedia.org/wiki/Hyperreality
A hyperreal workplace is one where representations of reality take precedence over reality itself, and the reality of a situation ceases to have meaning, i.e. one where people chase metrics for the sake of metrics, instead of understanding that issue count is supposed to reflect the underlying code quality, and code quality should always take priority over the representation.
The broader philosophical context is relevant because it shows the broader cultural problem instead of assuming the issue is limited to a single tool.
Because, rather than recognizing that overdoing abstractions is a thing and reminding people that there is always a reality out there that won't bend to your wishes, post-modernism (which is the term the GP used) tells people that there is no reality out there, that it's all just human-created abstractions, and dismisses anyone who tries to push back as insufficiently post-modernist.
His pattern for giving a fuck was predictable. My strategy was to hold certain tickets in an undead state (not impacting the metric), then reopen them and close them, demonstrating a metric improvement.
He got his improvement and big shot street cred, users weren’t impacted, and I didn’t have to ruin support to try to grind out small gains.
Just completely impenetrably baffling to me in a way that other fields like chemistry or microbiology or physics or whatever (despite also not being my own) aren't. Not that I understand them, but they're penetrable, I can read more and more and form some kind of understanding.
Is it just me? I don't know what it is, can it really be as simple as philosophy not being taught at school (compulsorily, or young) so I don't have that kind of rough overview of the landscape I do for other broad subjects? (I did take one course in 'contemporary philosophy' at university, which I enjoyed, but we covered only what we covered I suppose - I might be able to hold a (very) basic conversation about Sartre or Wittgenstein, but that page on post-structuralism.. no idea!)
I am not saying it's snake oil, but honestly, given how I've seen it being used, it's not that far off.
I've found the majority of its suggestions helpful, and the ones that are not I simply ignore.
That's the real time sink - figuring out how to get past it. It's a lot more than 2 minutes, sometimes even days if it's something you can't work around and have to go through the red tape if your team isn't empowered to take charge of your own pipelines.
`# noqa: F401` / `// NOSONAR`
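For context, these are the per-line suppression markers for flake8 and SonarQube respectively. A sketch of both in Python (to my knowledge Sonar recognizes `NOSONAR` in a line comment in any language, including `#` comments in Python, but check your analyzer's documentation):

```python
# flake8: `noqa: F401` suppresses the "imported but unused" warning
# on this specific line only.
import os  # noqa: F401

# SonarQube: `NOSONAR` suppresses any issue raised on this line.
value = int("42")  # NOSONAR
```

The trade-off is that these markers silence the tool forever on that line, which is exactly why some teams ban them and force the approval process described above.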
- We support hold-the-line: we only lint on diffs so you can refactor as you go. Gradual adoption.
- Use existing configs: use standard OSS tools you know. Trunk Check runs them with standardized rules and output format.
- Better config management: define config within each repo, and still share configs across the org by defining your own plugin repos.
- Better ignores: you can define line-level and project-level ignores in the repo.
- Still have nightly reporting: we let you run nightly on all changes and report them to track codebase health and catch high-risk vulnerabilities and issues. There's a web app to view everything.
Try it and let me know how it goes. https://docs.trunk.io/check/usage
Initially, people always come out of the woodwork insisting that the gate requirements must be hard blockers and that we can just hand wave away the issues OP listed by tweaking the project rules. I always fight them, insisting that teams should be the owners and to gain quick adoption it should just be considered as another tool for PR reviewers. Eventually, people back off and come to accept that Sonar can be really helpful, but at the end of the day the developers should be trusted to make the right call for the situation. It’s not like we aren’t still requiring code reviews. I feel for OP, but it’s not Sonar’s fault the tool is being used for evil instead of good.
The last time I implemented SonarCloud, I ran an anonymous survey to get people's opinions. For the most part, people liked the feedback Sonar provided. More junior engineers and more senior engineers liked it the most; mid-level engineers, not so much. The juniors liked getting quick feedback prior to asking for code reviews. The more senior engineers, who spend a lot of their time doing PR reviews, liked that it handled more of the generic stuff so that they could focus on the business logic or other aspects of the PR. It's just another tool in the toolbox.
However, I saw it causing similar turd-polishing behaviour: sensible code needing to be changed because it exceeded some obstinate metric, any kind of code movement causing existing issues to appear as "new", false positives due to incomplete language feature support, etc.