From the paper: "Could it be that changes done by the main author are faster and more predictable than corresponding changes made by a minor contributor? We suspect that author experience could be a factor that impacts the predictability of changes to low quality code."
If that turns out to be the case, it would highlight the organizational dimensions of low-quality code: key-personnel dependencies and onboarding challenges.
Hopefully, this work can help developers when communicating with the product/business side. The sad reality is that without quantifiable metrics, short-term targets will win over the long-term maintainability of the codebase. Only a minority of companies actively manage technical debt.
@john-shaffer if this is your tool can you provide a link to how the various metrics are computed?
edit: the FAQ doesn't cover this. It does say the tool requires write access to the GitHub repos it analyses.
Here's a link to some docs about how code health is calculated: https://codescene.io/docs/guides/technical/code-health.html
In simple terms we try to measure cognitive complexity of source code.
A more in-depth description of one of the factors: https://codescene.com/blog/bumpy-road-code-complexity-in-con...
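To make the "cognitive complexity" idea concrete, here is a minimal sketch of a nesting-aware complexity score. This is my own illustration, not CodeScene's actual algorithm: it simply charges 1 point for each branch or loop plus its nesting depth, so a "bumpy" function with nested conditionals scores higher than a flat one with the same number of branches.

```python
# Illustrative sketch only -- NOT CodeScene's code health metric.
# Scores a snippet by adding (1 + nesting depth) for every branch/loop,
# roughly in the spirit of cognitive-complexity measures.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try)

def cognitive_score(source: str) -> int:
    """Return a rough nesting-weighted complexity score for `source`."""
    score = 0

    def walk(node, depth):
        nonlocal score
        for child in ast.iter_child_nodes(node):
            if isinstance(child, BRANCH_NODES):
                score += 1 + depth        # deeper nesting costs more
                walk(child, depth + 1)
            else:
                walk(child, depth)

    walk(ast.parse(source), 0)
    return score

flat = """
def sign(x):
    if x > 0:
        return 1
    if x < 0:
        return -1
    return 0
"""

bumpy = """
def report(items):
    for it in items:
        if it.ok:
            if it.value > 0:
                print(it)
"""

print(cognitive_score(flat))   # two branches, no nesting
print(cognitive_score(bumpy))  # fewer branches, but nested -> higher score
```

The point of weighting by depth is that two sequential `if` statements are easier to reason about than two nested ones, which matches the intuition behind the "bumpy road" smell described in the blog post above.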
I wish that I had this type of data myself back in the day. Would have been so much easier to push back on wishful deadlines.