Most definitely - I'm sure there are at least dozens of implementations of the idea you can find just by googling.
I wrote it because I'd directly seen, or heard colleagues complain about, real proposals to use git KPIs as serious, primary performance measures for ICs. Linking to a satirical example of why poor measures produce poor outcomes can be a pretty effective way to steer that discussion to a good place ;).
This was a fun project to figure out and to frame with the README copy.
From the GH perspective, I'm guessing they run offline batch aggregation jobs that roll up these stats over a 24h window. I capped the total commits at an arbitrary but polite number to avoid DoSing their stats pipeline, after skimming the ToS and finding nothing that said "don't do this." It appears they do monitor for excessive usage, and care more about large objects than about many commits: https://docs.github.com/en/github/site-policy/github-accepta...
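For anyone curious what that capping looks like in practice, here's a minimal sketch of the idea (not the project's actual code): compute a schedule of backdated timestamps and stop once a hypothetical cap is hit, then feed each timestamp to `git commit --date=...`. The names, the cap value, and the per-day density below are all made up for illustration.

```python
from datetime import datetime, timedelta

MAX_COMMITS = 1000  # hypothetical "polite" cap, not GitHub's number

def commit_schedule(start, days, per_day, cap=MAX_COMMITS):
    """Yield backdated commit timestamps, stopping at `cap` total
    so the generated history stays a bounded size."""
    total = 0
    for d in range(days):
        for i in range(per_day):
            if total >= cap:
                return
            # Space commits a minute apart within each day.
            yield start + timedelta(days=d, minutes=i)
            total += 1

# A year of 10 commits/day would be 3650 commits; the cap trims it.
stamps = list(commit_schedule(datetime(2020, 1, 1), days=365, per_day=10))
```

Each timestamp would then be applied when committing, e.g. via the `GIT_AUTHOR_DATE` / `GIT_COMMITTER_DATE` environment variables or `git commit --date`.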