It's not as simple as that though. What if I train an AI to output a book given its title? With a sufficiently expressive and overfit network, it could actually memorise the training data exactly.
That means my network now includes an exact copy of those books (encoded in a weird way, but that's irrelevant). Should I be able to distribute my weights and ignore the copyright on the original books? Of course not!
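To make the memorisation point concrete, here's a minimal sketch of the degenerate limit: a fully overfit "model" is functionally just a lookup table from title to full text. (The titles and text below are made up for illustration.)

```python
# Degenerate limit of a fully overfit model: a lookup table mapping each
# training input (a title) to its exact training output (the book's text).
# Hypothetical titles and placeholder text, purely for illustration.
memorised = {
    "Example Novel": "Chapter 1. It was a dark and stormy night...",
    "Another Book": "Chapter 1. Placeholder opening paragraph...",
}

def generate(title: str) -> str:
    # "Inference" here reproduces the training data verbatim --
    # distributing this function distributes the books themselves.
    return memorised[title]

print(generate("Example Novel"))
```

A real overfit network encodes the same mapping in its weights rather than a dict, but if it reproduces the text verbatim, the encoding doesn't change what is being distributed.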
In this case it is very unlikely that the network weights store much data from any individual GitHub project, but it's definitely a sliding scale.