Companies could in principle train an in-house AI on their corporate knowledge, and will likely be tempted to do so in the future. But that also creates a significant risk: whoever manages to get their hands on a copy of that model (a single file of weights) instantly gains unrestricted access to all of that valuable knowledge. It will be interesting to see what mechanisms emerge to mitigate that risk.