I see both sides here, but I don't think it's a hill worth dying on. The 'open source' part in this case is just not currently easily modifiable. That may not always be the case.
I think the two plausible answers are:
1. The person prompting (for example, telling ChatGPT 'please produce a fizzbuzz program') owns the copyright. The creativity lies in the prompt, and the transformation ChatGPT applies is mechanical rather than creative.
2. The output of ChatGPT is derivative of the training data, and so the copyright is owned by all of the copyright holders of that training data, i.e. essentially everyone. In that case it's a glowing radioactive bomb of code, copyright-wise, that cannot be used or licensed meaningfully under open source terms.
There are existing analogues to 1: for example, if someone takes a picture and then edits it in Photoshop, possibly with the "AI erase" tool thingy, they still own the photo's copyright. Photoshop transformed their input (the photo), but Adobe doesn't get any copyright, and neither do the owners of the test files Adobe used to build their AI tool.
I don't think generative AI is like that, but as far as I know it hasn't gone to court yet, so no one really knows.