Make half of the tokens (the AI's "dictionary") slightly more likely.
This wouldn't noticeably hurt output quality, but it would only work for longish outputs, and the token-probability "key" could probably be reverse-engineered with enough output.
The bias would be pretty easy to spot by comparing against baseline word frequencies in typical datasets. And even then, the longer this system runs, the more likely it is to pollute its own detector: people learning to write from GPT itself would start producing "watermarked" text naturally.
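The scheme being described (seed a secret "green" half of the vocabulary, nudge those tokens up during generation, then detect by counting how often they appear) can be sketched roughly like this. The function names, the key, and the hash-based split are all made up for illustration, not any real watermarking implementation:

```python
import hashlib
import math

def green_set(vocab, key):
    # Pseudo-randomly mark ~half the vocabulary as "green" (the boosted
    # half) by hashing each token with a secret key. Hypothetical scheme.
    return {t for t in vocab
            if hashlib.sha256((key + t).encode()).digest()[0] % 2 == 0}

def detect(tokens, green):
    # Without a watermark, ~50% of tokens land in the green half by chance.
    # Return a z-score for the observed green fraction; note it only grows
    # with sqrt(n), which is why this needs longish outputs to be reliable.
    n = len(tokens)
    hits = sum(t in green for t in tokens)
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)

vocab = [f"w{i}" for i in range(1000)]
green = green_set(vocab, "secret-key")
```

A watermarked 200-token output (heavily green) gives a z-score around 14, while a normal 50/50 mix sits near 0. It also shows the reverse-engineering worry: anyone with enough samples can count token frequencies and recover the green/red split without the key.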