Then to use it for stego just take your headerless ciphertext and decompress it. Tada: you get model output. To decode the stego just compress it. Assuming everything was built right and that your ciphertext is uniform, the output should be indistinguishable from the model just sampling using an RNG.
As a bonus, you don't even have a stego tool on your computer; you just have a particularly carefully constructed text compressor and decompressor that is perfectly usable (and even state of the art in compression rate, given a big enough model) for the ordinary compression application.
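To make the decompress-to-embed, compress-to-extract idea concrete, here is a toy sketch. A made-up bigram distribution stands in for the LLM, and exact rational arithmetic stands in for a production fixed-precision arithmetic coder; everything beyond the idea itself (the vocabulary, probabilities, and function names) is invented for illustration.

```python
from fractions import Fraction

# Toy "language model": a next-token distribution conditioned on the
# previous token. A real system would query an LLM here; these
# probabilities are made up for illustration.
def model(prev):
    if prev == "the":
        return {"cat": Fraction(1, 2), "mat": Fraction(1, 2)}
    if prev in ("cat", "mat"):
        return {"sat": Fraction(3, 4), ".": Fraction(1, 4)}
    return {"the": Fraction(2, 3), ".": Fraction(1, 3)}

def embed(bits):
    """'Decompress' a uniform bitstring into model tokens (stego encode)."""
    n = len(bits)
    x = Fraction(int(bits, 2), 2 ** n)   # the bits, read as a point in [0, 1)
    lo, hi, prev, out = Fraction(0), Fraction(1), None, []
    # Sample tokens (by interval subdivision) until the interval is
    # narrower than 2^-n, so that it pins down all n bits.
    while hi - lo >= Fraction(1, 2 ** n):
        span = hi - lo
        for tok, p in model(prev).items():
            if x < lo + span * p:        # x falls in this token's subinterval
                hi = lo + span * p
                break
            lo += span * p
        out.append(tok)
        prev = tok
    return out

def extract(tokens, n):
    """'Compress' the tokens back into the n hidden bits (stego decode)."""
    lo, hi, prev = Fraction(0), Fraction(1), None
    for tok in tokens:                   # replay the same interval narrowing
        span = hi - lo
        for t, p in model(prev).items():
            if t == tok:
                hi = lo + span * p
                break
            lo += span * p
        prev = tok
    # The interval is narrower than 2^-n, so it contains exactly one
    # n-bit dyadic point: the original x.
    k = -(-lo.numerator * 2 ** n // lo.denominator)  # ceil(lo * 2^n)
    return format(k, f"0{n}b")
```

If the input bits are uniform (e.g. ciphertext), `embed` produces exactly the model's sampling distribution, and `extract(embed(bits), len(bits))` returns the bits unchanged. A real implementation would additionally have to handle tokenizer quirks and floating-point probabilities, which is where the engineering difficulty lives.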
Now AI has done this for us. I suppose that under authoritarian regimes you will soon have to cryptographically prove that you generated your random bits deterministically from specific keys.
I guess that's sarcasm but I don't really get it.
Just in case you mean it: that doesn't seem even remotely technically doable to me. And even if you managed to make people generate all randomness from a fixed PRNG with key escrow, you would still have to check what they did with that randomness. If you're willing to go that far, the obvious move is to just ban the internet completely (and perhaps also people talking to each other).
But I don't see how these can be effectively used in text content. Yes, an AI program can encode provenance identifiers by length of words, starting letters of sentences, use of specific suffixes, and other linguistic constructs.
However, say that I am a student with an AI-generated essay and want to make sure my essay passes the professor's plagiarism checker. Isn't it pretty easy to re-order clauses, substitute synonyms, and add new content? In fact, I think there is even a Chrome extension that does something like that.
Or maybe that is too much work for the lazy student who wants to rely completely on ChatGPT or doesn't know any better.
I'm confused why you focus on plagiarism detection. That being said, your scenario is very briefly mentioned in the conclusion and requires augmenting the approach (entropy coding) with error correction.
The result would be that as long as your modifications (reordering clauses, etc.) reasonably closely follow a known distribution with limited entropy (which I think they clearly do, although specifying this distribution and dealing with the induced noisy channel might be very hard), there will be a way to do covert communication despite it, though probably only a very small amount of information can be transmitted reliably.

For plagiarism detection, you only need a number of bits that scales like -log[your desired false positive rate], so it would seem theoretically possible. Theoretically it also doesn't matter whether you use text or images, though in practice increasing the amount of transmitted data should make the task a lot easier. However, I'm not sure whether something like this can be practically implemented using existing methods.
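To put a rough number on that scaling claim: a detector that wants to keep its false-positive rate below some threshold needs about -log2 of that threshold in surviving watermark bits. A small illustrative calculation (the function name is made up):

```python
import math

def bits_needed(fpr):
    """Approximate watermark bits that must survive editing to support
    a detection decision at false-positive rate `fpr`."""
    return math.ceil(-math.log2(fpr))

# A 1-in-a-million false-positive rate needs only about 20 bits.
print(bits_needed(1e-6))
```

So even a very noisy channel that delivers a few dozen reliable bits per essay would suffice in principle; the hard part is the channel model, not the bit budget.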
Its direct practical use may be somewhat limited, because people aren't going around communicating randomly selected LLM outputs... and if you use LLM output in a context where human-written text would be expected, it could be distinguished.
It's not useful for watermarking as the first change will destroy all the rest of the embedding.
I can make a contrived example where it's directly useful: imagine you have agents in the field; you could send out LLM-generated spam to communicate with them. Everyone expects the spam to be LLM-generated, so the fact that it's detectable as such reveals nothing. This work discusses how you can make the spam carry secret messages to the agents in a way that is impossible to detect (without the key, of course), even by an attacker that has the exact spam LLM.
Less contrived: a sufficiently short message from a sufficiently advanced LLM is probably indistinguishable from a real post in practice, but that's outside the scope of the paper. It's hard (impossible?) to rigorously analyze that security model, because we can't say much about the distribution of "real text", so we can't say how far LLM output is from it. The best models we have of the distribution of real text are these LLMs; if you take them to BE the distribution of real text, then that approach is perfectly secure by definition.
But really, even if the situation it solves is too contrived to be a gain over alternatives, the context provides an opportunity to explore the boundary of what is possible.
Also, image-based steg: