No difference in outputs with that change either.
If LLMs lost instruction context that easily they wouldn't be able to attempt to summarize any article posing a question, containing command examples, or using quotes of others being tasked with something. Since LLMs seem to handle such articles the same as any other article this kind method isn't going to be a very effective way to influence them.
Eventually, if you threw enough quantity in and nothing was filtering for only text visible to the user, you may manage to ruin the context window/input token limit of LLMs which don't attempt to manage "long term" memory in some way though. That said, even for "run of the mill" non-AI crawlers, filtering content the user is unable to see has long been a common practice. Otherwise you end up indexing a high amount of nonsense and spam rather than content.