We usually learn exact sentences by first getting an intuitive sense of them and then applying corrections.
For example, to memorize "Doggo woofs at kity", we first get the concept of "dog barks at cat". That concept compresses well because, intuitively, we know that dogs bark and that cats are common targets. This is our lossy compression, and we could stop there, but it is only part of the story. The sentence says not "dog" but "doggo", which fits the familiar tone, so a good compression algorithm will spend only a few bits on it. Then there is the typo "kity" instead of "kitty"; it costs a little extra space, but again, a good algorithm recognizes common typos and compresses even that. So the entire path to lossless compression matters; lossy compression is just stopping halfway.
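As a rough illustration of this two-stage accounting, here is a minimal Python sketch. It uses zlib's preset-dictionary feature as a stand-in for the shared intuition: encoder and decoder both already "know" the gist, so the compressed output only has to pay for the corrections. The gist string, the settings, and zlib itself are my assumptions for the sketch, not a prescribed method, and on a single short sentence the saving is small; the point is the bookkeeping, not the numbers.

```python
import zlib

GIST = "dog barks at cat"          # the lossy, intuitive stage
EXACT = "Doggo woofs at kity"      # the sentence we must reproduce exactly

def correction_size(gist: str, exact: str) -> int:
    """Bytes needed for the correction stream, given a shared gist.

    The gist is installed as a preset dictionary, so deflate can
    back-reference it for free and only pays for the deviations.
    """
    comp = zlib.compressobj(level=9, wbits=-15, zdict=gist.encode())
    return len(comp.compress(exact.encode()) + comp.flush())

def standalone_size(exact: str) -> int:
    """Bytes needed with no shared gist (raw deflate, same settings)."""
    comp = zlib.compressobj(level=9, wbits=-15)
    return len(comp.compress(exact.encode()) + comp.flush())

print("corrections given gist:", correction_size(GIST, EXACT), "bytes")
print("standalone, no gist:   ", standalone_size(EXACT), "bytes")
```

A stronger model in place of the gist (say, a language model's predictions instead of a fixed string) would shrink the correction stream further, which is exactly the "good algorithm" the paragraph describes.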
And if pure random noise remains, there is nothing any algorithm can do about it, so all algorithms are on an equal footing there. The key is to make the part the algorithm treats as incompressible noise as small as possible.
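To see the equal-footing point concretely, here is a small sketch comparing zlib on pattern-rich input versus pure random bytes; the repeated sentence and the sizes are illustrative assumptions. Structured data shrinks dramatically, while random noise stays roughly the same size (or grows slightly), because no algorithm can compress data that carries no pattern.

```python
import os
import zlib

structured = b"dog barks at cat " * 64   # regular, pattern-rich input
noise = os.urandom(len(structured))      # pure random bytes, same length

for name, data in [("structured", structured), ("noise", noise)]:
    compressed = zlib.compress(data, 9)
    print(f"{name}: {len(data)} -> {len(compressed)} bytes")
```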