When I was putting together a simple and fast method a while back, I compared my own against the very, very basic algorithms and ended up with this [0].
The far left is the original; the others just shift the scale percentage. There's a surprising amount of detail retained, even though all of the algorithms were pushed well beyond what should be considered their limits. (Purposefully, to expose bias that was easier to analyse.)
[0] https://raw.githubusercontent.com/shakna-israel/upscaler_ana...
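For anyone curious what "the very, very basic" end of that comparison looks like, here's a minimal sketch (my own illustration, not the linked code) of nearest-neighbour upscaling, the simplest algorithm you'd put in such a test. Pushing the scale factor far past its comfort zone makes its characteristic bias — hard block edges — obvious:

```python
# Nearest-neighbour upscale of a 2D pixel grid by an integer factor.
# At extreme scales every source pixel becomes a visible solid block,
# which is the bias this kind of comparison is designed to expose.
def upscale_nearest(pixels, scale):
    """pixels: 2D list of pixel values; returns the image scaled
    by `scale` in both dimensions (each pixel -> scale x scale block)."""
    return [
        [row[x // scale] for x in range(len(row) * scale)]
        for row in pixels
        for _ in range(scale)  # repeat each source row `scale` times
    ]

# A 2x2 checkerboard blown up 4x: each pixel becomes a 4x4 block.
tiny = [[0, 255],
        [255, 0]]
big = upscale_nearest(tiny, 4)  # 8x8 result
```

Bilinear or bicubic filters would instead smear those blocks into gradients — a different bias, equally visible once you scale far enough.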
The Irish Setter example seems to introduce detail that isn't part of the original small image, like the lighter, whitish area between the dog's eyes.
High Fidelity Image Generation Using Diffusion Models - https://news.ycombinator.com/item?id=27858893 - July 2021 (19 comments)
This can introduce dataset-shift bias. For example, if you train a network to upscale 1080p movie frames to 4k, the results may be disappointing when you try to scale 4k to 8k, because the input statistics the network sees at test time no longer match its training distribution.