http://ejohn.org/blog/using-waifu2x-to-upscale-japanese-prin...
Glasner et al., "Super-resolution from a single image"; Freeman et al., "Example-based super-resolution"
If we look at most of the literature around upscaling, this method is used pretty frequently.
For a more comprehensive look at using CNNs for image upscaling, see e.g. http://research.microsoft.com/en-us/um/people/kahe/publicati...
At Flipboard, we did not have time to do a full comparison of related upscaling research, but we were happy with the low amount of error our CNN achieved.
How is that any different from just not capturing half the picture data? I don't see how it would be. You do realize a digital camera is just an array of sensors, right? What happens if your camera has half as many sensors? The same thing as what they did: you have half as many pixels.
If the picture is a collection of summed sine waves, maybe. If the big picture is just sampling more frequently, then maybe it's cheating by looking at the encoding. The smaller resolution will have sampling problems: it'll lose higher-frequency data, because it isn't sampled often enough.
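A toy sketch of the sampling point above (my own illustration, not from the thread): a 40 Hz sine sampled above its Nyquist rate is recovered correctly, but sampled below it, the frequency aliases down and the high-frequency information is simply gone. The frequencies and rates here are arbitrary choices for the demo.

```python
import numpy as np

FREQ = 40.0  # Hz; arbitrary test tone for this sketch

def dominant_freq(sample_rate, n=1000):
    """Sample a FREQ-Hz sine at the given rate and return the
    strongest frequency found in its spectrum."""
    t = np.arange(n) / sample_rate
    x = np.sin(2 * np.pi * FREQ * t)
    spectrum = np.abs(np.fft.rfft(x))
    return np.fft.rfftfreq(n, d=1 / sample_rate)[np.argmax(spectrum)]

# 100 samples/s is above the 80 Hz Nyquist rate: the 40 Hz tone survives.
print(dominant_freq(100.0))  # ~40 Hz
# 50 samples/s is below it: the tone aliases to |50 - 40| = 10 Hz.
print(dominant_freq(50.0))   # ~10 Hz
```

Once the signal has aliased, no upscaler can tell the 10 Hz alias apart from a genuine 10 Hz tone; that information has to be guessed back.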
I dunno. I can see the op's point. Maybe there are artifacts introduced by scaling down. Still, regardless of the mechanism, information theory tells us downsampling like this is lossy. Information is thrown away, and the NN needs to make something up to fill in the blanks. Looks better than bicubic to me!
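A quick way to see the "make something up" point concretely (again my own toy, not Flipboard's pipeline): drop every other sample of a signal, interpolate back up, and measure the error. Interpolation alone cannot recover the discarded detail, which is exactly the gap a learned upscaler tries to fill with plausible guesses.

```python
import numpy as np

# Arbitrary random "image row" for the demo.
rng = np.random.default_rng(0)
original = rng.standard_normal(64)

# Downsample: keep half the samples, as in halving a camera's sensor count.
small = original[::2]
x_small = np.arange(0, 64, 2)

# "Upscale" back to full size with plain linear interpolation.
upscaled = np.interp(np.arange(64), x_small, small)

# Nonzero error: the dropped samples are unrecoverable by interpolation.
err = np.max(np.abs(upscaled - original))
print(f"max reconstruction error: {err:.3f}")
```

A CNN upscaler doesn't escape this loss either; it just substitutes statistically likely detail learned from training data instead of the smooth averages interpolation produces.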