- Viewing distances. I only sit about 16-18" away from my 13" MBP screen and only about 24" from my 24" display. Obviously this varies, as I don't sit in a locked position all day, but I think he's erring a bit high on estimated viewing distance, which means the resolution he calculates as necessary to reach "retina" level is too low.
- Screen size. Right now the 13" MBP I'm staring at has a very significant bezel that I would like to see mostly go away in an upcoming model refresh. The iPad's bezel makes sense since it's meant to be held in the hand. The MBP only needs enough bezel to fit the camera up top and needs none on the sides or bottom of the screen.
But yes, his overall point that Apple does not need to go so far as screen doubling on laptops and desktops to achieve pixels that are indistinguishable to the human eye is correct. I just think the resolution at which that point is reached on laptops and desktops is a bit higher than what he's calculated.
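For reference, the "retina" threshold is usually taken as one arcminute per pixel for 20/20 acuity; that assumption is mine, not a figure from the post, but it makes the distance dependence easy to check:

```python
import math

def retina_ppi(distance_in, arcmin=1.0):
    """PPI at which one pixel subtends `arcmin` arcminutes at
    `distance_in` inches -- anything finer reads as 'retina'."""
    pixel_size = distance_in * math.tan(math.radians(arcmin / 60))
    return 1 / pixel_size

for d in (16, 18, 24):
    print(f'{d}": {retina_ppi(d):.0f} PPI')
```

Sit at 18" instead of 24" and the required density jumps from about 143 PPI to about 191 PPI, which is exactly why the assumed viewing distance moves the whole conclusion.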
I've had quite a bit of feedback that I've moved the viewing distances too far out. I measured from my own experience, but I guess I must be atypical. It might be because I use dual monitors on the desktop (I have a 27" and a 26", so I sit back to reduce head turning) and slump when I have a laptop in my lap.
Anyway, I've expanded the spreadsheet that goes with the post to include some extra settings for closer distances and for your specific distances and devices.
Hope this helps!
Here's the spreadsheet: https://docs.google.com/spreadsheet/pub?key=0Aq8W2-V7OXqfdGV...
Today you have to deliver *.png, *@2x.png and *~ipad.png image sets with your app. And there is no off-the-shelf way to reuse @2x images with the iPad, when in most cases they'll work just fine. You can, but it requires creative coding.
Still, this results in app packages that are bloated with image assets in triplicate and now soon to add a fourth version.
If you build a universal app, it seems that even someone downloading your app onto an iPod touch is going to end up with @2x, ~ipad and ~ipad2X (or whatever) images that the app will never use.
Maybe this is the beginning of the end of the universal app?
Applications running in iOS 4 should now include two separate files for each image resource. One file provides a standard-resolution version of a given image, and the second provides a high-resolution version of the same image. The naming conventions for each pair of image files are as follows:
Standard: <ImageName><device_modifier>.<filename_extension>
High resolution: <ImageName>@2x<device_modifier>.<filename_extension>
The <ImageName> and <filename_extension> portions of each name specify the usual name and extension for the file. The <device_modifier> portion is optional and contains either the string ~ipad or ~iphone. You include one of these modifiers when you want to specify different versions of an image for iPad and iPhone. The inclusion of the @2x modifier for the high-resolution image is new and lets the system know that the image is the high-resolution variant of the standard image.
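One way to see what the convention implies is to model the lookup as a most-specific-name-wins search. The exact fallback order below is my reading of the docs, not something verified against UIKit, so treat it as a sketch:

```python
def resolve_image(name, ext, idiom, scale, files):
    """Pick the best match for an imageNamed:-style lookup.
    `idiom` is 'ipad' or 'iphone'; `files` is the set of image
    filenames shipped in the bundle. Prefers the most specific
    name, then falls back to progressively plainer ones."""
    suffix = "@2x" if scale == 2 else ""
    candidates = [
        f"{name}{suffix}~{idiom}.{ext}",  # device- and scale-specific
        f"{name}~{idiom}.{ext}",          # device-specific only
        f"{name}{suffix}.{ext}",          # scale-specific only
        f"{name}.{ext}",                  # plain fallback
    ]
    for candidate in candidates:
        if candidate in files:
            return candidate
    return None

bundle = {"Icon.png", "Icon@2x.png", "Icon~ipad.png"}
print(resolve_image("Icon", "png", "ipad", 1, bundle))    # Icon~ipad.png
print(resolve_image("Icon", "png", "iphone", 2, bundle))  # Icon@2x.png
```

The point the grandparent makes falls out of this: nothing in the scheme lets an iPad reuse an existing @2x iPhone asset without shipping a ~ipad copy or writing your own lookup.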
That said, it would be cool if Apple produced four different variants of an app and would send you the proper one depending on the device you download it on. When downloading with iTunes, your machine would download and store all four versions.
However, this is never ever going to be as cheap as loading a converted PNG (which Apple's modified pngcrush converts for you). I think a lot of devs have the draw/vector vs. precomposed bitmap tradeoff the wrong way round.
Drawing all of your gradated UIButtons with CoreGraphics methods is a false economy compared to just loading a stretchable PNG. Almost all of Apple's UI system imagery is bitmap based, and for a good reason.
[1] http://mattgemmell.com/2012/02/10/using-pdf-images-in-ios-ap...
All it does is bloat up the package and limit what devs can do and still remain in the downloadable-over-3G limit.
Disclaimer: I'm the author.
This is going to be different for each resolution depending on the distance that you view it at, so I built a quick image that you can test this on. http://dl.dropbox.com/u/1437645/alias.html Put that on your phone or desktop and see how far you have to step back before the aliasing effect disappears.
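I don't know exactly what the linked page renders, but a one-pixel checkerboard makes a similar test: once you're far enough back that single pixels can't be resolved, the pattern fuses into uniform grey. A stdlib-only sketch that writes one as an ASCII PGM:

```python
def checkerboard_pgm(width, height, path):
    """Write a 1-pixel black/white checkerboard as an ASCII PGM.
    Viewed from far enough away that individual pixels can no
    longer be resolved, it blends into flat grey."""
    rows = []
    for y in range(height):
        rows.append(" ".join("255" if (x + y) % 2 else "0"
                             for x in range(width)))
    with open(path, "w") as f:
        f.write(f"P2\n{width} {height}\n255\n")
        f.write("\n".join(rows) + "\n")

checkerboard_pgm(64, 64, "alias_test.pgm")
```

Display it at 100% zoom (no scaling, or the test measures your scaler instead of your eyes) and back away until the checkering disappears.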
Doesn't this suggest that Apple is getting bitten by backward compatibility to the mass of pre-existing apps, just like Microsoft got stuck with the mass of existing software running on Windows (and also actually users who get too used to existing UIs/UX)?
Apple has tried to bring resolution independence to Mac OS X for quite some time, and in all honesty it worked well... for vector stuff. It broke in varying ways across iterations every time a bitmap was involved; bitmaps came out at best blurry or unscaled. There's no miracle here: unless you generate bitmaps at numerous sizes (as in icns files, where they range from 16x16 to 512x512 and are downsampled when scaling is needed, like on the Dock), initially small bitmaps will just look bad unless you use a 2x factor. In that case you at best get no improvement (but no loss either) over a non-2x screen, or an uncanny effect when a 'fat pixel' bitmap stands next to a 'thin pixel' vector curve. Anyway, as robomartin noted, things are sufficiently bloated already without including full-scale 16->512 bitmap sets.
What's more, 2x is computationally way simpler and much less costly for everyone. The only non-hackish, seriously viable alternative is to go all the way vectorized. A typical case of 'less is more'/'worse is better' if you ask me.
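The "computationally simpler" part is easy to see: at exactly 2x, scaling a legacy bitmap is pure pixel doubling, with no filtering or resampling decisions at all. A toy sketch:

```python
def scale_2x(pixels):
    """Nearest-neighbour 2x upscale: each source pixel becomes a
    2x2 block. No interpolation, so no blurring and no colour
    bleed -- which is why non-integer scale factors are so much
    harder on bitmap assets."""
    out = []
    for row in pixels:
        doubled = [p for p in row for _ in (0, 1)]
        out.append(doubled)
        out.append(doubled[:])
    return out

print(scale_2x([[1, 2], [3, 4]]))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

A 1.5x factor, by contrast, forces every pixel to be blended with its neighbours, which is exactly where the blurry-bitmap problems of the old resolution-independence experiments came from.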
The human senses have an upper limit of resolution; once we reach that limit, further progress is irrelevant. So once everyone is streaming Netflix at 2x that limit, where does further bandwidth/storage demand come from? Growing populations? There's a limit to that growth. "Big data"? Hardly.
We're rapidly approaching the point where individuals' need for further storage is exhausted. I think it'll be somewhere in the 10-100PB range. Which is pretty damn close.
Interestingly, he also predicted that "it will probably be about seventeen years before these perfect monitors are commonplace", though I think he was just looking at VRAM requirements.
Eventually, though, innovation stops (I don't see many new radio sets today). When that happens, the product becomes almost a commodity, and eventually another product improves upon it so greatly that it can be considered a new product.
Television, for example, replaced radio (two senses versus one sense). The internet, arguably, appeals to the same senses, but allows for user-created content, more freedom, etc.
When we reach the upper bound of innovation for television sets, it too will become a commodity. Perhaps it will eventually be replaced with a product that not only exceeds the capabilities of human sound and sight (for television to hit the bound for innovation, this must happen), but also incorporates something else: maybe it's another sense, maybe it's something more convenient (I guess the internet could, to some extent, be considered an evolution of television).
Assuming also that one voxel is 32 bits and not compressed, then the hologram would be 435,600 bytes per frame per cubic inch. At 24fps (you did say "film") that's 10,454,400 bytes per second per cubic inch.
Let's say it projects a hologram to fill a room the dimensions of a Star Trek-style holodeck, a cube of maybe 10 metres on a side. That's about 400 inches on a side, or 64,000,000 cubic inches. That means that a holographic film would be 669,081,600,000,000 bytes (608.5 terabytes) per second.
So, to answer your question, a 2 hour holographic film would be 4,817,387,520,000,000,000 bytes (4.178 exabytes) in size.
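Spot-checking the arithmetic above in Python (the "terabytes"/"exabytes" figures come out as binary TiB/EiB):

```python
per_frame_in3 = 435_600                 # bytes/frame/cubic inch (32-bit voxels)
per_sec_in3 = per_frame_in3 * 24        # 24 fps
room_in3 = 400 ** 3                     # ~10 m cube, ~400" on a side
per_sec = per_sec_in3 * room_in3        # bytes/second for the whole room
two_hours = per_sec * 2 * 60 * 60       # 2-hour film

print(f"{per_sec:,} bytes/s")           # 669,081,600,000,000
print(f"{per_sec / 2**40:.1f} TiB/s")   # 608.5
print(f"{two_hours:,} bytes")           # 4,817,387,520,000,000,000
print(f"{two_hours / 2**60:.3f} EiB")   # 4.178
```

All three quoted figures check out, so the only soft assumptions left are the voxel size and the lack of compression.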
Here's hoping for an awesome chip that will make all of the graphics production rework worth it.