Sounds great, right? Well, I used the dual lenses to take a 3D still or video maybe, oh, half a dozen times. Why? One reason was that the screen hurt my eyes when in 3D mode. Another reason was that unless someone else had a 3D screen, I wouldn't be able to share the files with them. And another reason was that except for some gimmicky action effects in the movies, 3D isn't really all that spectacular.
But it sounds like what's going on here, with the different focal lens stuff, is a lot different than just a 3D gimmick, and I'm interested to see what can be made of it.
Actually, one thing mentioned in the article -- depth analysis, to generate blurred backgrounds -- would, in principle, work on the HTC EVO 3D. In fact, I'm kind of bummed now that I didn't look into whether any existing software could do it. I would have liked to be able to generate 2D stills with an algorithmically generated shallow depth of field, sort of like what the Lytro (https://www.lytro.com) light field camera does.
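For what it's worth, the idea is simple enough to sketch: once you have a per-pixel depth map from the stereo pair, you blur each pixel by an amount proportional to how far it is from the focal plane. A toy 1-D sketch (the depth map and "image" values are made up, and a real implementation would use a proper 2-D blur kernel, not a box average):

```python
# Toy sketch: depth-dependent blur on a 1-D "image".
# Pixels far from the focal depth get averaged with more neighbors,
# which is the basic idea behind synthetic shallow depth of field.

def depth_blur(image, depth, focal_depth, strength=2.0):
    """Blur each pixel with a radius proportional to |depth - focal_depth|."""
    out = []
    for i, (px, d) in enumerate(zip(image, depth)):
        radius = int(abs(d - focal_depth) * strength)  # defocus grows with depth offset
        lo, hi = max(0, i - radius), min(len(image), i + radius + 1)
        window = image[lo:hi]
        out.append(sum(window) / len(window))  # simple box blur
    return out

image = [10, 10, 10, 100, 100, 100, 10, 10]   # hypothetical intensities
depth = [5, 5, 5, 1, 1, 1, 5, 5]              # subject at depth 1, background at 5
blurred = depth_blur(image, depth, focal_depth=1)
print(blurred)  # subject pixels stay sharp, background pixels get smeared
```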
BTW, in case anyone's interested, I broke the EVO 3D about a month ago and got a Samsung Galaxy S4 Zoom (http://www.samsung.com/global/microsite/galaxycamera/s4zoom/) to replace it. The camera on this thing is amazing! It's got a 10x optical zoom and a Xenon flash, just like a regular point-and-shoot. Of course, when I'm talking on it in public, I end up looking like a dorkwad, because it appears as if I'm talking into a camera, not a cellphone, but it's totally worth it.
The 3D photo I took from the top of Mt Fuji is absolutely breathtaking, IMHO. Gives a completely different sense of scale than a 2D photo.
One important advantage I'd expect that isn't listed: an increase in effective dynamic range (the ability to capture more detail in both shadows and highlights in the same photo [1]) may be possible if there's a sensible way to interpolate data between the 3x and base focal length cameras (which seems to be the case, if the low-light claims are to be believed).
[1] http://cdn-4.nikon-cdn.com/en_INC/o/kiGHs2ZNM_El1gxcFVmhHA2R...
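If the two sensors can be exposed differently (an assumption -- the article doesn't say they can), the fusion could work like classic exposure bracketing: keep the longer exposure for shadows and midtones, and fall back to the scaled-up short exposure where the long one clipped. A toy sketch with made-up 8-bit pixel values:

```python
# Toy exposure fusion: merge a short exposure (good highlights) with a
# long exposure (good shadows) into one higher-dynamic-range frame.
# Values are hypothetical 8-bit intensities; 255 means clipped.

def fuse(short_exp, long_exp, gain=4, clip=255):
    """Prefer the long exposure unless it clipped; then fall back to
    the short exposure scaled by the exposure ratio (`gain`)."""
    fused = []
    for s, l in zip(short_exp, long_exp):
        if l >= clip:                 # highlight blown out in the long exposure
            fused.append(s * gain)    # recover it from the short exposure
        else:
            fused.append(float(l))    # shadows/midtones: keep the cleaner long exposure
    return fused

short_exp = [10, 40, 60, 63]     # dark, but nothing clipped
long_exp  = [40, 160, 240, 255]  # 4x exposure; last pixel clipped
print(fuse(short_exp, long_exp))  # → [40.0, 160.0, 240.0, 252]
```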
1) Two different focal lengths/zoom levels without having to use digital zoom
2) Better low light quality due to twice as much information
3) Better depth analysis: quicker autofocus, blurred backgrounds, augmented reality
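The low-light claim (#2) presumably comes down to averaging: two independent readings of the same scene halve the noise variance, so the combined frame is roughly sqrt(2)x cleaner. A quick simulation with synthetic Gaussian noise (not real sensor data):

```python
import random
import statistics

# Simulate the same pixel read by two independent noisy sensors.
# Averaging the two readings cuts the noise variance roughly in half,
# i.e. the standard deviation drops by about a factor of sqrt(2).

random.seed(42)
true_value = 100.0
noise_sigma = 10.0
n = 20000

single = [random.gauss(true_value, noise_sigma) for _ in range(n)]
averaged = [(random.gauss(true_value, noise_sigma) +
             random.gauss(true_value, noise_sigma)) / 2 for _ in range(n)]

print(statistics.stdev(single))    # ~10
print(statistics.stdev(averaged))  # ~10 / sqrt(2) ≈ 7.07
```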
Engineering : But... http://xkcd.com/1014/
Marketing : Hmm.. how about 2 sensors, I mean 2 at the back side alone. 3D TVs are a big hit. People would love them. I bet God had plans when he put 2 eyes instead of one.
Engineering : Whatever...
Sensor manufacturers : Yaay !
HN guys : Wow, I got a dual-cam app idea that FB would love to acquire... Ain't 2 Instagrams better than 1?
For example, how can it improve low light on the 1X shot for the pixels outside of the 3X frame?
As for shooting 3D with dual-lenses, I don't think there are many applications for that right now, but perhaps there will be in the future.
If for example recording video in 3D will make it look much better when watching it later with a VR headset, that could be pretty cool. It could also be used for creating models/avatars of yourself, again probably most useful in VR worlds/games, and other stuff like that.
But the two lenses are taking the picture from different angles, and with different focal lengths! I don't understand how matching of pixels is possible without an error greater than the noise they are trying to remove... It definitely can't be equivalent to one large sensor, as they claim. Unless they can somehow route the image from one lens to both sensors? Is that even possible?
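My understanding (an assumption, not something the article spells out) is that they don't need to be pixel-perfect a priori: stereo pipelines first rescale the views to a common focal length, then search for the per-patch shift (disparity) that best matches, and only average pixels after alignment. A 1-D toy of that matching step:

```python
# Toy 1-D block matching: for a patch in the left view, find the
# horizontal shift (disparity) that best matches the right view.
# Real stereo matching also handles the scale difference from unequal
# focal lengths; this sketch assumes the views were already rescaled.

def best_disparity(left, right, i, patch=1, max_disp=3):
    """Return the shift d minimizing the sum of absolute differences."""
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        cost = 0
        for k in range(-patch, patch + 1):
            li, ri = i + k, i + k + d
            if 0 <= li < len(left) and 0 <= ri < len(right):
                cost += abs(left[li] - right[ri])
            else:
                cost += 255  # penalize out-of-bounds comparisons
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

left  = [10, 10, 50, 90, 50, 10, 10, 10]
right = [10, 10, 10, 10, 50, 90, 50, 10]  # same scene, shifted by 2 pixels
print(best_disparity(left, right, i=3))   # → 2
```

Once every patch has a disparity like this, the two frames can be warped onto each other and averaged, which is presumably where the noise reduction comes from.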
I am not mocking this -- I'm genuinely interested. It seems like this idea would already have been tried, and if it were good, it would already be mainstream in cameras.
The literal answer to this question is that most serious cameras have interchangeable lenses; I shoot with an Olympus OMD EM5 and primarily use three primes. A lot of people use zoom lenses, or primarily use zooms.
If you're only referencing cell phones, I assume that there is an obvious trade-off between size and putting another lens in the camera.
This is one of the reasons some really old color movies could be restored so well. They were filmed with 3 lenses, one for each color (RGB). Since there were 3 sources of data, it was much easier during restoration to find the best image. http://en.wikipedia.org/wiki/Separation_masters
But as far as taking photos in 3D, that doesn't sound like the main (or even a) drive behind this.
This new crop of phones doesn't save 3D images you can view on a 3D screen like the EVO 3D did, though. It just uses the depth information for other things, like blurring backgrounds and Kinect-like computer vision.
These techniques are very hard to reproduce, however. Artificial 3D sensors lean pretty heavily on some sort of triangulation, because it can be formulated mathematically.
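The math behind that triangulation is a clean closed form: for a rectified stereo pair with focal length f (in pixels) and baseline B between the lenses, a pixel disparity d gives depth Z = f * B / d. A toy computation with made-up numbers (the focal length and baseline below are hypothetical, not any real phone's specs):

```python
# Toy stereo triangulation: depth from disparity for a rectified pair.
# Z = f * B / d, where f is the focal length in pixels, B the baseline
# between the two lenses in meters, and d the disparity in pixels.

def depth_from_disparity(f_pixels, baseline_m, disparity_px):
    """Depth in meters; larger disparity means a closer object."""
    return f_pixels * baseline_m / disparity_px

f_pixels = 2800.0  # hypothetical focal length in pixels
baseline = 0.012   # hypothetical 12 mm between the two lenses
for d in [40.0, 10.0, 2.0]:
    z = depth_from_disparity(f_pixels, baseline, d)
    print(f"disparity {d:5.1f} px -> depth {z:.2f} m")
```

Note how quickly depth precision degrades at small disparities: with a phone-sized baseline, distant objects all collapse into near-zero disparity, which is why phone depth maps are only useful at portrait distances.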