What's really cool is that you can see him talk about a lot of these ideas well before they made it into the Pixel phone.
Plus, if you're at all curious about the technical details of how something like Night Sight is implemented on the Pixel, understanding what Fourier transforms are and how they're used is vital.
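For a flavor of what Fourier transforms buy you here, a minimal sketch (my own illustration in numpy, not Google's actual code) of phase correlation, one FFT-based way to estimate the shift between two burst frames before merging them:

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer (dy, dx) shift by which frame b is offset from a.

    Phase correlation: the cross-power spectrum of two shifted images is a
    pure phase ramp, and its inverse FFT is a delta peak at the shift.
    """
    Fa = np.fft.fft2(a)
    Fb = np.fft.fft2(b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image into negative offsets
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return int(dy), int(dx)

# Toy demo: shift a random image and recover the offset
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))
print(phase_correlate(img, shifted))  # → (5, -3)
```

Real burst alignment has to deal with rotation, subpixel shifts, and moving subjects, so this is only the simplest building block.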
https://ai.googleblog.com/2017/04/experimental-nighttime-pho...
And by the original researcher in 2016:
Is there anything expected to be released in the next few months that will be in a similar price, feature set and weight class?
That is absolutely impressive.
The color and text on the fire extinguishers along with the texture detail seen in the headphones in the last picture are just stunning. Congratulations to anyone who worked on this project!
It's too bad that the technology is proprietary. I'm curious what could be done with a larger-sensor camera, from compact cameras to DSLRs.
I'm guessing that it works similarly to low-budget astrophotography, but with the computer doing all the busywork for you: when you want to photograph stars or planets and you don't have a fancy tracking mount to compensate for Earth's rotation, long exposures give very mediocre results. If you expose long enough to see the object clearly you get motion blur; if you use a shorter exposure to reduce the blur you don't gather enough light to get a clear picture.
One solution is to take a bunch of low-exposure pictures in a row and then add them together (as in, sum the values of the non-gamma-corrected pixels) in post, taking care to move or rotate each picture so everything lines up. This way you simulate a long exposure while at the same time correcting for the displacement.
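A toy numpy version of that idea (my own illustration, not any real astro pipeline): simulate a bunch of noisy short exposures of a faint scene, undo each frame's shift, and sum in linear space. The signal grows as N while the noise grows as sqrt(N), so the stack's SNR improves by about sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(1)

# A dim "scene" in linear (not gamma-corrected) intensity units
scene = np.zeros((32, 32))
scene[12:20, 12:20] = 0.05            # faint object

def capture(shift, read_noise=0.02):
    """One short exposure: shifted scene plus sensor read noise."""
    frame = np.roll(scene, shift, axis=(0, 1))
    return frame + rng.normal(0.0, read_noise, scene.shape)

shifts = [(0, 0), (1, -2), (-3, 1), (2, 2)] * 8   # 32 frames
frames = [capture(s) for s in shifts]

# Re-align each frame by undoing its (here, known) shift, then sum linearly.
stack = sum(np.roll(f, (-dy, -dx), axis=(0, 1))
            for f, (dy, dx) in zip(frames, shifts))

def snr(img):
    sig = img[12:20, 12:20].mean()    # object patch
    noise = img[:8, :8].std()         # background patch
    return sig / noise

print(snr(frames[0]), snr(stack))     # stacked SNR is far higher
```

In practice the shifts aren't known and have to be estimated (feature matching, phase correlation, gyro data), which is where most of the difficulty lives.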
Another advantage is that you can effectively do "HDR": suppose you're shooting a panorama with the Milky Way in the sky and a city underneath it. With a long exposure the city lights would saturate completely. With shorter exposures you can correct for that in post by scaling down the intensity of the city lights as you add pixels (or by summing fewer pictures for those areas). This way you effectively get several exposure levels in the same shot, all tweakable in post. In the city/Milky Way example you'd also need to compensate for the motion of the sky but not of the land, which is another thing you can't really do "live".
I have a strong suspicion that this is basically what the software is doing: take a bunch of pictures; do edge/object detection to realign everything (probably also using the phone's IMU data); fit the result onto some sort of gamma curve to figure out the correct exposure; then apply color correction based on a model of the sensor's performance in low light (since I'm sure that under these conditions the sensor starts breaking down and favors some colors over others). Then maybe run a subtle edge-enhancing filter to sharpen things a bit more and remove any leftover blurriness.
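The "fit onto a gamma curve" step of that guessed pipeline might look something like this sketch (my assumptions, not Google's actual algorithm): scale the stacked linear image so a high percentile lands at a chosen display level, then gamma-encode it.

```python
import numpy as np

def tone_map(linear, gamma=2.2, target=0.90, percentile=99):
    """Auto-expose a stacked linear image, then gamma-encode for display.

    Scales the image so the given percentile lands at `target`, clips to
    [0, 1], and applies a standard gamma curve.
    """
    scale = target / np.percentile(linear, percentile)
    exposed = np.clip(linear * scale, 0.0, 1.0)
    return exposed ** (1.0 / gamma)

# A stacked sum is in arbitrary linear units, well above [0, 1]:
rng = np.random.default_rng(2)
stacked = rng.random((16, 16)) * 12.0
out = tone_map(stacked)
print(out.min(), out.max())          # display-ready values in [0, 1]
```

A real pipeline would use a local (spatially varying) tone curve and per-channel color correction rather than one global scale.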
If I'm right then it's definitely a lot of very clever software but it's not like it's really "making up" anything.
Chemical reactions in bacteria breaking down food produce light, enough for humans to see only in the very darkest of places (if you live in a city, you'll never encounter darkness that complete).
A camera simulating a 1 hour exposure time in a closed refrigerator ought to be able to see it pretty easily.
But I didn't find anything on bioluminescence occurring naturally in the kinds of bacteria you'd want to be warned about. Did you ever personally see glowing food?
[1] http://cdn.intechopen.com/pdfs/27440/InTech-Use_of_atp_biolu...
It's very faint and would be difficult to notice without trees to shield it from moonlight. A camera could pick it up with a long exposure.
https://en.wikipedia.org/wiki/Luciferase
"Luciferase is a generic term for the class of oxidative enzymes that produce bioluminescence"
and,
"Bioluminescence is the production and emission of light by a living organism"
Some city folk have doors and window shades. My old apartment's kitchen was on the windowless side of the apartment, behind a door. With the door closed (and the microwave unplugged) it was pitch black. Still, I never saw any glowing food, not even the spoiling fruit on the counter.
(I'm talking about developing sheet film in trays, btw... you don't need a darkroom to develop rolls or 4x5 sheets.)
I don't know if it's really that bad an idea, but we didn't allow projects that had been done before, so they were all rejected.
I'd be interested to see how night mode performs when objects in the frame are moving (it should work fine, since it will track the object), or changing (for example, turning pages of a book - I wouldn't expect it to work in that case).
I have to imagine that the sensor is doing an extra, imperceptible long exposure that is then used to correct the lighting of the dark version.
That said, the effect of some of these photographs is striking, and I'm sure the tech is interesting.
See: https://www.celsoazevedo.com/files/android/google-camera/
Upgrading from a 3-year-old Samsung S6, where I could almost watch the battery percentage tick down point by point, the P20 Pro's 4000 mAh battery has been great (too bad wireless charging didn't appear until the new Mate 20 Pro).
Except the Huawei does, and in actual same-setting comparisons the results are better than the Pixel's.
Is a pure software solution even reliable enough under these conditions? Slowness can be worked around by doing it in the background, and you get a notification when it's complete. Some people would be okay if that's the only way to get photos they wouldn't otherwise be able to get, short of buying an SLR.
I really want to know how that works for people! 99% of photos I take are of people, and the lighting is always bad.
Are there any photos of people?
I wonder if this technology will eventually supersede military night-vision goggles. The ability to add color perception at long distances could be useful for identifying things at night.
"Google’s Night Sight for Pixel phones will amaze you"
Pre-OIS, Google did this with image stacking, which was a ghetto version of a long exposure (stacking many short-exposure photos, correcting the offsets via the gyro, was necessary to compensate for inevitable camera shake). There is nothing new or novel about image stacking or long exposures.
What are they doing here? Most likely it's simply enabling OIS and enabling longer exposures than normal (note the smooth motion blur of moving objects, which is nothing more than a long exposure), and then doing noise removal. There are zero camera makers who are flipping their desks over this. It is usually a "pro" hidden feature because in the real world subjects move during long exposure and shooters are just unhappy with the result.
The contrived hype around the Pixel's "computational photography" (which seems more incredible in theory than in the actual world) has reached an absurd level, and the astroturfing is out of control.
Stacking is quite the opposite of a "ghetto" version of a long exposure - it's the fundamental building block of being able to do the equivalent of a long exposure without its associated problems (motion blur from both camera and subject, high sensor noise if you turn up the gain, and over-saturating any bright spots).
Stacking is the de facto technique used for DSLR astrophotography for exactly these reasons -- see https://photographingspace.com/stacking-vs-single/
However, you're ignoring the _very substantial_ challenges of merging many exposures taken on a handheld camera. Image stabilization is great, but there's a lot of motion over, say, 1 second on a hand-held camera. Much more than the typical IS algorithm is designed to handle.
The techniques are non-trivial: http://graphics.stanford.edu/talks/seeinthedark-public-15sep...
There's a lot going on to accomplish this. It starts with the ability to do high-speed burst reads of raw data from the sensor (so that individual frames don't get motion-blurred, and raw so you can process before losing any fidelity to RGB conversion), and it requires a lot of computational horsepower to perform alignment and merging. I don't know what the Pixel's algorithms are, but merging many images under hand-held camera motion benefits from state-of-the-art work applying CNNs to the problem, at least judging by results from Vladlen Koltun's group at Intel (who I'd put at the forefront of this, along with Marc Levoy's group at Google):
http://vladlen.info/publications/learning-see-dark/
I wouldn't be so quick to dismiss the technical meat behind state of the art low-light photography on cell phones.
You literally repeated exactly what I said image stacking was, yet led off by claiming that I don't know what I'm talking about. Classic.
The goal of both is to achieve the exact same result -- more photons for a given pixel. Stacking is a necessary compromise under certain circumstances -- lack of sufficient stabilization, particularly noisy sensor or environment, etc.
Further, this implementation is clearly long exposures (note the blur rather than strobe).
Why? Because 99.9999% of smartphone photos in real use (e.g. not in a review), give or take 100%, are of people. People move. Long exposures just lead to bad outcomes and blurred people.
I mean seriously search the net for Pixel 3 night mode. It's like the Suit Is Back. They're even using the same verbiage across them. And the uproarious nonsense about Google using AI to colourize is just...well a place like HN should just be chuckling at it.
I honestly don't know which part you're doubting. Long exposures? Do you doubt that other cameras can do long exposures? Do you doubt that they can do noise reduction? Do you doubt that OIS allows for hand-held long exposures, especially on wide-angle lenses? What are you doubting, because these are all trivial things that you can validate yourself.
As to examples, you're taking at face value a puff piece with some absolutely banal examples and exaggerated descriptions (and zero comparable photos from other devices) by someone who apparently knows very little about photography. How should I counter that? I can find millions of night streetscape photos that absolutely blow away the examples given.
Generally if you're going to pander to a manufacturer, you at least talk about things like lux. In this case it's just "look, between this setting and that setting it's different, therefore no one else can do it".
Any thoughts on why Apple, as the other leading phone maker with a heavy emphasis on camera quality, has not implemented anything like it? Not to discount the difficulty, but OIS-aligned long exposures kind of seem like low-hanging fruit. Instead, they keep trying to open the aperture more.