Computational photography: Why Google says its Pixel 4 camera will be so damn good

CNET | 10/15/2019 | Stephen Shankland

The Pixel 4 has three cameras and uses computational photography under the hood.

When Google announced its new Pixel 4 on Tuesday, it boasted about the computational photography that makes the phone's photos even better, from low-light shooting with Night Sight to improved portrait tools that identify and separate individual hairs and pet fur. You can even take photos of the stars. The technology behind it all, computational photography, can improve your shots dramatically, helping your phone match, and in some ways surpass, even expensive dedicated cameras.

Google isn't alone. Apple marketing chief Phil Schiller detailed the iPhone 11's new camera abilities in September as well, calling computational photography "mad science."

But what exactly is computational photography?

In short, it's digital processing to get more out of your camera hardware -- for example, by improving color and lighting while pulling details out of the dark. That's really important given the limitations of the tiny image sensors and lenses in our phones, and the increasingly central role those cameras play in our lives.

Heard of terms like Apple's Night Mode and Google's Night Sight? Those modes that extract bright, detailed shots out of difficult dim conditions are computational photography at work. But it's showing up everywhere. It's even built into Phase One's $57,000 medium-format digital cameras.

One early computational photography benefit is called HDR, short for high dynamic range. Small sensors have a limited dynamic range, which makes them struggle to capture both the bright and dim areas of a scene in a single exposure. But by taking two or more photos at different brightness levels and then merging the shots into a single photo, a digital camera can approximate a much higher dynamic range. In short, you can see more details in both bright highlights and dark shadows.
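The merging step can be sketched in a few lines of Python. This is a hypothetical, simplified radiance merge (in the spirit of the classic Debevec weighted-average approach), not Google's or Apple's actual pipeline: each aligned frame is divided by its exposure time to estimate scene brightness, and pixels near the middle of the sensor's range are trusted more than clipped highlights or crushed shadows.

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Merge aligned, bracketed shots into one high-dynamic-range estimate.

    frames: list of float arrays with values in [0, 1], all the same shape.
    exposure_times: shutter time for each frame, in seconds.
    Each pixel's radiance is estimated as (pixel value / exposure time),
    weighted so well-exposed mid-tones dominate the result.
    """
    numerator = np.zeros_like(frames[0], dtype=np.float64)
    denominator = np.zeros_like(frames[0], dtype=np.float64)
    for frame, t in zip(frames, exposure_times):
        # Hat-shaped weight: 1.0 at mid-gray, falling to 0.0 at pure
        # black or pure white, where the sensor has clipped.
        weight = 1.0 - np.abs(2.0 * frame - 1.0)
        numerator += weight * (frame / t)
        denominator += weight
    return numerator / np.maximum(denominator, 1e-8)
```

Given a short and a long exposure of the same scene, both estimates of radiance agree where neither frame is clipped, and the weighting lets each frame cover the tonal range the other one lost.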

There are drawbacks. Sometimes HDR shots look artificial. You...
(Excerpt) Read more at: CNET