Google shares a deep dive into its new HDR+ with Bracketing technology found in its latest Pixel devices

Google has shared an article on its AI Blog that dives into the intricacies of the HDR capabilities of its most recent Pixel devices. In it, Google explains how its HDR+ with Bracketing technology works to capture the best image quality possible through clever capture and computational editing techniques.

To kick off the article, Google explains how its new ‘under the hood’ HDR+ with Bracketing technology — first launched on the Pixel 4a 5G and Pixel 5 back in October — ‘works by merging images taken with different exposure times to improve image quality (especially in shadows), resulting in more natural colors, improved details and texture, and reduced noise.’

Using bursts to improve image quality. HDR+ starts from a burst of full-resolution raw images (left). Depending on conditions, between 2 and 15 images are aligned and merged into a computational raw image (middle). The merged image has reduced noise and increased dynamic range, leading to a higher quality final result (right). Caption and image via Google.
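The caption above describes the core HDR+ recipe: align a burst of raw frames and merge them into a single, lower-noise ‘computational raw’ image. As a toy illustration only (Google’s real pipeline uses tile-based, sub-pixel alignment and robust merging, none of which is shown here), the basic align-and-average idea looks roughly like this in Python:

```python
import numpy as np

def naive_burst_merge(frames, offsets):
    """Toy burst merge: undo each frame's known offset so it lines up with the
    first (reference) frame, then average. Averaging N frames cuts shot-noise
    standard deviation by roughly sqrt(N)."""
    reference = frames[0]
    aligned = [reference]
    for frame, (dy, dx) in zip(frames[1:], offsets):
        # np.roll stands in for real image alignment (an assumption of this sketch).
        aligned.append(np.roll(frame, shift=(dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)

# Synthetic example: five noisy captures of the same scene, four of them shifted.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, size=(64, 64))
shifts = [(1, 0), (0, 1), (-1, 0), (0, -1)]
frames = [scene + rng.normal(0, 0.05, scene.shape)]
frames += [np.roll(scene, s, axis=(0, 1)) + rng.normal(0, 0.05, scene.shape)
           for s in shifts]
merged = naive_burst_merge(frames, [(-dy, -dx) for dy, dx in shifts])
print("per-frame noise ~0.05, merged noise ~", np.std(merged - scene).round(3))
```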

Before diving into the behind-the-scenes work that goes into capturing HDR+ with Bracketing images, Google explains why high dynamic range (HDR) scenes are difficult to capture, particularly on mobile devices. ‘Because of the physical constraints of image sensors combined with limited signal in the shadows […] We can correctly expose either the shadows or the highlights, but not both at the same time.’
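A quick numerical sketch (entirely made-up numbers, not Pixel sensor data) shows why a single exposure can’t serve both ends of the range: a short exposure keeps the highlights below the sensor’s saturation point but leaves the shadows with very little signal, while a long exposure cleans up the shadows and clips the highlights:

```python
import numpy as np

# Toy illustration of the exposure trade-off: a scene spanning a wide range of
# brightness, captured by a sensor that clips at a fixed full-well capacity.
# All numbers are invented for illustration.
scene = np.array([2.0, 50.0, 4000.0])       # shadow, midtone, highlight (photons/ms)
full_well = 10_000.0                        # sensor saturates above this signal

def capture(exposure_ms):
    signal = scene * exposure_ms
    clipped = np.minimum(signal, full_well)  # highlights blow out past full well
    shot_noise = np.sqrt(clipped)            # Poisson shot noise ~ sqrt(signal)
    return clipped, clipped / shot_noise     # recorded signal and per-pixel SNR

for exposure in (1.0, 100.0):
    signal, snr = capture(exposure)
    print(f"{exposure:>5} ms  signal={signal}  SNR={snr.round(1)}")
# Short exposure: highlights intact but shadow SNR is terrible.
# Long exposure: shadows are clean but the highlight pixel has clipped.
```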

Google says one way to combat this is to capture two different exposures and combine them — something ‘Photographers sometimes [do to] work around these limitations.’ While this works fairly well for cameras with larger sensors, whose images can be merged afterward on the more capable processors inside tablets and laptops, Google says it’s a challenge on mobile devices because it requires ‘Capturing additional long exposure frames while maintaining the fast, predictable capture experience of the Pixel camera’ and ‘Taking advantage of long exposure frames while avoiding ghosting artifacts caused by motion between frames.’

Google was able to mitigate these issues with its original HDR+ technology by prioritizing the highlights in an image and using burst photography to reduce noise in the shadows. Google explains the HDR+ method ‘works well for scenes with moderate dynamic range, but breaks down for HDR scenes.’ As for why, Google breaks down the two different types of noise that creep into an image when capturing bursts of photos: shot noise and read noise.

Google explains the differences in detail:

‘One important type of noise is called shot noise, which depends only on the total amount of light captured — the sum of N frames, each with E seconds of exposure time, has the same amount of shot noise as a single frame exposed for N × E seconds. If this were the only type of noise present in captured images, burst photography would be as efficient as taking longer exposures. Unfortunately, a second type of noise, read noise, is introduced by the sensor every time a frame is captured. Read noise doesn’t depend on the amount of light captured but instead depends on the number of frames taken — that is, with each frame taken, an additional fixed amount of read noise is added.’

Left: The result of merging 12 short-exposure frames in Night Sight mode. Right: A single frame whose exposure time is 12 times longer than an individual short exposure. The longer exposure has significantly less noise in the shadows but sacrifices the highlights. Caption and image via Google.

As visible in the above image, Google highlights ‘why using burst photography to reduce total noise isn’t as efficient as simply taking longer exposures: taking multiple frames can reduce the effect of shot noise, but will also increase read noise.’
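Google’s point can be reduced to a back-of-the-envelope model. In the sketch below (a simplification written for this article, not Google’s code), shot-noise variance tracks the total light collected, while a fixed read-noise penalty is paid once per frame, so splitting the same total exposure across twelve frames ends up slightly noisier than one long frame:

```python
import math

def snr(total_signal_e, num_frames, read_noise_e=3.0):
    """Signal-to-noise ratio for a given total signal (in electrons) split
    across num_frames frames. Shot-noise variance equals the total signal
    (Poisson statistics); read-noise variance is paid once per frame.
    read_noise_e is an assumed per-frame read noise, not a Pixel spec."""
    shot_var = total_signal_e                  # depends only on total light
    read_var = num_frames * read_noise_e ** 2  # grows with frame count
    return total_signal_e / math.sqrt(shot_var + read_var)

total = 1200.0  # electrons collected over the whole capture, an arbitrary example
print("1 long exposure :", round(snr(total, 1), 1))
print("12 short frames :", round(snr(total, 12), 1))
```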

To address this shortcoming, Google explains how a ‘concentrated effort’ to make the most of recent ‘incremental improvements’ in exposure bracketing allowed it to combine the burst photography component of HDR+ with the more traditional HDR method of exposure bracketing, getting the best possible result in extreme high dynamic range scenes:

‘To start, adding bracketing to HDR+ required redesigning the capture strategy. Capturing is complicated by zero shutter lag (ZSL), which underpins the fast capture experience on Pixel. With ZSL, the frames displayed in the viewfinder before the shutter press are the frames we use for HDR+ burst merging. For bracketing, we capture an additional long exposure frame after the shutter press, which is not shown in the viewfinder. Note that holding the camera still for half a second after the shutter press to accommodate the long exposure can help improve image quality, even with a typical amount of handshake.’
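In data terms, the strategy Google describes amounts to reusing the short frames already sitting in the zero shutter lag buffer and appending one long exposure captured after the shutter press. The sketch below is hypothetical (frame counts, exposure times and names are illustrative, not Google’s internals), but it mirrors that description:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    exposure_ms: float
    captured_after_shutter: bool  # the long bracketed frame is taken post-press

def plan_capture(zsl_buffer: List[Frame], long_exposure_ms: float) -> List[Frame]:
    """Sketch of HDR+ with Bracketing capture: reuse the short ZSL frames that
    were already displayed in the viewfinder, then add one long exposure
    captured after the shutter press (never shown in the viewfinder)."""
    return zsl_buffer + [Frame(exposure_ms=long_exposure_ms,
                               captured_after_shutter=True)]

# e.g. a ZSL buffer of short viewfinder frames plus a ~half-second bracketed frame
zsl = [Frame(exposure_ms=12.0, captured_after_shutter=False) for _ in range(6)]
burst = plan_capture(zsl, long_exposure_ms=500.0)
print(len(burst), "frames,", sum(f.captured_after_shutter for f in burst), "long")
```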

Google explains how its Night Sight technology has also been improved through the use of its advanced bracketing technology. As visible in the illustration below, the original Night Sight mode captured 15 short exposure frames, which it merged to create the final image. Now, Night Sight with bracketing will capture 12 short and 3 long exposures before merging them, resulting in greater detail in the shadows.

Capture strategy for Night Sight. Top: The original Night Sight captured 15 short exposure frames. Bottom: Night Sight with bracketing captures 12 short and 3 long exposures. Caption and image via Google.
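Expressed as simple frame plans, the change shown above keeps the total burst at 15 frames but trades three short exposures for long ones. The counts come from the illustration; everything else about the plans is unspecified there:

```python
# Frame counts from the illustration above; exposure times and ordering are
# not specified there, so only the counts are meaningful.
original_night_sight = {"short": 15, "long": 0}
night_sight_with_bracketing = {"short": 12, "long": 3}

for name, plan in [("original", original_night_sight),
                   ("with bracketing", night_sight_with_bracketing)]:
    print(f"{name:>16}: {plan['short'] + plan['long']} frames "
          f"({plan['long']} long exposures for the shadows)")
```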

As for the merging process, Google says its technology chooses ‘one of the short frames as the reference frame to avoid potentially clipped highlights and motion blur.’ The remaining frames are then aligned with the reference frame before being merged.
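An outline of that logic, with the unspecified parts stubbed out: the reference is always one of the short frames, so clipped highlights or motion blur from a long exposure never end up in the base image. The helper functions below are placeholders invented for this sketch:

```python
import numpy as np

def merge_bracketed(short_frames, long_frames):
    """Illustrative outline of the merge described above. How the reference is
    chosen isn't detailed in the article; here we simply take the first short
    frame, then align and merge everything else against it."""
    reference = short_frames[0]
    others = short_frames[1:] + long_frames
    aligned = [align_to(reference, frame) for frame in others]
    return robust_merge(reference, aligned)

def align_to(reference, frame):
    # Placeholder for Google's alignment step; returns the frame unchanged.
    return frame

def robust_merge(reference, aligned):
    # Placeholder for the per-pixel deghosting merge (see the sketch further below);
    # a plain mean is used here only to keep the outline runnable.
    return np.mean([reference, *aligned], axis=0)
```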

To reduce ghosting artifacts caused by motion, Google says it’s designed a new spatial merge algorithm, similar to the one used in its Super Res Zoom technology, ‘that decides per pixel whether image content should be merged or not.’ Unlike Super Res Zoom, though, this new algorithm faces additional challenges from the long exposure shots, which are harder to align with the reference frame because of blown-out highlights, motion blur and different noise characteristics.

Left: Ghosting artifacts are visible around the silhouette of a moving person, when deghosting is disabled. Right: Robust merging produces a clean image. Caption and image via Google.
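The per-pixel decision can be mimicked crudely: blend in a pixel from an aligned frame only when it agrees with the reference to within a noise-scaled threshold, and keep the reference value where motion makes them disagree. This is a toy stand-in with invented thresholds, not the production Super Res Zoom-style merge:

```python
import numpy as np

def deghosting_merge(reference, aligned_frames, noise_sigma=0.02, k=3.0):
    """Toy per-pixel robust merge: pixels of an aligned frame are blended in
    only where they agree with the reference to within k * noise_sigma;
    elsewhere the reference value is kept, which suppresses ghosting from
    motion between frames. Thresholds are illustrative assumptions."""
    accum = reference.astype(np.float64).copy()
    weight = np.ones_like(accum)
    for frame in aligned_frames:
        agrees = np.abs(frame - reference) < k * noise_sigma
        accum += np.where(agrees, frame, reference)
        weight += 1.0
    return accum / weight

# Usage sketch: a moving object appears in one frame but not in the reference.
ref = np.zeros((4, 4))
moved = ref.copy()
moved[1:3, 1:3] = 1.0    # "ghost" region caused by motion
out = deghosting_merge(ref, [moved])
print(out.max())         # 0.0 -- the ghost pixels were rejected, not averaged in
```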

Google is confident it’s been able to overcome those challenges though, all while merging images even faster than before:

‘Despite those challenges, our algorithm is as robust to these issues as the original HDR+ and Super Res Zoom and doesn’t produce ghosting artifacts. At the same time, it merges images 40% faster than its predecessors. Because it merges RAW images early in the photographic pipeline, we were able to achieve all of those benefits while keeping the rest of processing and the signature HDR+ look unchanged. Furthermore, users who prefer to use computational RAW images can take advantage of those image quality and performance improvements.’

All of this is done behind the scenes without any need for the user to change settings. Google notes ‘depending on the dynamic range of the scene, and the presence of motion, HDR+ with bracketing chooses the best exposures to maximize image quality.’
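That automatic selection can be pictured as a small decision rule: estimate the scene’s dynamic range and the amount of motion, then decide how many long exposures, if any, to add to the burst. The thresholds and frame counts below are invented for illustration; Google doesn’t publish the actual logic:

```python
def choose_bracketing(dynamic_range_stops: float, motion_score: float) -> dict:
    """Hypothetical exposure selection in the spirit of the article:
    more dynamic range favours long bracketed frames, while strong motion
    favours sticking to short exposures. All numbers are illustrative."""
    if motion_score > 0.8:                 # lots of motion: avoid long frames
        return {"short_frames": 12, "long_frames": 0}
    if dynamic_range_stops > 10:           # extreme HDR scene
        return {"short_frames": 12, "long_frames": 3}
    if dynamic_range_stops > 6:            # moderate HDR scene
        return {"short_frames": 12, "long_frames": 1}
    return {"short_frames": 9, "long_frames": 0}

print(choose_bracketing(dynamic_range_stops=11, motion_score=0.2))
```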

Google’s HDR+ with Bracketing technology is found on the Pixel 4a 5G and Pixel 5 in the default camera app as well as in Night Sight and Portrait modes. Pixel 4 and 4a devices also have it, but only in Night Sight mode. It’s also safe to assume this and further improvements will be available on Pixel devices going forward.

You can read Google’s entire blog post in detail on its AI blog at the link below:

HDR+ with Bracketing on Pixel Phones
