8 examples of smartphone computational photography we take for granted

We all know that dedicated cameras tend to blow away mobile cameras for pure picture quality, but smartphones have a major draw-card in the form of computational photography.

Instead of relying on large sensors and mechanical apertures, computational photography sees smartphones using their processing oomph to generate some interesting effects. Don’t believe me? Then check out some of these examples…

Dynamic HDR/exposure/flash

Seen on the last few Microsoft/Nokia flagship phones, the Rich Capture technique allows you to adjust the level of HDR, exposure or camera flash after taking a photo.

Available via a general “Rich Capture” toggle in the camera app, the feature applies dynamic HDR during the day, dynamic flash when you turn the flash on, and dynamic exposure at night with the flash disabled.

All three features are rather useful in general, but the dynamic flash is quite a crowd pleaser. After all, why choose between no flash and a ridiculously bright shot when you can find a nice middle ground?
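If you're wondering how a phone can offer a flash slider after the fact, the trick generally boils down to keeping both the flash and no-flash frames and blending them later. Here's a minimal Python/OpenCV sketch of that idea (my own simplification, not Microsoft's actual pipeline, and the file names are placeholders), assuming the two frames are already aligned:

```python
import cv2

# Two aligned captures of the same scene: one without flash, one with.
# File names are hypothetical placeholders.
no_flash = cv2.imread("no_flash.jpg")
with_flash = cv2.imread("with_flash.jpg")

def blend_flash(amount):
    """Blend the frames: amount=0.0 means no flash, 1.0 means full flash."""
    return cv2.addWeighted(no_flash, 1.0 - amount, with_flash, amount, 0)

# Pick a nice middle ground after the shot has been taken.
cv2.imwrite("half_flash.jpg", blend_flash(0.5))
```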

Refocusing

First seen on Nokia handsets but since making its way to Samsung, LG, Xiaomi and other phones, the refocus feature allows you to take a snap that can be (surprise) refocused. In English, this means that you can focus on a different part of the picture after taking the snap.

Dual-camera smartphones can capture a refocusable picture instantly, but single-camera phones require you to stand still for a few seconds while the device fires off a burst of shots at different focus distances.
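To give a feel for the single-camera approach, here's a rough Python/OpenCV sketch (a simplification for illustration, not any vendor's actual code) that picks the sharpest frame around a tapped point from a hypothetical focus stack:

```python
import cv2

def refocus(stack, x, y, window=60):
    """Return the frame from a focus stack that is sharpest around the tapped
    point (x, y), using variance of the Laplacian as a focus measure."""
    best_frame, best_score = None, -1.0
    for frame in stack:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        patch = gray[max(0, y - window):y + window, max(0, x - window):x + window]
        score = cv2.Laplacian(patch, cv2.CV_64F).var()
        if score > best_score:
            best_frame, best_score = frame, score
    return best_frame

# Hypothetical burst captured while the phone sweeps its focus motor.
stack = [cv2.imread(f"focus_{i}.jpg") for i in range(5)]
cv2.imwrite("refocused.jpg", refocus(stack, x=800, y=600))
```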

Simulated aperture tweaking

Speaking of dual-camera devices, one of the coolest applications of two cameras is the ability to simulate aperture adjustments.

Unlike dedicated cameras, smartphones have a fixed aperture, so the amount of light let in and bokeh effects are generally set in stone.

Fortunately, it’s possible to simulate aperture adjustments with two cameras: a dedicated mode on the Huawei P9 lets you dial anywhere from f/0.95 to f/16. The results aren’t always polished, but it makes for a rather cool feature anyway.
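For the curious, here's a crude Python/OpenCV sketch of how a depth map from a second camera can fake an aperture change. The file names, depth threshold and blur scaling are all made up for illustration and have nothing to do with Huawei's actual algorithm:

```python
import cv2
import numpy as np

def fake_aperture(image, depth, subject_depth, f_number):
    """Crude synthetic bokeh: keep pixels near the subject's depth sharp and
    blur everything else, with the blur growing as the f-number shrinks."""
    ksize = max(1, int(31 / f_number)) | 1          # odd kernel; wide at f/0.95, tiny at f/16
    blurred = cv2.GaussianBlur(image, (ksize, ksize), 0)
    in_focus = (np.abs(depth.astype(np.float32) - subject_depth) < 10)[..., None]
    return np.where(in_focus, image, blurred)

image = cv2.imread("shot.jpg")
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE)   # hypothetical depth map from the second camera
cv2.imwrite("f095.jpg", fake_aperture(image, depth, subject_depth=120, f_number=0.95))
```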

Food shots

Yeah yeah, meal pictures are done to death, but that hasn’t stopped companies like Sony and Samsung from innovating on this front with dedicated modes.

Sony and Samsung have both delivered food/gourmet modes that detect the presence of food in a shot and crank up the colours to make it look more appetising. It’s by no means a game-changer, but it’s an interesting use of smartphone power.
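The colour-boosting half of such a mode is simple enough to sketch; actually detecting the food would need a proper classifier on top. A hypothetical Python/OpenCV version of just the saturation boost:

```python
import cv2
import numpy as np

def gourmet_mode(image, boost=1.3):
    """Crank up the saturation so the food looks a little more appetising."""
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * boost, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

cv2.imwrite("dinner_gourmet.jpg", gourmet_mode(cv2.imread("dinner.jpg")))
```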

Smarter selfie options

Say what you will about the Botox-like beautifying options in today’s mobile devices, but you can’t deny that they’re rather interesting features.

Thanks to mobile processing power, today’s selfie cameras are capable of “improving” or accentuating your facial features: smoothing your skin, enlarging your eyes (go figure) or even predicting your age (a silly but fun Xiaomi feature).
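The skin-smoothing part is essentially an edge-preserving blur. Here's a tiny Python/OpenCV sketch of the idea; a real beautify mode would confine this to detected face regions rather than the whole frame, and the file name is a placeholder:

```python
import cv2

# A bilateral filter smooths skin texture while keeping strong edges
# (eyes, lips, hairline) reasonably crisp.
selfie = cv2.imread("selfie.jpg")
smoothed = cv2.bilateralFilter(selfie, 9, 75, 75)
cv2.imwrite("selfie_beautified.jpg", smoothed)
```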

3D modelling

Sure, it requires two cameras or dedicated hardware to accomplish fully, but even your average smartphone is capable of some entry-level 3D modelling.

Thanks to apps like Seene, you can photograph/scan simple scenes (heh) and objects with a traditional mobile device, delivering some parallax-style effects in the process (tilting to view the scene, anyone?).

The HTC One M8 also delivered similar parallax effects, but the inclusion of a depth sensor aboard the phone meant you didn’t have to stand still for a few seconds.
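For illustration, here's a simplified Python/OpenCV sketch of the parallax trick: shift each pixel sideways in proportion to a depth map, so nearby objects appear to move more than distant ones. The depth map and file names are hypothetical, and this isn't how Seene or HTC actually implement it:

```python
import cv2
import numpy as np

def parallax_shift(image, depth, tilt):
    """Shift each pixel sideways in proportion to its depth value, so closer
    objects appear to move more than distant ones as you 'tilt' the view.
    Assumes a disparity-style map where brighter values mean closer."""
    h, w = depth.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
    shift = tilt * (depth.astype(np.float32) / 255.0)
    return cv2.remap(image, xs + shift, ys, cv2.INTER_LINEAR)

image = cv2.imread("scene.jpg")
depth = cv2.imread("scene_depth.png", cv2.IMREAD_GRAYSCALE)   # hypothetical depth/disparity map
cv2.imwrite("tilted.jpg", parallax_shift(image, depth, tilt=15))
```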

360-degree photos

Dedicated 360-degree cameras are rather wonderful and deliver a polished experience, but smartphones pack enough horsepower to deliver 360-degree snaps too.

Applications like Google Street View mean you can capture a 360-degree photo on your average mobile device. The drawbacks are that capturing one generally takes a while and the results look unpolished at best.
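If you want to play with the stitching side yourself, OpenCV ships a ready-made panorama stitcher. A minimal Python sketch, assuming a handful of overlapping frames with placeholder file names:

```python
import cv2

# Overlapping handheld frames with placeholder file names.
frames = [cv2.imread(f"pano_{i}.jpg") for i in range(8)]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("Stitching failed with status", status)
```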

Still, with the likes of Facebook and VR headsets supporting 360 photos, the technology will only get better and better.

Object removal

It’s no longer a native option on Nokia/Microsoft and Samsung phones, but object removal is definitely another interesting example of computational photography.

The technique captures several photos in quick succession, then lets you remove objects that popped up between shots, such as photobombers or passing pedestrians. It’s a cool feature, but one that hasn’t seen much use, judging by Samsung seemingly burying it with its S7 range.
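The simplest automatic version of the trick is a per-pixel median over an aligned burst: anything that only shows up in one or two frames simply disappears. A short Python/NumPy sketch of that idea (the real phone features are interactive and more sophisticated, and the file names here are placeholders):

```python
import cv2
import numpy as np

# Several aligned shots of the same scene, taken a moment apart.
burst = [cv2.imread(f"burst_{i}.jpg") for i in range(7)]

# A per-pixel median keeps the static background and drops anything that
# only appears in one or two frames, such as a passing pedestrian.
clean = np.median(np.stack(burst), axis=0).astype(np.uint8)
cv2.imwrite("no_photobombers.jpg", clean)
```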
