
6 fascinating smartphone features enabled by AI chips


The past two months have seen both Apple and Huawei launch smartphones, with each device expected to sell by the millions.

But beneath the dual camera trickery, cutting-edge biometrics and “courage”, we’re also seeing the introduction of silicon devoted to artificial intelligence (AI).


The Huawei Mate 10 was first out of the gate, its Kirin 970 processor featuring a dedicated “Neural Processing Unit”, or NPU, to speed up machine learning tasks.

Meanwhile, Apple’s iPhone X packs a screaming A11 Bionic processor, which also touts a “Neural Engine”. This “engine” is essentially a dual-core chip housed within the image signal processor.

So these phones are packing hardware for AI calculations, but the real question is: what’s the point of it all? Is it a marketing gimmick? Or is it a game-changing technology? Well, here are a few improvements the tech could (or will) bring.

More tasks done while offline

Generally speaking, AI chips can complete some tasks without needing to connect to the cloud. You see, right now it’s often faster to send the data over an internet connection, have a server process it and get the answer back than it is to crunch that data on the phone itself.

But Huawei and Apple are hinting at a more offline-focused future for AI. In fact, Huawei is using its NPU to accelerate Microsoft’s language translation tool, allowing for improved offline inference on the Mate 10.
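To get a feel for what fully on-device inference looks like to a developer, here’s a minimal sketch using Apple’s Core ML framework (announced earlier this year). The “SentimentClassifier” model is a hypothetical stand-in for any .mlmodel bundled with an app, and its prediction signature is illustrative; the key point is that the answer never leaves the handset.

```swift
import CoreML

// A minimal sketch of offline, on-device inference with Core ML.
// "SentimentClassifier" is a hypothetical bundled model; Xcode generates
// a typed wrapper class like this for any .mlmodel added to a project.
let model = SentimentClassifier()

// Everything below runs locally on the phone: no network round trip,
// which is exactly the pattern dedicated AI silicon is built to accelerate.
if let result = try? model.prediction(text: "This phone is brilliant") {
    print("Predicted label: \(result.label)")
}
```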

Better system performance?

Huawei already touts machine learning in EMUI 5 (its Android overlay), used to optimise system resources and keep things smooth over time. But it claims the NPU will step things up a notch.

Huawei’s Richard Yu claims a 60% improvement in “system response speed” and a 50% improvement in “smoothness of operation” on the Mate 10 due to the AI silicon.

Improved authentication

Biometric authentication such as fingerprint and facial scanning can be a data-heavy process. Thousands of data points need to be processed, which can result in a delay if the hardware isn’t up to scratch.

To this end, Apple is using its Neural Engine in conjunction with its Face ID tech on the iPhone X. During the September event, Apple revealed that not only is it using the AI hardware to process those data points, but it’s also using it to continually improve the stored facial map.
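For the curious, here’s roughly what tapping into Face ID looks like from an app’s perspective, via Apple’s existing LocalAuthentication framework. The depth-map crunching and facial-model refinement happen out of sight; the app only ever sees a pass/fail result.

```swift
import LocalAuthentication

// A rough sketch of requesting Face ID (or Touch ID) authentication.
// The Neural Engine and Secure Enclave do the heavy lifting; the app
// receives only a success/failure callback.
let context = LAContext()
var error: NSError?

if context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) {
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock your account") { success, authError in
        print(success ? "Authenticated" : "Failed: \(String(describing: authError))")
    }
}
```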

Even Samsung could be getting in on the action, having invested in DeePhi, an AI startup with its own DPU (deep-learning processing unit) platform. And one of the platform’s touted tasks? Facial recognition.

Fun emoji-related stuff

One of the more crowd-pleasing announcements at the iPhone event was the reveal of Animoji: emoji animated by your own face. It’s a weird concept, but it also relies on AI smarts.

In fact, Apple’s website explicitly states that the Neural Engine powers the machine learning algorithms used to create Animoji. After all, mapping your facial expressions and lip syncing to an avatar isn’t an easy endeavour.
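Under the hood, this sort of face capture is exposed to developers through ARKit’s face tracking, which only runs on the iPhone X’s TrueDepth hardware. Here’s a sketch of how an Animoji-style app might read the per-frame “blend shape” coefficients ARKit publishes (the class here is illustrative, not Apple’s own implementation):

```swift
import ARKit
import SceneKit

// Sketch: ARKit's face tracking publishes ~50 "blend shape" coefficients
// per frame; an Animoji-style app feeds these into a 3D avatar rig.
class FaceTracker: NSObject, ARSCNViewDelegate {
    func start(on view: ARSCNView) {
        view.delegate = self
        view.session.run(ARFaceTrackingConfiguration())
    }

    func renderer(_ renderer: SCNSceneRenderer,
                  didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let face = anchor as? ARFaceAnchor else { return }
        let jawOpen = face.blendShapes[.jawOpen]?.floatValue ?? 0
        let smile = face.blendShapes[.mouthSmileLeft]?.floatValue ?? 0
        print("jawOpen: \(jawOpen), smile: \(smile)") // drive the avatar here
    }
}
```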

I wouldn’t be surprised if other companies with face filters and the like (looking at you, Snapchat) harness the Neural Engine and other AI chips to give the feature a kick in the pants.

Drastically improved photography

Computational photography is a big reason why smartphone camera quality has improved in the last few years, as mobile silicon becomes ever faster. Take a look at the HDR+ mode on the Pixel and Pixel 2, which uses algorithms and machine learning (driven by powerful hardware) to process images for the best results.

Huawei has upped the ante with the Mate 10, using the NPU to offer faster object/scene recognition in the camera app, automatically adjusting settings. And Huawei says the recognition will only get better over time, pushing out updates that offer more recognised scenes and objects. Heck, the Chinese firm says it’s also using the NPU to power its trademark depth-of-field effects.
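Huawei hasn’t detailed its NPU APIs publicly, but Apple’s Vision and Core ML frameworks give a good idea of how on-device scene recognition plugs into a camera pipeline. In this sketch, “SceneClassifier” is a hypothetical bundled image model, not something Huawei or Apple ships:

```swift
import Vision
import CoreML

// Sketch: on-device scene classification, the kind of trick the Mate 10's
// camera uses to auto-adjust settings. "SceneClassifier" is hypothetical.
func classifyScene(in image: CGImage) throws {
    let model = try VNCoreMLModel(for: SceneClassifier().model)
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let top = (request.results as? [VNClassificationObservation])?.first
        else { return }
        // e.g. "food" at 0.92 confidence -> switch the camera to food mode
        print("Scene: \(top.identifier) (\(top.confidence))")
    }
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```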

Speaking of these effects, we’ve also seen Google use AI smarts for the Portrait Mode on the Pixel 2. And as it turns out, the Mountain View company enlisted the help of Intel to create a “Pixel Visual Core” chip to speed things up in this regard.

Better augmented reality?

The past year has seen both Apple and Google make dramatic improvements to augmented reality. Previously, we had very basic AR functionality that lacked proper support for object placement and tracking.

But 2017 has seen Apple’s ARKit and Google’s ARCore roll out, offering a more advanced take on AR without resorting to multiple cameras and sensors. If that’s not enough, Apple says its new A11 chip will improve AR thanks to the Neural Engine’s smarts.

“The Neural Engine in A11 Bionic is purpose-built for machine learning, augmented reality apps and immersive 3D games,” reads an excerpt from Apple’s website. How exactly will this be better than bog-standard ARKit and ARCore apps? We don’t know, but I’m betting on things being a smidgen faster and more accurate, for one.
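For context, here’s what setting up a “bog-standard” ARKit session looks like today. Any Neural Engine gains would presumably land beneath this same API, so an app like this would simply get faster, steadier tracking for free:

```swift
import ARKit

// A bare-bones ARKit world-tracking session with horizontal plane
// detection; a single camera, no extra depth sensors required.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = .horizontal

let sceneView = ARSCNView()
sceneView.session.run(configuration)
// ARKit then hands back ARPlaneAnchor objects as it finds surfaces,
// and the app places virtual objects relative to those anchors.
```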

The wait for third-party apps

Dedicated AI hardware is such a new field that even the manufacturers themselves can’t anticipate what will come as a result. Sure, we’ve got the aforementioned improvements to photography, offline functionality, system stability and security, but what else can we expect?

It’s to this end that Huawei actively courted developers to take advantage of the NPU during its Kirin 970 and Mate 10 events. Apple also announced a machine learning API (Core ML) earlier this year, but it hasn’t courted developers to harness the Neural Engine just yet.

In any event, it seems like we’re in the formative years of the technology, with the possibilities being rather intriguing for now. Will we be seeing an AI chip in every flagship next year?

