Chip designer ARM crafts the technology powering virtually all smartphones today. Whether it’s the instruction set or the CPU cores, the UK company is behind many key technologies.
However, Huawei and Apple beat ARM to the punch with dedicated mobile AI silicon, in the form of the Kirin 970’s NPU and A11 Bionic’s neural engine respectively.
Now, ARM has revealed its own machine learning (ML) processor, designed for on-device inference. In other words, devices with this chip wouldn’t have to hook up to the cloud as often.
The UK company claims up to 4.6 TOPS (trillions of operations per second) of performance for the silicon. Meanwhile, Chinese brand Rockchip claims 2.4 TOPS from its newly revealed NPU, while Nvidia’s Tesla P40 data centre GPU tops out (heh) at 40 TOPS.
The meaning behind these chip measurements is still somewhat murky, as the field is still in its early days, with some firms pointing to floating point operations per second (FLOPS) instead. Nevertheless, the figure gives us a decent idea of what to expect for some AI-related tasks.
It’s also worth noting that the processor is scalable, allowing companies to (presumably) harness multiple chips together for a boost of up to 70 TOPS.
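To put that scaling claim in perspective, here’s a quick back-of-the-envelope calculation (our arithmetic, not an ARM figure) of how many of these chips it would take to hit that ceiling:

```python
# Rough arithmetic on ARM's scaling claim: how many 4.6 TOPS chips
# does it take to reach the quoted 70 TOPS ceiling?
single_chip_tops = 4.6   # ARM's claimed per-chip performance
scaled_tops = 70         # ARM's claimed multi-chip ceiling

chips_needed = scaled_tops / single_chip_tops
print(f"Roughly {chips_needed:.1f} chips")  # ~15.2, so about 15 to 16 chips
```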
The ML processor, optimised for ARM’s own CPU cores and Mali GPUs, isn’t only intended for smartphones. In fact, the company says the AI chip can also be used in the IoT, automotive, AR/VR, medical, robotics, drone, wearable and logistics fields.
And yes, the ML processor supports the TensorFlow, TensorFlow Lite, Caffe and Caffe2 frameworks.
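For a sense of what on-device inference with one of those frameworks looks like in practice, here’s a minimal TensorFlow Lite sketch. This is our illustration, not ARM code, and model.tflite stands in for any converted model you might deploy:

```python
# A minimal on-device inference sketch with TensorFlow Lite.
# "model.tflite" is a placeholder path for a converted model.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input matching the model's expected shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)

interpreter.invoke()  # runs entirely on the device, no cloud round-trip
result = interpreter.get_tensor(output_details[0]["index"])
print(result.shape)
```

In a phone shipping ARM’s silicon, the heavy lifting behind that invoke() call is what the ML processor is designed to accelerate.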
ARM is also launching a second-generation object detection (OD) processor, which can be used in conjunction with the ML processor to deliver object identification and detailed people modelling at up to 1080p/60fps.