The ML (machine learning) processor is designed for on-device inference, reducing the need to connect to the cloud for machine learning tasks. This shift towards on-device machine learning promises benefits in both privacy and data usage.
However, the ML processor isn’t going to hit production smartphones this year, ARM told Gearburn.
“We’re already engaged with lead partners, with final release of the processor due in the summer (2018) and end product devices typically expected 9-12 months after that,” the company told us in an email Q&A.
On benchmarks and AI chips in budget phones
The processor also packs 4.6 TOPs of performance, but the industry has yet to converge on a standard way to measure AI chip performance: ARM and Rockchip quote TOPs, while Huawei has opted for FLOPs.
“As with any new field, the benchmark requirements take a little time to settle down. In the absence of agreed, meaningful benchmarks reflecting real-world use-cases, people can fall back on the architectural numbers from their designs (triangles/second, MACs etc.),” the UK chip designer explains.
“These are useful, but don’t tell the whole story. ARM is working across the industry to help build consensus around more realistic benchmarks, and will quote these when we get there. In the meantime, we believe TOPs is reasonable — FLOPs refers to floating-point operations, which are less useful in inference at the edge, which is where the ARM ML processor is pitched. We quote TOPs which refers to tera-Operations per second (trillion operations per second), where those operations are on 8-bit integers.”
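The distinction ARM draws can be illustrated with a small sketch (hypothetical numbers, not ARM's implementation): in edge inference, trained floating-point weights are quantised to 8-bit integers, so the multiply-accumulates the hardware performs are integer operations, which is why throughput is quoted in TOPs rather than FLOPs.

```python
def quantize(values):
    """Symmetric linear quantisation of floats to int8: v ~ scale * q."""
    scale = max(abs(v) for v in values) / 127.0
    return [max(-127, min(127, round(v / scale))) for v in values], scale

# Float weights and activations, as a trained model would hold them.
weights = [0.8, -1.2, 0.3, 2.0]
activations = [1.5, -0.4, 0.9, -2.1]

w_q, w_scale = quantize(weights)
x_q, x_scale = quantize(activations)

# The hardware accumulates the int8 products in a wide integer
# register -- these are the "operations" counted in TOPs...
acc = sum(w * x for w, x in zip(w_q, x_q))
# ...and rescales back to a real number only once per output.
y_int8 = acc * w_scale * x_scale

# Floating-point reference for comparison.
y_fp = sum(w * x for w, x in zip(weights, activations))
print(y_fp, y_int8)  # the int8 result closely tracks the float one
```

The trade-off is the usual one for inference at the edge: small accuracy loss from quantisation in exchange for cheaper arithmetic and smaller silicon.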
Could we expect AI chips in budget phones in the future? The chip designer reckons that while there is indeed demand, cost and silicon considerations are the overriding factors.
“In the higher tiers of the smartphone market the demand for greater ML capabilities is clearly established, but we’re also seeing strong demand in the lower tiers. However, in these lower tiers, with their greater cost and silicon area sensitivities, we are seeing interest more use (sic) of the ARM CPUs and GPUs in the system, in implementations optimised in favour of smaller die areas and, in some cases, smaller configurations on our ML processor.”
ARM’s new AI chip is set to arrive in phones next year, but it’s coming to other platforms as well
The move to AI chips certainly holds some promise, as Apple uses the tech for facial recognition and animojis, while Huawei uses it for language translation, system management and scene/object recognition. But has the technology arrived too soon?
ARM reckons that “now is exactly the right time to announce what we are doing”.
“It’s important to remember that enabling ML isn’t about any one component. To enable ML, device manufacturers need various things to come together — that’s why we have been developing the SW (software) necessary to enable ML on our platforms,” the firm answered.
“But even then, there is a virtuous circle between the HW (hardware) available and the SW application space. There’s no point writing a fantastic software application for HW that doesn’t exist. But we feel the market conditions are right to enable the HW in the market and provide what’s needed for the SW developers to really start to realise their visions around ML.”