Neural Accelerator Battle Begins

PARIS — The embedded market for neural network accelerators is heating up, with more systems — ranging from smart speakers and drones to light bulbs — poised to run neural networks locally instead of going back to the cloud for computation.

Remi El-Ouazzane
In an interview with EE Times, Remi El-Ouazzane, vice president and general manager of Movidius, defined the growing trend as “a race for making things smart and autonomous.”

Movidius, an Intel company, launched Thursday (July 20) a self-contained AI accelerator in the form of a USB stick. Called the Movidius Neural Compute Stick, it is designed to plug simply into a Raspberry Pi or x86 PC. The Neural Compute Stick makes it easier for university researchers, independent software developers and tinkerers to compile, tune and accelerate deep learning applications for embedded systems, said El-Ouazzane.
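For developers, the workflow Movidius describes comes down to compiling a trained network into a graph file and then pushing inferences through the stick over USB. The snippet below is a minimal sketch of that flow, assuming the NCSDK v1 Python API (mvnc); the graph file name, input shape and preprocessing are illustrative placeholders rather than a shipping example.

```python
# Sketch only: running one inference on a Movidius Neural Compute Stick
# via the NCSDK v1 Python API (mvnc). "network.graph" and the 224x224x3
# input are placeholders; a real graph would come from the SDK's compiler.
import numpy as np
from mvnc import mvncapi as mvnc

# Find and open the first attached Compute Stick
devices = mvnc.EnumerateDevices()
if not devices:
    raise RuntimeError("No Neural Compute Stick found")
device = mvnc.Device(devices[0])
device.OpenDevice()

# Load a pre-compiled graph file onto the stick
with open("network.graph", "rb") as f:  # placeholder filename
    graph_buffer = f.read()
graph = device.AllocateGraph(graph_buffer)

# Run a single inference on a dummy half-precision input tensor
input_tensor = np.random.rand(224, 224, 3).astype(np.float16)
graph.LoadTensor(input_tensor, "user_object")
output, _ = graph.GetResult()
print("Top class:", int(np.argmax(output)))

# Release the graph and the device
graph.DeallocateGraph()
device.CloseDevice()
```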

Movidius, acquired by Intel last fall and now part of Intel’s New Technology Group, developed the Myriad 2 VPU, the industry’s first always-on vision processor. El-Ouazzane said his ultimate goal in rolling out the stick is to make the Movidius VPU “a reference architecture” for neural networks running at the edge.

Movidius Myriad 2 VPU block diagram (Source: Intel/Movidius)

While the goal is ambitious, industry analysts were quick to point out that Movidius’ Myriad 2 VPU certainly isn’t the only device for running neural networks at the edge in embedded systems.

A plethora of neural accelerators
Jim McGregor, principal analyst at Tirias Research, said, “Technically, you could or would use any board that has a processing element and is intended to run a model.” He explained, “Machine learning/AI models already run on a wide variety of processors and SoCs, especially in the mobile segment.”

A case in point, said McGregor, is Qualcomm-enabled image recognition on the Snapdragon family, beginning with the 820 and using a model developed by Qualcomm. “The Snapdragon is essentially the inference engine,” McGregor said.

Processing solutions that feature parallel processing elements like GPUs, DSPs and FPGAs are well suited as inference engines. Many of the customized silicon solutions under development are using DSPs or FPGAs embedded into an SoC, explained McGregor.
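The reason such parallel hardware maps well onto inference is that most of the work in a trained network reduces to large matrix multiplies, which split naturally across many execution units. The sketch below, assuming nothing beyond NumPy and illustrative layer sizes, shows how a single fully connected layer boils down to exactly that kind of data-parallel arithmetic.

```python
# Illustrative sketch (NumPy only): inference through one fully connected
# layer is a matrix multiply plus a nonlinearity -- the data-parallel
# arithmetic that GPUs, DSPs and FPGAs accelerate well.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in weights from a trained layer: 1024 inputs -> 256 outputs
W = rng.standard_normal((256, 1024)).astype(np.float32)
b = rng.standard_normal(256).astype(np.float32)

def fc_layer(x):
    """One inference step: y = relu(W @ x + b)."""
    return np.maximum(W @ x + b, 0.0)

# A batch of 8 input vectors; each row is independent, so the work
# distributes trivially across parallel processing elements.
batch = rng.standard_normal((8, 1024)).astype(np.float32)
outputs = np.array([fc_layer(x) for x in batch])
print(outputs.shape)  # (8, 256)
```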

Linley Gwennap, principal analyst at the Linley Group, agreed. In an editorial published June 19 in Microprocessor Report, Gwennap wrote that Qualcomm, Apple and Intel (Movidius) are all “creating a new product category: neural accelerators.”

Explaining that the demand for these client-based accelerators is coming from self-driving cars, which require as little latency as possible, Gwennap noted in his editorial that the new technology for handling the processing locally “will trickle down to less expensive applications.” He predicted, “In consumer devices, a small neural accelerator is likely to be a block in the SoC, much like a graphics or image processor. Several intellectual-property (IP) vendors offer such accelerators, minimizing the cost of the extra hardware.”


