SANTA CLARA, Calif.—Jeff Bier has some good news and some less-good news when it comes to embedded vision applications. “The good news is there is an enormous and growing diversity of processor choices for vision, and that helps increase the odds of finding something that fits with your application,” Bier, the founder of the Embedded Vision Alliance, said recently. “The bad news is, there are lots of diverse choices and they are changing almost every week. Making sense of it all is a challenging problem.”

Bier offered his state-of-the-sector thoughts in his presentation here at the annual Embedded Vision Summit (May 12).

In some respects, it is this richness of choice that makes embedded vision systems design a great challenge today. There are plenty of processing choices, from CPUs to GPUs, FPGAs, and DSPs, each with its pluses and minuses for vision-processing support. At the same time, vendors are starting to roll dedicated vision processors to market for specific applications. Choices, choices…

But the segment is also relatively nascent and, as such, standards, support and tools infrastructure, and benchmarking guidelines are either still evolving or non-existent at the moment, Bier said.

Starting with the engines, Bier noted that GPUs are generally seen as the vision coprocessor of first resort because they’re often already on the chip, and programmers know how to use them and can leverage established libraries such as OpenGL. But GPUs were designed for 3D graphics, not for vision, and as such are not “exceptionally efficient” for such applications, he said.

DSPs have decades of history and are optimized for streaming media, including some computer vision algorithms. “But most DSPs have moved to low-power audio and speech applications or as co-processors for mobile phone processors,” he said, spots that reward energy efficiency but don’t deliver a lot of performance.

Bier also described use cases in which SoCs combine a CPU with an FPGA on the same silicon, which allows engineers to build a custom vision coprocessor out of the FPGA logic. “Because vision algorithms are so diverse and specialized, that can be really appealing,” he said. “You can create a coprocessor that’s absolutely tuned to your application.” But FPGAs are harder to use than other, more “classical” classes of software-driven processors, he added.

One really intriguing (but perhaps not surprising) trend is the growing number of vision-specific IP cores from IP providers (Apical Spirit and Cadence Tensilica, for example), as well as application-specific processors being built for vision. He highlighted offerings from Analog Devices (BF609), Freescale (S32V), Inuitive (NU3000), Mobileye (EyeQ4), Movidius (Myriad 2), and Texas Instruments (TDA3x). He also noted that companies are beginning to take existing CPUs, GPUs, or a combination of the two and wrap optimized software frameworks around them to target convolutional neural network applications.

What’s under the hood is key

But if there was a hammer-home point in Bier’s packed 30-minute presentation, it was that much work lies ahead in building out the infrastructure around the hardware to really give embedded vision applications escape velocity. “Tools can’t be an afterthought,” he said. “These things are absolutely critical to the effectiveness and usability of a processor.” Compilers, development boards, development tools, optimized software libraries, APIs such as OpenVX, and programming languages all need to evolve to support embedded vision development, he said.
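OpenVX, one of the APIs Bier cites, is the Khronos Group’s graph-based standard for portable vision acceleration: the application describes a pipeline of vision nodes, and the vendor’s runtime decides how to map it onto whatever GPU, DSP, or dedicated vision core sits underneath. As a rough illustration only, here is a minimal sketch of an OpenVX 1.x pipeline; the Gaussian-blur-plus-Sobel graph, the 640x480 image size, and the bare-bones error handling are illustrative choices, not anything taken from Bier’s talk, and the code assumes a conformant OpenVX implementation and its headers are available.

```c
#include <stdio.h>
#include <VX/vx.h>

int main(void)
{
    /* Create an OpenVX context and a processing graph. */
    vx_context context = vxCreateContext();
    vx_graph   graph   = vxCreateGraph(context);

    /* Input frame and final edge-gradient outputs; a real application
       would fill 'input' from a camera or a decoded file. */
    vx_image input  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image grad_x = vxCreateImage(context, 640, 480, VX_DF_IMAGE_S16);
    vx_image grad_y = vxCreateImage(context, 640, 480, VX_DF_IMAGE_S16);

    /* Virtual image: intermediate data the implementation is free to
       keep entirely on the accelerator. */
    vx_image blurred = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_U8);

    /* Build the graph: Gaussian blur feeding a Sobel edge filter. */
    vxGaussian3x3Node(graph, input, blurred);
    vxSobel3x3Node(graph, blurred, grad_x, grad_y);

    /* Verification is where the vendor's runtime maps the whole graph
       onto its hardware before any frame is processed. */
    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);
    else
        printf("Graph verification failed\n");

    /* Release resources. */
    vxReleaseImage(&input);
    vxReleaseImage(&grad_x);
    vxReleaseImage(&grad_y);
    vxReleaseImage(&blurred);
    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}
```

The same source is meant to run on any conformant implementation, which is exactly the kind of cross-vendor software infrastructure Bier argues the segment still needs more of.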
Those tools and APIs are as crucial to overall performance as the processor itself. “When you think about selecting a processor you’re thinking chip or IP,” Bier said. “But you’re buying a whole package—chip, compiler, drivers, API, training, ecosystem partners. This becomes more and more important as vision applications become more and more complex.”

This, though, is both a challenge and an opportunity, because there’s an “arms race” developing in software-development support, tools, libraries, and APIs, he noted. “Traditionally the hard part for processor companies has been the tens of millions of dollars required to design and fab the chip,” Bier said. That, he said, is shifting: “The hard part is delivering the enabling software stack that customers require. No systems company can afford to build all this themselves. They need a chip company to provide most of it, and then they build their application on top of it.”

For more information, visit the Embedded Vision Summit site to get access to the proceedings.

Brian Fuller

Related stories:
— Embedded Vision’s Transformative Potential
— Google Driverless Car’s Sensor, Vision, and Computing Future