It's the start of a new year, and that means it's time for the Consumer Electronics Show in Las Vegas. Although it is a zoo, it is a good place to get a feel for what is new in the consumer electronics space (and that increasingly includes automotive, not just TVs with more and more pixels, new VR headsets, and so on).

Getting around Vegas is a challenge, with 180,000 other people trying to do the same. One thing that amazes me is the long line for taxis. And by taxis, I mean real taxis. Meanwhile, you can summon Uber and Lyft to special pickup areas without any difficulty. The monorail covers a lot of the strip; here is some advice if you want to use it. First, buy a multi-day ticket when you arrive so that you don't have to buy a ticket each day, or each trip (you can save 10% by buying online, although you still have to exchange your barcode for a real ticket; you can't just flash the barcode on your phone at the turnstile). Second, if it is late in the day, it will be almost impossible to get onto the platform, let alone the train itself, at the convention center station. But if you walk a short distance through the halls, you come to the Westgate Hotel, which has its own monorail station, the one before the convention center. You can get on easily, have a seat even, and then sit smugly when 30 seconds later the train pulls into the packed convention center station. Even if you are taking Uber or Lyft, I recommend walking to the Westgate and summoning them there, since it is easier for the incoming drivers to get to you, avoiding the traffic jams at the convention center itself.

Cadence at CES

Cadence used not to go to CES, but Tensilica did, since many of the companies exhibiting at CES are the type of companies that need Tensilica DSPs (and IP in general). Since Cadence acquired Tensilica, they have been there every year and have broadened a little to IP in general, but there is still a Tensilica focus. Tensilica is great for audio, video, neural network processing, and more. Well, CES is all about audio and video, and adding intelligence to everything. If I listed everything that Cadence is showing at their booth (MP25777, South Hall 2), this would be pretty boring. So let me pick three things: one audio, one video, and one neural network processing.

Dolby Atmos on PC (Audio Processing)

At CES last year, Cadence and Dolby announced Tensilica inside regular TVs providing Atmos sound. See my post Dolby Atmos and Tensilica. For weird reasons I don't understand, we were not allowed to say which manufacturer was incorporating this technology, despite the fact that you could go to their booth and listen to it.

Dolby Atmos was originally introduced for theaters, and then home theaters. If you saw Gravity or La La Land in the cinema, you heard Atmos. Of course, commercial theaters, and even home theaters, have high-quality speakers. A typical home theater has three at the front, two at the back, and a subwoofer, making up the 5.1 speakers (the effects subwoofer is the 0.1) of Dolby 5.1 sound. Next, Dolby took the technology to consumer products for use in the home without speakers spread around the room. The "trick" of Atmos uses Head Related Transfer Functions (HRTFs), which is a fancy way of saying that the sound is delivered to match the way our ears and brain work. Or, as Dolby themselves put it:

Head Related Transfer Functions, HRTFs, are measurements of how sounds are modified by the head, torso, and external ears as they make their way to our individual eardrums. Sounds from different directions are modified in distinctive ways, giving our hearing system some clues about where the sound is coming from. However, most of the directional information comes from differences in the sounds arriving at both ears, the inter-aural amplitude and time differences as functions of frequency.
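To get an intuition for what an HRTF encodes, here is a minimal Python sketch of my own (an illustration of the general idea, not Dolby's algorithm and not what runs on the Tensilica DSP). It positions a mono sound using only the two cues the quote mentions, the inter-aural time difference and the inter-aural level difference; the head radius, the roughly 6 dB level difference, and Woodworth's delay formula are textbook approximations, whereas a real HRTF is a full, frequency-dependent filter measured for each ear.

```python
# Crude binaural positioning using only inter-aural time and level differences.
import numpy as np

FS = 48000              # sample rate in Hz
HEAD_RADIUS = 0.0875    # approximate head radius in metres
SPEED_OF_SOUND = 343.0  # metres per second

def render_binaural(mono, azimuth_deg):
    """Return (left, right) channels for a source at azimuth_deg.
    0 degrees is straight ahead, positive angles are to the right."""
    az = np.radians(azimuth_deg)
    # Woodworth's approximation of the inter-aural time difference (seconds)
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (abs(az) + np.sin(abs(az)))
    delay = int(round(itd * FS))                    # delay in whole samples
    far = np.concatenate([np.zeros(delay), mono])[:len(mono)]
    far = far * 10 ** (-6.0 * abs(np.sin(az)) / 20)  # far ear up to ~6 dB quieter
    # the ear away from the source gets the delayed, attenuated copy
    return (far, mono) if azimuth_deg >= 0 else (mono, far)

# half a second of a 500 Hz tone, rendered as if it came from 60 degrees to the right
t = np.arange(int(0.5 * FS)) / FS
left, right = render_binaural(np.sin(2 * np.pi * 500 * t), 60)
```

A production renderer like Atmos does far more than this, steering many simultaneous audio objects through measured, frequency-dependent filters in real time, but the principle of positioning sound with per-ear differences is the same.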
This year, some of the newer PCs include Tensilica running Atmos. The normal thing to say is that "you have to see this to believe it," but in this case, "you have to hear this to believe it." Running on some laptops that you can just go and buy, and using just the built-in speakers that such PCs come with, the sound seems to be completely three dimensional, moving around as you listen, even seeming to come from above despite the speakers being on the desk in front of you. It is the most impressive audio demo I can remember. You get a lot of the same effect as having five speakers with just the two speakers built into the laptop. The lead customer using Atmos on Tensilica under the hood is Huawei's MateBook X, shown above, which you can listen to in the suite.

Dr Ray Dolby, the eponymous founder, was American, but after being an undergraduate at Stanford, he received a Marshall Scholarship to study in Britain, and founded Dolby Laboratories there before moving it to the US. He died a few years ago. In a weird coincidence, just as I was writing this post, I received an email newsletter from Cambridge University saying that the estate of Ray Dolby had given the largest bequest ever to UK science, more specifically to Cambridge's Cavendish Laboratory, where Dr Dolby received his PhD. The Cavendish Laboratory (the name for the Department of Physics) used to be on the New Museums Site in Cambridge, as was the Computer Laboratory, so I spent a fair bit of time studying a short distance from where Dolby had presumably studied many years before (and where the electron, DNA, the neutron, and more were discovered).

Dream Chip Automotive (Video Processing)

It is hard to create convincing demonstrations of Tensilica products at full performance. Cadence can't manufacture a chip just for a demo, so usually we put the processor into an FPGA. However, last year, Dream Chip created an automotive SoC. Dream Chip is a company based in Germany (near Hannover), and they presented their design at CDNLive in both Silicon Valley and Munich last year. I wrote about it in Dream Chip, a Vision for Your Car.

The Dream Chip SoC contains multiple Tensilica Vision P6 DSPs. The chip processes input from four different cameras and combines them to get an all-around view of the vehicle's surroundings. A real car with real automotive cameras would be a bit impractical in the suite, so a model car (well, okay, a blue rectangle) with GoPro cameras provides the video feeds showing the surrounding vehicles. The chip stitches the images together just like in a real car. The picture above on the left shows the setup; on the right is the image stitched together from the four cameras (in real time).
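To give a feel for what "stitching the images together" involves, here is a short Python/OpenCV sketch of the general surround-view technique (my own illustration, not Dream Chip's implementation): each camera image is warped onto a common top-down ground plane with a homography obtained from calibration, and the warped views are composited into one bird's-eye image. The camera frame and calibration points below are made-up placeholders.

```python
# Sketch of surround-view stitching: warp each camera onto a common ground plane.
import cv2
import numpy as np

CANVAS = (800, 800)  # width, height of the top-down output in pixels

def warp_to_ground_plane(image, src_pts, dst_pts):
    """Warp one camera image onto the ground-plane canvas.
    src_pts: pixel locations of four known ground markers in the camera image.
    dst_pts: where those same markers should sit in the top-down canvas."""
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(image, H, CANVAS)

def stitch(warped_views):
    """Naive composite: keep the brightest contribution at each pixel.
    A real system blends the seams where adjacent cameras overlap."""
    out = np.zeros((CANVAS[1], CANVAS[0], 3), np.uint8)
    for view in warped_views:
        out = np.maximum(out, view)
    return out

# stand-in for one camera frame (the real chip takes four camera feeds)
front = np.full((720, 1280, 3), 80, np.uint8)
# made-up calibration: where four ground markers appear in the image vs. the canvas
src = [(200, 600), (1080, 600), (900, 350), (380, 350)]
dst = [(300, 400), (500, 400), (500, 200), (300, 200)]
birdseye = stitch([warp_to_ground_plane(front, src, dst)])
```

A production pipeline also corrects the fisheye lens distortion and blends the overlapping seams photometrically, which is exactly the kind of per-pixel work the Vision P6 DSPs on the chip are doing in real time.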
Inception and Mobilenet (Deep Learning)

You have to have been living under a rock if you haven't noticed that deep learning is becoming a key technique for a wide range of applications. Internally, most of these use neural nets. For those that are cloud-based, such as voice recognition in Alexa, or automatic classification of photographs, the algorithms run in the cloud (sometimes on special hardware such as GPUs) using 32-bit floating point. However, a lot of applications need to run on edge devices, even if the training is done in the cloud. Autonomous driving is an obvious example where, for latency and reliability reasons, much of the identification (is that light red or green?) cannot be offloaded.

One of the keys that has helped unlock a lot of algorithmic development over the last few years was ImageNet, developed from 2009 onwards. Although the name makes it sound like a neural network, it is actually a collection of photographs with classifications. Originally, it was just a below-the-radar poster session at the 2009 Conference on Computer Vision. It now has over 10 million images in its database, along with classification data at varying levels of detail (not just a dog, but the breed and the color). This huge training dataset made it possible for algorithms to compete by working on the same data and comparing what, in EDA, we would call QoR (quality of results). Two of the neural networks that do especially well are Inception and the more recent Mobilenet. These were created by Google and are freely available, and so are a good choice for demos.

The Tensilica family of DSPs is especially suited to image-recognition tasks. Last year, at Mobile World Congress, Cadence had a demo of Inception running on an FPGA containing a synthesized processor. This year, the same Dream Chip SoC containing a Tensilica Vision P6 DSP is used to identify images. It is not straightforward to take a "standard" neural net like Inception and put it into an embedded recognition engine, since simply running the full-resolution model with 32-bit floating point everywhere is too memory and power hungry. For more details about optimizing networks to use less precision without giving up recognition accuracy, see my post from last year CactusNet: One Network to Rule Them All.

Cadence automates the whole process of getting a network into the Tensilica Vision processors with the Xtensa Neural Network Compiler (XNNC) flow. This takes the user code for the neural network, and compiles and optimizes it. The result is then combined with specialized libraries to create an optimized implementation. The diagram above shows the flow. A huge amount of power can be consumed moving data around, so getting that right is an important aspect. For example, Google's specialized TPU requires more power in the x86 server cores that feed the machines than the TPU engines themselves require to do the inference. On-chip, optimizing buses and DMA is as important as minimizing the power consumed in the MACs themselves.

The demo at CES shows both Mobilenet and Inception. Images are shown to a Sony camera looking at a laptop, and the recognition is performed and displayed on the main screen. Sometimes the recognition is almost certain about what is being shown; other times there is a most likely identification and then others (with their probabilities). For example, "tiger 85%, cat 10%, dog 5%". The picture at the start of this section shows the setup.
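To make the precision point concrete, here is a small numpy sketch (an illustration of the general technique, not the XNNC flow or the Vision P6 kernels) of the two ingredients behind that demo output: weights and activations quantized to 8-bit integers so the MACs become cheap integer operations, and a softmax plus top-k step that turns raw scores into a ranked "tiger 85%, cat 10%, dog 5%" style list. The weights, input, and labels are made up.

```python
# Post-training 8-bit quantization of a single dense layer, plus top-k output.
import numpy as np

def quantize(x):
    """Symmetric linear quantization of a float tensor to int8 plus a scale factor."""
    scale = np.abs(x).max() / 127.0
    return np.round(x / scale).astype(np.int8), scale

def int8_dense(x_q, x_scale, w_q, w_scale):
    """Dense layer computed with 8-bit operands and 32-bit accumulation,
    then rescaled back to floating-point scores."""
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32).T
    return acc.astype(np.float32) * (x_scale * w_scale)

def top_k(scores, labels, k=3):
    """Softmax the scores and return the k most likely labels with probabilities."""
    e = np.exp(scores - scores.max())
    probs = e / e.sum()
    order = np.argsort(probs)[::-1][:k]
    return [(labels[i], float(probs[i])) for i in order]

# made-up weights, input features, and labels just to exercise the flow
rng = np.random.default_rng(0)
weights, features = rng.normal(size=(5, 64)), rng.normal(size=64)
w_q, w_scale = quantize(weights)
x_q, x_scale = quantize(features)
scores = int8_dense(x_q, x_scale, w_q, w_scale)
print(top_k(scores, ["tiger", "cat", "dog", "car", "chair"]))
```

Real flows are more careful than this, choosing per-layer (or per-channel) scales from calibration data so that accuracy is preserved, but the payoff is the same: 8-bit MACs and 8-bit data movement instead of 32-bit floating point.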
Learn More

The best way to learn more is to book an appointment with us and come and talk about your specific needs. Or you can come by and see the three demos that I just described, and more. Once again, it is booth MP25777, South Hall 2. Find the drones and keep going (really, we are literally behind the area where all the drones are).

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.