ENNS 2017: Deep Learning, the New Moore's Law

One of the hottest areas in systems right now is deep learning: neural networks, machine learning, machine vision, convolutional neural networks. There are, obviously, differences between these, depending on the application, but one thing they have in common is that the speed of development over the last couple of years has been staggering. A big driver of this rapid development has been the vision processing required to make autonomous vehicles a reality. (And let me remind you that it was only a decade ago, 11 years I think, that the best any vehicle could do in the DARPA Grand Challenge was eight miles. A couple didn't even get out of the starting corral, one of them a self-driving motorbike; what could possibly go wrong?) And it is not just machine vision that has improved by leaps and bounds. Voice recognition, language translation, and other computationally demanding systems that depend on learning have improved, too. Training is the new programming, and it is this shift that has driven things forward so fast.

Last year, Cadence hosted an embedded neural network summit. (Videos and slides from that day are available here.) We also ran a half-day tutorial, Power Efficient Recognition Systems for Embedded Applications, during the Computer Vision and Pattern Recognition (CVPR) event in Las Vegas last summer. I wrote four or five posts based on the day's presentations; the first is here, and there are links at the bottom of each post to take you to the next one.

ENNS 2017 is Here: Deep Learning, the New Moore's Law

This year, the embedded neural network summit (ENNS) is back, to be held on February 1st (all day). The "E" in the title is important. Autonomous vehicles and other such systems typically have their operation partitioned: the "training" that develops the required parameters for the neural network takes place in data centers in the cloud, and once the parameters have been developed, they are uploaded into the embedded system (such as the car or mobile phone). While it would be incorrect to say that power is not an issue in data centers, the algorithms are far less power-constrained in the cloud than they are in vehicles and phones.
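To make that cloud-to-edge partition concrete, here is a minimal sketch of the idea in plain Python/NumPy. This is not a Cadence or Tensilica flow, and the names train_in_cloud and quantize_for_edge are invented for this example; it simply shows float32 parameters learned "in the cloud" being quantized to int8 before they are shipped to a power- and memory-constrained device:

```python
import numpy as np

def train_in_cloud(rng: np.random.Generator) -> np.ndarray:
    """Stand-in for data-center training; returns learned float32 weights."""
    return rng.standard_normal((64, 64)).astype(np.float32)

def quantize_for_edge(w: np.ndarray) -> tuple[np.ndarray, np.float32]:
    """Symmetric linear quantization to int8: w is approximated by q * scale."""
    scale = np.float32(np.abs(w).max() / 127.0)
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

if __name__ == "__main__":
    w = train_in_cloud(np.random.default_rng(0))   # the "cloud" side
    q, scale = quantize_for_edge(w)                # what gets uploaded
    # The embedded side stores one byte per weight plus a single scale
    # factor: a 4x memory (and bandwidth) saving over float32.
    err = np.abs(w - q.astype(np.float32) * scale).max()
    print(f"max reconstruction error: {err:.4f} (scale = {scale:.5f})")
```

Symmetric int8 quantization is just one possible choice; the broader point is that the expensive learning happens once in the data center, while the artifact that reaches the embedded device is small and cheap to evaluate.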
As it did last year, the summit brings together experts in the field. Once again we are joined by Chris Rowen, who used to be the CTO of the IP group at Cadence and still consults with us as he sets up his new venture. This really isn't about selling you something (although we do have a Tensilica product in the space if you insist!).

The current speaker lineup is as follows:

9:30am   Overview of the Day (Chris Rowen, CEO, Cognite Ventures)
9:40am   When Every Device Can See (Jeff Bier, Founder, Embedded Vision Alliance)
10:10am  Kunle Olukotun, Professor, Stanford University
10:55am  Break
11:15am  Dr. Kai Yu, Founder and CEO, Horizon Robotics
11:45am  What Would It Take for CNN to Go Embedded? (Samer Hijazi, Group Director and Senior Architect, Cadence)
12:15pm  Lunch
1:30pm   Song Han, Ph.D. Candidate, Stanford University
2:00pm   Neural Networks: The New Moore's Law (Chris Rowen, CEO, Cognite Ventures)
2:30pm   Architecting New Deep Neural Networks for Embedded Applications (Forrest Iandola, CEO, DeepScale)
3:00pm   Break
3:20pm   IoT – Embedded Vision and Embedded Intelligence (Ren Wu, Founder and CEO, NovuMind)
3:50pm   Targeting CNNs for Embedded Platforms (Anshu Arya, Solution Architect, MulticoreWare)
4:20pm   Wrap-Up (Chris Rowen, CEO, Cognite Ventures)
4:30pm   Panel Discussion
5:30pm   Reception (Building 10 lobby)

As with all events, this program is subject to change. More details, including a link to register, are here. (It's free!)

Previous: RISC-V "The thing that you learn and the thing that you use are the same"
