Channel: Cadence Blogs

CDNLive India 2017: ThinCi on AI, Machine Learning and Deep Learning

Last week’s blog was about Venu Puvvada’s keynote at CDNLive India. Today’s blog is about the second keynote, delivered on Day 2 of CDNLive India by Dinakar Munagala, CEO of ThinCi Inc (pronounced "Think-Eye"). ThinCi is a semiconductor startup that came out of stealth mode in October last year. The company designs vision processors for a wide range of applications, from self-driving cars to deep-learning supercomputers, and is backed by well-known institutional and private investors.

Dinakar’s presentation was titled “Creating Real Business Out of AI, Machine Learning, and Deep Learning”. He believes we are in the early stages of AI and deep learning: their impact is already visible in autonomous driving, but they will proliferate to different industries and different walks of life. He cited several examples of applications we are already seeing in everyday life. A few that he mentioned:

Netflix applies deep learning algorithms to your viewing habits, so that when you turn on the TV you see a program that is likely to interest you.

In medicine, a startup claims to be able to read X-rays in a fraction of the time a human radiologist takes, and as accurately as a human expert.

In automotive maintenance, a startup has developed a specialized chip for maintenance scheduling that analyzes vibration data from the car’s systems. Using that data, they train neural networks to predict what is going to fail within the next couple of weeks. The user is notified so that he or she can take proactive measures and get the issue fixed before it becomes dangerous.

How does deep learning work? Dinakar said that the major focus now is on deep learning, a technique that automatically extracts features from raw data, makes sense of them, and in effect creates a program to recognize things.
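The idea of extracting features automatically from raw pixels can be illustrated, in a much-simplified form, by the kind of operation a vision network’s lowest layer performs. The sketch below is purely illustrative and not ThinCi’s implementation; it applies a standard Sobel kernel (an assumption for this example) to a toy image to produce edge responses.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (strictly, cross-correlation, as used in deep learning)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Multiply the kernel against each image patch and sum.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Sobel kernel: responds strongly where brightness changes left-to-right.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Toy 6x6 image: dark on the left half, bright on the right half.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

edges = conv2d(image, sobel_x)
print(edges)  # strong responses in the columns where dark meets bright
```

In a real deep network, many such kernels are learned from data rather than hand-chosen, and their outputs feed further layers that assemble edges into parts and parts into whole objects.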
For example, starting from the pixels of a picture, deep learning uses the data to detect edges, then recognizes object parts, then full objects, and finally distinguishes whether the image is a face, car, elephant, chair, or traffic sign. The key thing is that the bottom layers are the same.

Why is deep learning taking off now? While the science has been around for quite some time, deep learning is significant, even disruptive, because it can analyze data and solve problems that were challenging or even impossible earlier. Dinakar said that three factors explain why deep learning is taking off now.

First, it is very data driven, and there is a lot more data available today. Data is needed to “train” neural networks to recognize objects. Today there are 200-million-plus pieces of data of many types – images, speech, Optical Character Recognition (OCR, covering handwriting and printed text) – which is rich material for training neural networks. The availability of data is growing exponentially every year, and as a result new neural networks are emerging year on year.

Second, more powerful compute is available at affordable price points. Because of Moore’s Law and other advances, purpose-built architectures for deep learning – really powerful hardware in a small silicon footprint – are now a reality.

Lastly, the software itself has improved: better neural networks, with image-recognition error rates now lower than those of the human eye.

Using powerful computers and rapidly progressing neural networks, new technologies can now safely be deployed in cars and other mission-critical areas.

Some AI applications of the future

The final part of Dinakar’s presentation was about AI applications that are at the cutting edge today. A few examples:

The Amazon Go convenience store in Seattle, USA, where you walk in, pick up what you want, and just walk out.
It is “automated shopping”: the credit card is charged automatically, with no checkout counter or lines.

Intel’s smart-shelf concept, where, as items need replenishment, a notification is automatically sent to the manufacturer and stocks are replenished.

Amazon Kiva robotics, where robots in fulfilment centers fetch items and bring them to a central place to be sorted and boxed for shipment. Path-planning algorithms find the optimal route for picking orders from shelves; even a small percentage saving in picking time, multiplied by billions of transactions, adds up to a pretty huge number.

Dinakar closed his keynote by talking about the challenges in autonomous driving. The most challenging autonomous driving tasks are: (1) sensing, where different sensors gather data; (2) perception, which builds object recognition and tracking from Lidar point clouds or multiple cameras; (3) decision-making and path planning – how do you overtake a car, how do you avoid hitting a pedestrian, and so on; and (4) manipulation, which controls propulsion, steering, and braking. Of these, perception and decision-making are the most challenging and compute-intensive activities, and the ones where deep learning will have the most impact, he said.

Here are three key takeaways from Dinakar’s talk, in his own words. (Please visit the site to view this audio)
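The warehouse path-planning idea mentioned above can be illustrated with a toy example. Real fulfilment-center planners are far more sophisticated; the grid layout and function below are made up purely for this sketch, which finds a shortest robot route around shelves using breadth-first search.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Length of a shortest 4-connected path from start to goal, or -1 if unreachable.
    grid[r][c] == 1 marks an obstacle (a shelf); 0 is open aisle."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1

# A tiny warehouse: robot travels from top-left to bottom-right around shelves.
warehouse = [[0, 0, 0, 0],
             [1, 1, 0, 1],
             [0, 0, 0, 0],
             [0, 1, 1, 0]]
print(shortest_path(warehouse, (0, 0), (3, 3)))  # → 6
```

Breadth-first search guarantees a shortest route on an unweighted grid; production systems would weight aisles by congestion and batch many orders, which is where the small percentage savings the keynote mentioned come from.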

