Six Degrees of Freedom: Embedded Vision Summit 2017, Pt. II

Embedded Vision Summit 2017 is now over, my notebook is full to overflowing with notes, and my phone is full of bad photos of amazing content. If I had to pick one adjective to describe the show, it might be “motivated”. We all know that machine learning is growing in every possible way something can grow; some might call it exponential growth, some might call it cancerous — but regardless of the metaphor, that growth brings specific challenges that must be addressed. And everyone I encountered — presenters and attendees and exhibitors and staff alike — is motivated to take them on.

With the emergence of a new technology there always come new acronyms! We all like to sound like experts, so here’s one that was new to me: 6DOF. Six degrees of freedom. That is such an evocative phrase, isn’t it? (Turns out it’s not new, it’s just new to me.) What it means is “the freedom of movement of a rigid body in 3D space”; in other words, the first three degrees of freedom are the position of an object in space (its X, Y, and Z coordinates). That positions the body but doesn’t orient it. The other three degrees are how much the body is rotated about the X-axis, the Y-axis, and the Z-axis (often called roll, pitch, and yaw). (You can actually have more than six degrees of freedom. Your hand does, for example, since you can bend your elbow, rotate your forearm, rotate your wrist, and so on.) And 6DOF is all the ways a HoloLens or any AR/VR technology must be able to sense and/or mimic users’ movements. It’s also all the ways my thinking had to be expanded to make sense of all the things I have seen at this Summit. Cool.

I closed yesterday talking about satellites; I’ll now talk about something a little bit closer: the datacenter and the edge. This was the breakout session that filled the 3:00 slot in the Business Insights track on Monday.

If data is the new oil, why are we wasting so much of it?

Derek Meyer, the CEO of Wave Computing, asked this question. As training networks become more data-intensive, we have discovered lots of little ways to cut down on extraneous, unnecessary computation. But, as was made clear in practically every session I attended, everyone working with neural networks faces the same basic challenge: How do we handle the unthinkable amount of computation required? Training these networks can take ages — sometimes literally months — and the current hardware to do it in the cloud (in the form of datacenters, which are “owned” by IT, which opens a completely different can of worms) is simply not sufficient. While datacenters allow for oodles* of data to be gathered, processed, taught, manipulated, compressed, and stored, there is simply more data than any datacenter can manage. Some version of the same data-growth graph appeared in so many presentations and sessions that I stopped trying to keep track. No matter how it is presented, the point is that the data is exploding.

How do we address this issue? Derek’s answer, at least for now: Push machine learning to “The Edge” as much as we possibly can. What is the Edge, you ask? It’s the processing that is embedded into the end device. If training could happen there, maybe this computation limitation would cease to be a problem. But if we don’t have enough processing power in a datacenter, we certainly don’t have enough in each individual edge device to perform the tasks required in a deep learning scenario. This limits what we can push to the edge to only the inference stage of deep learning.
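Before moving on, here’s a toy sketch of why inference travels to the edge so much more easily than training does. To be clear, this is my own illustration, not Wave’s or Nvidia’s actual approach; it uses NumPy, and every name and size in it is made up. The idea: once training finishes in the datacenter, the weights are frozen, so they can be compressed (here, quantized to 8-bit integers) before being shipped to the device, where the only job left is a cheap forward pass:

```python
# Toy sketch of "train in the cloud, infer at the edge": after training,
# the model is frozen, so its weights can be compressed before shipping.
# Illustrative only; all names and sizes here are invented.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these float32 weights came out of a months-long cloud training run.
trained_weights = rng.normal(size=(256, 128)).astype(np.float32)

# "Datacenter" step: post-training quantization, one scale for the tensor.
scale = np.abs(trained_weights).max() / 127.0
weights_int8 = np.round(trained_weights / scale).astype(np.int8)

def edge_infer(x: np.ndarray) -> np.ndarray:
    """One frozen layer on-device: no backprop, no optimizer state."""
    return (x @ weights_int8.astype(np.float32)) * scale  # dequantize on the fly

x = rng.normal(size=(1, 256)).astype(np.float32)
print(f"weights shrink: {trained_weights.nbytes} B -> {weights_int8.nbytes} B")
print(f"max output error: {np.abs(x @ trained_weights - edge_infer(x)).max():.4f}")
```

The compressed copy is a quarter the size, and the outputs differ by a rounding error, which is exactly the trade an edge device is happy to make.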
(Nvidia has a nice graphic that summarizes the differences between the two: in short, inference compresses the layers required during the training stage into a single layer, to apply what the system has learned to new data it hasn’t encountered before. It’s kinda like data compression for an image: designers work with million-pixel-wide images, but when posting one online, they’ll turn it into a 72dpi JPEG.)

So — where is the innovation happening to make this possible? It all lies in customized architectures. (Funny, that. This is exactly what the newly announced Cadence Tensilica C5 DSP is. But more on that later.) When we learn to follow the data, the “Art of the Possible” becomes… possible.

After attending this session, I was starting to notice a theme emerging. It seems that everyone agrees on two fundamental things:

- Machine learning is growing in every possible way, and
- The next innovation to handle the unbelievable amount of data required to make it go must happen, and happen soon.

How ‘bout that.

How to start an embedded vision company

Speaking of the business track, I decided it would make sense to sit in on Chris Rowen’s talk right after Derek’s, about the entrepreneurial side of this business. Chris Rowen has started quite a few businesses of his own, so he knows what he’s talking about. What he covered first was the case for pretty much any start-up. You must:

- Have a great team with broad and deep skills, experience, and character
- Have a unique, feasible, and cutting-edge product
- Have a receptive market
- Have a network to get the word out about it
- Get the product out fast
- Test prototypes with customers early and often
- Be reasonable about accepting funding
- Be realistic about the progress of the market and of your product
- Leverage all the free help you can get

What isn’t the case for every startup are the points he covered that are specific to embedded vision. The thing about this market is that the new age of “cheap pixels” is what has triggered this vast emerging technology. Remember that “vision” is more than just the visual spectrum; it can span the entire electromagnetic spectrum, not to mention lidar, audio, and microelectromechanical systems (MEMS). We’re already seeing changes in business models, and sensors are replacing cameras in more applications than I can begin to list here.

The opportunities in this field must address the problems we’re facing, particularly in terms of managing this data. For any new business that enters the field, for any solution that leverages the unthinkable growth in this technology, for any solution that will lead us into this brave new world… the people driving it must do three things:

- Develop better algorithms (to speed up training and minimize the computation required)
- Utilize the data from existing streams better
- Pick hardware platforms that meet the target cost and power usage for each application (hello, Tensilica C5 DSP)

Here’s the thing: the era of all-purpose processing is over. How do we get there, though? The door is open.

More tomorrow, when I try to fit all of Day Two into fewer than 1000 words… not sure if I can do it…

—Meera

---------------------------------------------

* This is a highly technical term.
