Embedded Vision Summit: Focus on Autonomy and Recognition

We often think of better electronic systems design in terms of generation-to-generation improvements in the power, performance, and area of a system's various components.

But some eras introduce new technologies that take electronics systems (and engineering thinking) to a different level. Such is the case with embedded vision.

(Image: Google driverless car)

A car is a car is a car. But a car with embedded vision is a safer, smarter, more autonomous mode of transport and, down the road, a transformer of society. Vision is the sensor functionality that adds the third "C" to the convergence triad: computing, communications, and cognition. And embedded vision is, at its heart, about recognition and autonomy.

Those are two cable-strong threads running through this month's Embedded Vision Summit, May 29 at the Santa Clara Convention Center in Santa Clara, Calif.

"This is about inspiring and empowering engineers to incorporate visual intelligence into products," said Jeff Bier, founder of the Embedded Vision Alliance, which created the summit. "It's not technology for the sake of technology.  It's technology for the sake of better products."

Bier has lured two amazing speakers to an already packed, compelling lineup: Nathaniel Fairfield of Google, who will talk about autonomous vehicles, and Yann LeCun, Director of AI Research at Facebook and Silver Professor of Data Science, Computer Science, Neural Science, and Electrical Engineering at New York University. LeCun's presentation is titled "Convolutional Networks: Unleashing the Potential of Machine Learning for Robust Perception Systems." I'm going to listen in just because I love the phrase "convolutional networks!"
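For readers who haven't run into the term, here is a minimal, hypothetical sketch of a convolutional network in Python (using PyTorch purely as an illustration; it is not drawn from LeCun's talk or Facebook's work). It simply shows the basic pattern his title refers to: stacked convolution and pooling layers that learn visual features, followed by a classifier.

```python
# Toy convolutional network: two conv/pool stages plus a linear classifier.
# Purely illustrative; architecture and sizes are assumptions.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local image filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample by 2x
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)              # (N, 32, 8, 8) for 32x32 RGB input
        return self.classifier(x.flatten(1))

# Example: classify a batch of four 32x32 RGB images
logits = TinyConvNet()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```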

Also featured among the 18 technical talks during the day-long program:

  • Cadence Fellow Chris Rowen ("Taming the Beast: Performance and Energy Optimization Across Embedded Feature Detection and Tracking")
  • Francis MacDougall, Qualcomm ("Vision-Based Gesture User Interfaces")
  • Ken Lee, Van Gogh Imaging ("Fast 3D Object Recognition in Uncontrolled Real-World Environments for Embedded and Mobile Applications")

Here's a link to the complete conference agenda.

Whether it's self-driving cars (Fairfield's focus) or facial recognition in social networking (LeCun's focus), Bier notes these are "hard problems" for engineering teams.

Autonomous vehicles in particular, he said, are systems that need "to exist in the wild, in this chaotic world in which we live."

Facebook, for its part, is likely interested in expanding facial recognition technology to make the user experience more intuitive, but also to make advertising programs smarter and more effective. ("Are you wearing a Real Madrid jersey in your latest Facebook picture? Then we'll serve you some merchandise links relevant to that.")

Bier said Facebook hasn't said much about what LeCun is working on, so his presentation should be illuminating.

And so will the rest of the program.

Here's a link to the registration page.

 

Brian Fuller

Related stories:

Embedded Vision's Transformative Potential

 

