Recently Cadence held a worldwide event for our interns. To read more about our intern program, see my post CHIPs in the Cadence Cafeteria. The number varies almost on a day-to-day basis, but when I wrote that post we had 259 interns at 13 different Cadence sites. One of the most requested things the interns wanted to hear about was machine learning (ML), so David White came and talked about what Cadence is up to. However, since he was talking to interns at the start of their careers, he decided not just to talk about technology, but also about how he ended up working in the area, since he has been in it pretty much since the field really got going in the late 1980s and early 1990s.

One strong message is that a field like deep learning is not just about technology; in practice, it is about the people that you meet along the way and the serendipitous way that some of those connections turn out to be important. If I look at a lot of the innovation we've worked on over the years, it's not just the innovation itself, it's the people you meet and interact with that lead to the ideas. A lot of the time the technology comes out of the people, and even then it is never right the first time. So it is also about how you pivot and make it work the fourth time. You never know who will turn out to be important.

David started his career working in the new aircraft division of McDonnell-Douglas (MD), which was roughly the MD equivalent of Lockheed's famous Skunk Works. They worked on many innovative technologies, many of which never saw the light of day, but a lot of the stealth technology came out of that, along with hypersonic aircraft and vectoring thrust for fighters. One area that David worked on was reconfigurable flight control for planes that had either suffered mechanical failure or battle damage. This was motivated by the story of an Israeli pilot who lost part of a wing and had to manually reconfigure the flight surfaces to stabilize flight. That gave them the idea of using the flight computers to learn what was going on and stabilize the aircraft, so they looked at ML to do that. Another project David worked on was laser-based machine tools for making thermoplastic parts, which also used ML to control the process.

David was at MD from 1989 to 1992. While he was there, he met Paul Werbos, the inventor of back-propagation in neural networks (NN), which had been the topic of his 1974 PhD thesis. He was then running the National Science Foundation's AI program. David and his colleagues talked to Paul about having a conference, since at that time most NN conferences were very academic and they wanted one focused on industrial applications with feedback learning in real time. Paul suggested they organize a large workshop, and MD bought into it all the way up to the CEO, who said that it should be held at MD facilities. So they put on the first Aerospace Applications of Neural Networks conference. NSF funded travel for people to attend the three-day event. Even President Bush's science adviser came, so there was a lot of government interest.

There turned out to be so much good material that David and others decided it would make a great book. So they put together the Handbook of Intelligent Control, the first textbook on reinforcement learning, with a focus on real-world problems like flight controls and manufacturing controls.

In 1992, David was invited to join MIT's legendary AI Lab. As David said, "It took me about five seconds to say yes even though I already had a great job."
Marvin Minsky, who had co-founded the lab with John McCarthy (the inventor of LISP) in the late 1950s, was still there. David joined initially as a visiting scientist with plans to apply to grad school there. He did research with Chris Atkeson and Mike Jordan, at the time two of the leading professors in NN and brain and cognitive science. Mike was one of the authors of the book that got David interested in the field in the first place, Parallel Distributed Processing, although it is more commonly just called Rumelhart and McClelland. David's office ended up being between Marc Raibert (who would go on to start Boston Dynamics, now part of Alphabet/Google) and Andrew Moore (who also recently joined Google as their head of AI).

David applied to grad school at MIT and worked with Duane Boning, who had the vision that semiconductor manufacturing and processing was in dire need of machine learning and sensor fusion. He offered David a position as an RA.

In the middle of doing his masters and PhD, David founded three companies. As he put it, "some were successful and some not." One produced NN- and ML-based learning controls and software fusion for defense customers. It was profitable from year 2, but after 6 years he left and returned to grad school full time to finish his PhD.

He was working on ML and AI to improve various sensors and controls to do with plasma etch and CMP (chemical-mechanical polishing). This was just after the introduction of copper interconnect, which requires CMP as part of the dual-damascene process, unlike earlier metallization technologies (mainly aluminum). Etch, deposition, and CMP are all interrelated and have pattern dependencies, so the whole group was working on modeling the way everything interacted. His office-mate was Taber Smith, who started Praesagus. David graduated shortly after and joined the company as CTO in 2001. Praesagus was very successful and was acquired by Cadence in 2006.

Once David arrived at Cadence, he initially continued work on CMP but was ready for something new. Cadence had a sort of skunkworks project to create a new shape-based router called Catena. This was not even on the Cadence campus, but in a semi-secret office in Los Gatos above what was then the California Cafe. They had some ideas about how to do in-design electrical verification, but to do that they would also have to do in-design extraction, which is tricky. One idea was to use a field-solver to train a NN engine to give accurate estimates. Leon Stok of IBM was intrigued and thought it would make a good project to work on together. So in 2009 Cadence and IBM put together a collaboration, and it is still running today. Cadence/Catena ended up developing an in-design extraction engine for IBM that they have been using ever since. Catena eventually got folded into Virtuoso, and David ended up talking to Glen Clark and Tom Beckley about what they had been doing. That was the genesis of Virtuoso EAD.
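To make the field-solver idea a little more concrete, here is a minimal, hypothetical sketch of training a neural network as a fast surrogate for a slow but accurate solver: run the solver offline over sampled interconnect geometries, then fit a small regression network that maps geometry features to capacitance. Everything in the snippet (the feature names, the synthetic formula standing in for real solver output, and the scikit-learn model) is an illustrative assumption, not anything from the Catena project.

```python
# Hypothetical sketch: train a small NN as a fast surrogate for a field solver.
# The "field solver" is faked with a synthetic formula purely for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Sampled interconnect geometries: wire width, spacing to neighbor, metal thickness (um).
n = 5000
width = rng.uniform(0.05, 1.0, n)
spacing = rng.uniform(0.05, 1.0, n)
thickness = rng.uniform(0.1, 0.5, n)
X = np.column_stack([width, spacing, thickness])

# Stand-in for field-solver output: coupling capacitance per unit length (arbitrary units).
# A real flow would run the solver on each geometry instead of using this toy formula.
y = thickness / spacing + 0.3 * width + 0.02 * rng.standard_normal(n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small MLP regressor acting as the trained "NN engine" for in-design estimates.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)

print("R^2 on held-out geometries:", model.score(X_test, y_test))
print("Estimate for a new geometry:", model.predict([[0.1, 0.2, 0.3]]))
```

Once trained, a surrogate like this evaluates in microseconds, which is what makes in-design estimation practical; its accuracy is bounded by how well the training geometries cover what designers actually draw.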
MAGESTIC

The program that became MAGESTIC actually started before any meetings with DARPA. David, Glen Clark, and some others were looking at how ML could be transformative both inside and outside Cadence, starting way back in 2010. Glen managed to provide some headcount for interns, and they started building up concepts for how to use ML in a number of applications, starting around 2016. Other things happened in parallel, such as similar efforts in the package and board space, and a collaborative partnership for ML with NVIDIA in 2017.

Rolling all of this up became the basis from which we built the concept that we proposed to DARPA. The program is really about building adaptive systems, with a lot of emphasis on how we verify those solutions. Probabilistic machine learning and explainable AI are a lot more valuable than just telling designers what to do; it is much better to explain why. This technology is very general and can be used by other teams and product groups. In fact, it is generalizable outside what Cadence does, such as to mechanical CAD. In all these cases there are lots of repetitive decisions and tasks that people do every day, which we can do a better job of automating. It also makes it possible to move away from one-size-fits-all and tailor the design process and tools for different types of designs.

I'm sure I'll be writing about this program in future Breakfast Bytes blogs, so watch this space.

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.