If I had to summarize DARPA's Electronics Resurgence Initiative (ERI) in one phrase, it would be "getting the cost of design down." As I've said several times this week, the US Department of Defense (DoD) does not have high volumes, so the cost of a part is dominated by design cost, not manufacturing cost. One area that ERI is very interested in is open source. There is one program, POSH, or Posh Open Source Hardware. Yes, it's a recursive acronym. There was a lot of background on DARPA's interest in open source in Linton Salmon's keynote at the RISC-V workshop last year. He is one of the program managers at DARPA, although it looks like POSH is under Andreas Olofsson, not Linton. Anyway, I wrote a post about it at the start of this year, Open-Source IP in Government Electronics. It's not specifically about POSH, or even RISC-V, although it touches on a lot of DARPA's motivations. During that presentation, Linton really brought home just how low DoD production volumes really are when he said that he'd never seen a DoD project where the production volume was more than the number of samples he would send out when he worked in the commercial world (his background included stints at both TI and AMD).

Andrew Kahng and Cadence

One of the programs funded as part of ERI is OpenROAD, Foundations and Realization of Open, Accessible Design. The lead is Professor Andrew Kahng of UCSD. When I was last working for Cadence in the late 1990s and early 2000s, we were both members of the Cadence Technology Advisory Board, which met quarterly (although Andrew was at UCLA in that era). You can read about some presentations he gave at Cadence in the last couple of years in my posts Andrew Kahng on PPAC Scaling Below 7nm, Andrew Kahng on the Last Semiconductor Scaling Levers, and Andrew Kahng on Industry-Academia Cooperation. At the end of the first of those three posts, which date from 2016, Andrew wrapped up with:

Andrew's final call to arms was for a massive "moonshot" to predict tool outcomes, find the sweet spot for different tools and flows, and thus design in specific tool and flow knobs to the overall methodology. This would combine all the ideas already discussed and so end up with a fully predictive, one-pass flow with optimal tool usage. With modern massively parallel, big data architectures, it is not unreasonable to use tens of thousands of machines if it could "get us to the moon" of a non-iterative flow.

OpenROAD

In some ways, OpenROAD is that moonshot program. But to make it more challenging still, it will be based on open-source tool flows. Andrew believes:

We are stuck in a "local minimum" of quality, methodology, and tooling. With unpredictable optimizers, designers demand as close to flat methodology as possible—just a few big blocks, because we can't lose any more optimization quality at block boundaries. So EDA developers keep piling on more heuristics to turn around those bigger and bigger blocks in a day. And then designers want to close the design quality gap—so they ask for more flexibility—a couple more commands, a couple more options—so they can work some better magic with those tools. After decades, and billions of R&D dollars, today's place-and-route tool has more than ten thousand command-option combinations. We have poor predictability in design, which leads to more iterations and longer turnaround times. Poor predictability also means more guardbands. So achieved design quality gets worse. We need to flip all the arrows in the above diagram.
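To make the "predict tool outcomes, then pick the sweet spot" idea a little more concrete, here is a minimal sketch, not anything from the OpenROAD code, of how a massively parallel knob search with a learned predictor might look. The knob names, the run_flow cost function, and the nearest-neighbor predictor are all invented for illustration; a real flow would launch actual synthesis and place-and-route jobs and train a real ML model on their results.

```python
import random
from concurrent.futures import ProcessPoolExecutor

# Hypothetical tool "knobs": stand-ins for the thousands of real
# command-option combinations in a place-and-route tool.
KNOBS = {
    "target_utilization": [0.55, 0.65, 0.75, 0.85],
    "clock_uncertainty_ps": [20, 40, 60],
    "max_routing_layer": [5, 6, 7, 8],
    "effort": [1, 2, 3],
}

def sample_config(rng):
    """Pick one value for every knob at random."""
    return {name: rng.choice(values) for name, values in KNOBS.items()}

def run_flow(config):
    """Stand-in for a real synthesis + place-and-route run.

    Returns a single "PPA cost" number (lower is better). The formula is
    invented purely so the example runs; a real flow would call tools and
    measure timing, power, and area of the result.
    """
    return (abs(config["target_utilization"] - 0.7) * 10
            + config["clock_uncertainty_ps"] / 30
            + (8 - config["max_routing_layer"]) * 0.5
            + (3 - config["effort"]) * 0.8
            + random.uniform(0, 0.3))  # tool noise

def predict(config, history, k=3):
    """Crude surrogate model: average the cost of the k most similar
    configurations already run (a stand-in for a trained ML predictor)."""
    def distance(other):
        return sum(other[name] != config[name] for name in KNOBS)
    nearest = sorted(history, key=lambda rec: distance(rec[0]))[:k]
    return sum(cost for _, cost in nearest) / len(nearest)

if __name__ == "__main__":
    rng = random.Random(0)

    # Phase 1: run a batch of random configurations in parallel to gather data.
    explore = [sample_config(rng) for _ in range(32)]
    with ProcessPoolExecutor() as pool:
        history = list(zip(explore, pool.map(run_flow, explore)))

    # Phase 2: use the predictor to rank a much larger candidate pool without
    # running any tools, then spend real runs only on the predicted best few.
    candidates = [sample_config(rng) for _ in range(1000)]
    candidates.sort(key=lambda c: predict(c, history))
    with ProcessPoolExecutor() as pool:
        final = list(zip(candidates[:8], pool.map(run_flow, candidates[:8])))

    best_config, best_cost = min(history + final, key=lambda rec: rec[1])
    print("best configuration:", best_config, "cost:", round(best_cost, 2))
```

Scaled up, phase 1 is where the "tens of thousands of machines" would come in, and the predictor is what would (hopefully) turn all those runs into a one-pass recommendation, per partitioned block, for the next design rather than another round of trial-and-error iteration.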
The goal: 24 hours, no humans, and (eventually) no PPA loss. The techniques for this are to use extreme partitioning (instead of trying to do everything flat), parallel optimization, machine learning of tools and flows, and restricted layout. The tools must self-tune and adapt, and never get "stuck" unexpectedly. The 24-hour goal necessitates extreme partitioning of the problem, followed by parallel search on the cloud, with machine learning for predictability. So the OpenROAD future is:

- increase the number of subproblems, which changes how we partition, synthesize, plan, place, and optimize
- reduce flexibility, through freedom from choice in power distribution, clock distribution, and so on
- better predictability: fewer iterations, and ideally single-pass design
- shorter turnaround time
- better predictability and fewer iterations, in turn, mean narrower guardbands, and so improved achieved quality

Of course, there are major technical challenges to all this, as Andrew acknowledged. Test data is one of them ("there are a lot fewer CPU designs on the web than cat faces"). Humans are in the loop today for a good reason, just like in cars. But we need self-driving tools. Some tradeoffs, such as analysis cost vs accuracy, or optimization effort vs quality, will not go away. A big change, too, is mindset: building an open-source ecosystem requires a sharing mindset. The end goal is tapeout-capable tools in source code form, with permissive licensing. The "Linux of EDA".

Who Will Do the Work?

OpenROAD brings together six universities (UCSD, Brown, Illinois, UT-Dallas, Minnesota, Michigan), plus Qualcomm and Arm. OpenROAD consists of two main teams, a design team and a tools team. The design team at Michigan generates 15 SoC tapeouts per year; it has graduated 70 PhDs and presented 60 papers at ISSCC. The design team will drive the tools to do their designs, and so indirectly drive the tools team. The tools team (UCSD, Illinois, UMinn, UT-Dallas, Brown) has graduated about 150 PhDs and 80 Masters. They already have many tools and engines on the shelf from previous research programs. In addition, Qualcomm brings leading-edge SoC methodology insights, and they drive unified planning and codesign capabilities across die, package, and board. Arm provides the program with machine learning guidance, especially at the interface between system-level design and IP configuration that comprehends the downstream layout flow. And there is more, with contributions and cooperation from Avatar, Intel, Google, GlobalFoundries, and more universities.

It Has Not Escaped Our Notice...

Crick and Watson's paper announcing the discovery of the structure of DNA ended with a famous quote:

It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.

Well, working for Cadence, it has not escaped my notice that the success of this project might be a threat to Cadence's business, certainly with the existing business model. Linux was laughed off as being just a "bunch of volunteers," but its success has been almost total in the server and supercomputer space. Indeed, as I said in my post on Supercomputers, the Top 500 list has a drop-down menu for the operating system, but it only has one entry, Linux. IBM, Oracle, HPE, Dell, even Microsoft, use Linux for cloud servers.
Even if this project only succeeds for non-leading-edge designs (say, without the complexity of multiple patterning or high-resistance interconnect), it could still be a commercial threat by draining off a lot of revenue. Anyway, I think the project is extremely ambitious. It has been the dream since the dawn of EDA to have tools that just run forward so well that iteration is rarely necessary. Of course, Cadence is not sitting back watching; we are also leveraging cloud computing and machine learning to work towards no-human-in-the-loop EDA flows.

The Sound Track

This project even has a soundtrack: Bryan Adams, "Open Road". Doesn't he know it's spelled OpenROAD?

https://youtu.be/WP7Oh8sSH6c

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.