
Seamless Verification

At DAC, Cadence held its now-traditional verification lunch. Brian Fuller returned in his role as host and moderator. The panel consisted of:

- Jim Hogan of Vista Ventures, another Cadence alumnus and an expert on the EDA industry in general (and almost on his honeymoon at DAC, since he recently married)
- Narendra Konda of NVIDIA, in charge of what Brian described as the "largest emulation lab in the world"
- Alex Starr, a fellow at AMD, another company designing some of the largest chips in the world
- Mike Stellfox, a Cadence fellow and our home-team expert for the day

The panel started with Jim giving a presentation on the three-legged stool of verification. The three legs are simulation in all its various forms, hardware (such as emulation and FPGA prototyping), and formal techniques. The big challenge is not just that chips are getting insanely complex, but that there is a huge software load, too. Jim quoted Raj Nair of Ford, who pointed out that there are more lines of code in a mid-priced Ford Fusion than in a Boeing 777.

One of the challenges with verification is that you need what Jim calls COVE, a continuum of verification engines. Early in the design process, there are no detailed representations; obviously, you can't do RTL verification before the RTL is written. Late in the process, you can do lots of verification, but it starts to be hard to make changes. So early on, you start with virtual prototyping. Then comes RTL simulation as the RTL (typically SystemVerilog today) is created. Emulation comes into play next, since it is relatively easy to bring up a design these days. That wasn't the case a decade ago, when it could take literally months to bring up a design on the emulators of that era. Short of actual silicon, FPGA prototyping has the highest performance, but it only makes sense once the RTL is final, or very close to it.

Another change is the growing importance of automotive semiconductors, with their much stricter reliability requirements. Can you say ISO 26262? That means we need to up our verification game to a new level.

To kick off the panel proper, Brian asked Alex to give us all a sense of what he sees as the landscape of engines. Alex said that at AMD they have everything, but it is sometimes a challenge to know what to use. Formal is good for some sorts of blocks. Emulation can't be beat, but it is expensive. Simulation is struggling to get fast enough, although it can handle analog blocks, which emulation cannot, so it has that going for it. Being able to combine engines is a big investment and took a long time to become a reality, but now it is easier to swap blocks into different engines on a whole SoC and so shave months off the project.

Narendra talked about how verification has changed at NVIDIA. Thirty years ago, they would provide a bit of silicon and a little firmware; they called these video accelerators. But today NVIDIA doesn't just ship silicon, there are millions of lines of code that go along with it, so verifying all of that is the challenge. There are holes in the continuum of verification engines; it isn't as smooth as it could be.

Mike gave the Cadence perspective. He and his team have to look at how to combine engines and methodologies. The biggest issue now is software, since chips need to be verified against the real software load much earlier in the design process, and that requires higher-performance engines like emulation and FPGA prototyping. Automotive is another driver, requiring more stringent verification but without adding people.
One challenge is to keep track across the engines. For example, formal is so effective, when it is appropriate, that we want to use it, but we also want to get credit for it so that the work doesn't need to be repeated on another engine. Narendra talked about the effort of bringing engines together. They needed to work with Cadence to get virtual platforms and emulation working cleanly, so that they could run the software on virtual platforms and run the RTL in the emulator. Fusing the tools took a year and a half. NVIDIA's move into automotive is a challenge, too. "Selling a graphics chip to a teenager doesn't need much reliability, but that can't happen in a car. There is no productive tool at the moment to verify all the safety conditions." They need to bring emulation and FPGA prototyping together with automotive safety verification, and that will take another 10 months or so.

Everyone agreed that one of the biggest holes is the lack of standards. Bringing together multiple heterogeneous engines (not necessarily from one supplier) is a necessity. As Narendra said, bringing just a couple of tools together takes a year. Emulation is too slow, but FPGA bringup takes too long. Narendra said he has seen one example where bringing up the final design in an FPGA environment took longer than getting the chip fabbed. We need a tool he called EmuPro, which lets you get up and running instantly but runs at 4 MHz, much faster than the 1-1.5 MHz you get with an emulator. Cadence is working on a multi-fabric compiler that lets you target either an emulator, with visibility but slower speed, or the Protium prototyping platform, where you get speed at the cost of limited debug visibility. They also want to make the user experience much more seamless, especially going in with a high-level verification plan to get started while being able to bring the metrics back into one place.

Alex pointed out that you may think emulators are fast, but they are not fast enough for software development. AMD started to use virtual platforms in house along with emulation and got a 200X performance boost versus just emulating the processor. Now Cadence has a production solution. But go to the software teams and they complain that it is still too slow; they can't click the mouse and move things around like on a real PC. There is a cultural problem, too, since software teams are used to stable environments and don't want to get involved with debugging the hardware. But shift-left means they have to get involved earlier.

Mike pointed out how software verification is changing. It used to just mean booting Android or Linux. That's a good test, but not very thorough. We need a more systematic approach to software-driven testing, sort of like we did with constrained random, but on faster platforms. Narendra emphasized that software-driven verification is a necessity. "We used to get a bunch of guys to write directed tests," he said. "But there are only so many people who can write those tests, so instead we need to run the whole software stack to make sure there are no bugs in the silicon." But he admitted it is hard to find silicon bugs under such a heavyweight software load. Alex agreed. "The x86 BIOS is non-trivial." So we need to get the more efficient engines in use earlier in the schedule. Portable stimulus will help, since we can hopefully boil big workloads down into something less cumbersome.
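To make Mike's comparison concrete, constrained random in simulation means describing the legal stimulus space and letting the tool pick the values, rather than hand-writing every directed test. Here is a minimal SystemVerilog sketch; the class, field, and constraint names are purely illustrative, not from any environment the panelists described.

```systemverilog
// Minimal constrained-random stimulus sketch (illustrative names only,
// not tied to any tool or testbench discussed on the panel).
class bus_txn;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  rand bit        write;

  // Keep addresses inside a hypothetical 4KB peripheral window,
  // and bias the mix toward writes.
  constraint c_addr  { addr inside {[32'h0000_1000 : 32'h0000_1FFF]}; }
  constraint c_write { write dist { 1 := 7, 0 := 3 }; }
endclass

module tb;
  initial begin
    bus_txn t = new();
    repeat (10) begin
      if (!t.randomize()) $fatal(1, "randomization failed");
      $display("addr=%h data=%h write=%b", t.addr, t.data, t.write);
      // In a real testbench, the transaction would be driven onto a bus
      // interface and checked against a reference model.
    end
  end
endmodule
```

Software-driven testing, as Mike describes it, aims for the same kind of systematic coverage, but using real software workloads running on fast engines such as emulation rather than randomized transactions in simulation.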
Brian wrapped things up by asking each of the panel members for one piece of advice:

- Jim emphasized the economics. How do we afford it? Automotive, in particular, requires us to find an economically feasible verification solution.
- Alex said the biggest challenge is to get teams all working together, and pushing vendors to fill all the holes. "Narendra and I have spent a long time working on this stuff at the leading edge, so everyone else should take advantage of it."
- Narendra said you need to pick the best point tools for specific tasks that serve the teams well, rather than necessarily going with a specific vendor due to economics.
- Mike agreed, and said that Cadence's goal is to provide best-in-class tools for each problem. Take another look at formal. Constrained random is great, but you can find bugs you are missing with formal (see the sketch after this list).
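As a hedged illustration of Mike's closing point, here is a small SystemVerilog assertions sketch; the handshake signals and the eight-cycle bound are made-up examples, not anything described at the lunch. A formal tool proves properties like these over all reachable states, whereas constrained-random simulation only checks the states it happens to visit.

```systemverilog
// Hypothetical arbiter handshake properties (signal names and the
// 8-cycle bound are illustrative only).
module arbiter_props (
  input logic clk,
  input logic rst_n,
  input logic req,
  input logic gnt
);
  // Every request must be granted within 8 cycles.
  assert property (@(posedge clk) disable iff (!rst_n)
                   req |-> ##[1:8] gnt);

  // A grant never appears without an active request.
  assert property (@(posedge clk) disable iff (!rst_n)
                   gnt |-> req);
endmodule

// The checker would typically be attached to the design with a bind
// statement, e.g.:
//   bind arbiter arbiter_props u_props (.*);
```

Even a couple of properties like these can catch corner cases that random simulation would need enormous luck to hit, which is exactly the "take another look at formal" message.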
