Are We There Yet? Metric-Driven Signoff

Are we there yet? All verification suffers from the problem of deciding when enough verification has been done. It is not possible to exhaustively simulate everything on a chip, so completeness cannot be the criterion (exhaustion, however, is optional, usually on the part of the design team). At CDNLive in Bengaluru, Narender Kumar and Ravin Shah of STMicroelectronics (and Anshul Singhal of Cadence) presented "vManager Metric-Driven Signoff Platform for SoC Verification".

SoC Verification

Your SoC will, of course, not be quite the same as any other one. But it probably contains a multi-core processor and its associated software, some analog, some low-power architecture such as separate voltage islands or DVFS, dozens (if not hundreds) of IP blocks, a network-on-chip, multiple clock domains, and other features that complicate verification. Some SoC verification metrics are:

- Regression results (which test cases have been run, which failed)
- Structural and functional coverage (toggle and code coverage, assertion coverage, functional coverage)
- Feature coverage (low power, clocking/resets, self-test)
- System-level use cases
- Bug trends (rate of discovery, severity of problems being discovered)

A couple of other wrinkles make the problem harder still. There are multiple verification platforms, such as emulation and formal verification, not just simulation, so verification needs to be looked at as a whole: a block checked with, say, emulation does not also need to be checked with simulation. Plus, these days, most design teams are multi-site and, obviously, a block checked in Delhi does not also need to be checked in Dallas.

There also tends to be a built-in, but incorrect, assumption in discussing SoC designs: that the design is being done in isolation. In fact, SoCs are often derivatives of earlier designs, with blocks being reused. If a block is reused, then all (or at least a lot) of the verification can be reused, too.

vManager

Cadence's tool for managing the "Are we there yet?" problem is vManager (the Cadence vManager Metric-Driven Signoff Platform, to give it its full name). Under the hood are the vPlanner planning center, the analysis center, the regression center, and the tracking center. In short, vManager contains the capabilities needed to do metric-driven verification (MDV). It is also integrated with the Cadence portfolio of verification tools: the Xcelium (simulation), JasperGold (formal verification), Palladium Z1 (emulation), and Protium S1 (FPGA prototyping) platforms.

Traditional HDL-based verification consists of setting up the environment, creating the tests, and running them (okay, this is oversimplified, but not by much). The first time vManager is used there is not really a lot of saving, since everything still has to be created by hand, but now the way the tests are run can be tracked. The real gains come on subsequent projects, when vManager makes it straightforward to reuse a lot of what was done. Over time, this easy reuse means that projects can be completed in a much shorter overall time. See the diagram above.

The heart of vManager is the planning center, since everything is driven from a plan (a vPlan, in vManager-speak): the plan is executed, and the plan is tracked.
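To make the plan/execute/track loop concrete, here is a minimal sketch in Python. It is not vManager's vPlan format or any Cadence API; the class names, fields, and numbers are invented purely to illustrate the idea of planning sections up front, annotating regression results against them, and rolling everything up into the kind of pass-rate figure the tracking graph below is built from.

    from dataclasses import dataclass, field

    @dataclass
    class PlanSection:
        """One section of a hypothetical verification plan, e.g. 'low power' or 'resets'."""
        name: str
        passed: int = 0   # test runs that passed
        failed: int = 0   # test runs that failed
        waived: int = 0   # known failures covered by a waiver

        def record(self, passed: int, failed: int, waived: int = 0) -> None:
            """Fold one regression session's results into this section."""
            self.passed += passed
            self.failed += failed
            self.waived += waived

        @property
        def progress(self) -> float:
            """Fraction of recorded (non-waived) runs that pass; 1.0 means the section is closed."""
            total = self.passed + self.failed
            return 1.0 if total == 0 else self.passed / total

    @dataclass
    class Plan:
        """Plan/execute/track: sections are planned up front, results are annotated
        back against them, and signoff is a roll-up over the sections."""
        sections: dict = field(default_factory=dict)

        def section(self, name: str) -> PlanSection:
            return self.sections.setdefault(name, PlanSection(name))

        def report(self) -> None:
            for s in self.sections.values():
                print(f"{s.name:12s} {s.progress:6.1%}  "
                      f"(pass={s.passed} fail={s.failed} waived={s.waived})")

    # Example: record one (made-up) nightly regression and print progress so far.
    plan = Plan()
    plan.section("low power").record(passed=42, failed=3)
    plan.section("resets").record(passed=17, failed=0)
    plan.report()

In a real flow the counts would come from regression runs across simulation, emulation, and formal engines rather than being typed in by hand; the point is simply that signoff becomes a roll-up over an explicit plan rather than a judgement call.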
Like the advice about giving a lecture (you tell them what you are going to tell them, then you tell them, then you tell them what you told them), the verification plan lets you decide what you are going to do, then do it, then track how you did. There is a lot of detail in it, more than is appropriate for a short blog post. The fact that SoCs are complex and involve different verification engines, different sites, and more means that vManager has a lot of functionality, and not all of the complexity can be hidden.

Tracking Progress

The above graph shows what you want to see as verification progresses. The blue line shows the tests run, the green line the tests that pass, and the red line the tests that fail. Over time, moving from left to right, the passing tests build up to 100% and the failing tests drop to zero (well, usually there are a few tests that, for some fundamental reason, are never going to pass and need a waiver). You can go down a level to look at individual parts of the project, such as blocks on the SoC, and see which have completed verification, which are currently failing, and which are still on the to-do list, as in the above screenshot.

Summary

Narender and Ravin's summary is that:

- Metric-driven verification is a must to maximize productivity and increase quality
- It drives the complete verification process from planning to verification closure
- Verification productivity increases, time-frames pull in, and quality improves
- Project evaluation tracking is easier, as is multi-site verification tracking

Sign up for Sunday Brunch, the weekly Breakfast Bytes email
