In yesterday’s Part I blog post, I talked about a technique for focusing code coverage efforts on actionable data, namely higher-level connectivity. Here, let’s discuss a second technique to support system-level code coverage with hardware-assisted verification: performing deep analysis only in particular regions of the design.

Figure 1: Apply code coverage with localized focus

In today’s environment, SoCs are built from blocks with varying backgrounds. Some are reused from prior generations, some come from third-party providers, and some are newly developed in-house. In Figure 1, you’ll see a generic SoC design with a CPU, some unique core functionality, and a collection of peripherals. Let’s say the CPU is a third-party intellectual property (IP) block and most of the peripheral blocks are well tested and reused from previous projects. That leaves the core function units, which are new or significantly modified, and, say, a new, complex interconnect fabric. To manage the magnitude of the coverage analysis, it is useful to focus on these new and less-tested areas of the design. Since these portions of the design are new, it is easy to get access to the designers to review coverage data. And if you focus on specific instance hierarchies at different times in the verification process, it is still feasible to merge coverage from those different regions into a single complete view.

So, in these two blog posts, we’ve briefly discussed a couple of techniques that can help you focus on actionable code coverage data in order to make more effective use of code coverage at the system level. However, many users do take on the task of generating and analyzing coverage deep into the design, across the width of the various subsystems. They set very high code coverage goals, like 100%, and in any regions of the design that don’t meet those goals, they have the responsible design engineers review the holes in their modules. What is an effective technique for analyzing and improving coverage in this scenario?

When you do deep analysis of this kind, you will invariably find coverage items that simply are not going to be exercised in the context of the design. For these situations, coverage analysis tools support the concept of exclusion lists. Excluded coverage items are not counted in the coverage score.

Figure 2: Excluding irrelevant coverage items

This small example shows a module that supports 8-, 16-, and 32-bit accesses. If this particular application only uses 32-bit accesses, the 8- and 16-bit access blocks will be uncovered. The example shows that when you exclude the two uncovered blocks, the block coverage score for the module increases from 83% to 89%. Note that these exclusions are generally stored and reused from run to run, and from chip generation to chip generation, so you only have to do the detailed analysis once.
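To make the arithmetic concrete, here is a minimal Python sketch of exclusion-list scoring. The block names and counts are hypothetical, chosen so the numbers mirror the 83%-to-89% example above; real flows apply exclusions through the coverage tool’s own database, not an ad hoc script like this.

```python
# Minimal sketch of exclusion-list scoring (hypothetical data, no vendor API).
# Each block in a module is either covered or not; excluded blocks are removed
# from the denominator before the score is computed.

def block_coverage(covered: set[str], all_blocks: set[str],
                   exclusions: set[str]) -> float:
    """Return the block coverage score after applying an exclusion list."""
    scored = all_blocks - exclusions   # blocks that still count
    hit = covered & scored             # covered blocks among those
    return 100.0 * len(hit) / len(scored)

# Hypothetical module with 30 blocks, 25 of them exercised (~83%).
all_blocks = {f"blk_{i}" for i in range(30)}
covered = {f"blk_{i}" for i in range(25)}

print(f"raw score: {block_coverage(covered, all_blocks, set()):.0f}%")  # 83%

# The 8- and 16-bit access paths are unreachable in a 32-bit-only design,
# so two of the uncovered blocks are excluded; the score rises to ~89%.
exclusions = {"blk_27", "blk_28"}
print(f"with exclusions: "
      f"{block_coverage(covered, all_blocks, exclusions):.0f}%")        # 89%
```

The key point is that exclusions shrink the denominator rather than faking hits, so the resulting score reflects only the coverage items that matter for this application.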
Functional Coverage: Another Technique in Your Verification Toolbox

Code coverage is just one of the coverage techniques; the other is functional coverage. Together, these coverage types provide a more complete set of information to help answer some of our customers’ critical system-level questions. Figure 3 tabulates what we’ve heard from several users: their most commonly asked questions that a combination of code and functional coverage can help answer.

Figure 3: User explorations and coverage type

These recent posts merely touched on the first two rows. If you want to hear more about the other use cases highlighted in Figure 3, check out this webinar, “Effective System-Level Coverage Use Cases for Functional Verification.” Also, read this Tech Design Forum blog post on focusing coverage for system-level integration.

Here are some FAQs that you may find interesting:

Q: Where should I start? Should I focus on functional coverage or code coverage techniques? I can’t do everything!

A: Consider starting with code coverage, incorporating it as an initial step in using coverage to improve verification. Code coverage has the benefit of being an easy first step; instrumentation is automated via tool switches. The ultimate solution is to adopt a metric-driven verification methodology and incorporate functional coverage. That does require upfront effort to create a plan, define the “whats” in the plan, and define the coverage items that answer the “hows.”

Q: We use many tools to help coordinate our system-level tests. It is very complicated. If I enable coverage, how do I coordinate all my data?

A: There are management tools that help manage a regression environment and all of the coverage data collected across that regression. It’s fairly easy to pull data into the analysis environment at different stages; you can “mine” coverage data from your existing regression environment. Once you have coverage data, you can combine and filter it along many dimensions, which certainly helps with the complexity. (A toy illustration of merging and filtering follows this FAQ list.)

Q: How common is coverage analysis in hardware-assisted verification?

A: Four or five years ago, coverage usage with hardware-assisted verification was limited to hard-core verification environments experimenting in the area. As technology and usability have improved, mainstream users have been adopting it within their flows. The need to expose your design to the additional scenarios that hardware-assisted verification enables keeps increasing; these scenarios are otherwise difficult to model in a pure testbench-driven software simulation environment, where coverage and metric-driven verification have traditionally been prominent.
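As a toy illustration of that combine-and-filter idea, here is a minimal Python sketch. It assumes a hypothetical flat format in which each run maps coverage-item paths to hit status; real regression-management tools work against vendor coverage databases, and none of the names below come from an actual tool.

```python
# Minimal sketch of merging and filtering coverage data mined from a
# regression (hypothetical format; real flows use vendor coverage databases).

from collections import defaultdict

def merge_runs(runs: list[dict[str, bool]]) -> dict[str, bool]:
    """Union the hit status of each coverage item across regression runs."""
    merged: dict[str, bool] = defaultdict(bool)
    for run in runs:
        for item, hit in run.items():
            merged[item] = merged[item] or hit
    return dict(merged)

def filter_region(data: dict[str, bool], prefix: str) -> dict[str, bool]:
    """Keep only items under one instance hierarchy, e.g. the new fabric."""
    return {k: v for k, v in data.items() if k.startswith(prefix)}

# Two hypothetical runs: one exercised the fabric, one a core unit.
run_a = {"top.fabric.arb": True,  "top.fabric.dec": False, "top.core0.alu": False}
run_b = {"top.fabric.arb": False, "top.fabric.dec": True,  "top.core0.alu": True}

merged = merge_runs([run_a, run_b])
fabric = filter_region(merged, "top.fabric")
print(fabric)  # {'top.fabric.arb': True, 'top.fabric.dec': True}
```

The same merge step is what lets you focus on different instance hierarchies at different times, as described above, and still end up with a single complete view.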
Raj Mathur
