At CDNLive in Taiwan last year, Mediatek presented their experiences with Perspec System Verifier. This post assumes that you know the basics about Perspec. If you need an introduction, then look at my earlier posts: A Perspective on Perspec, Perspec Modeling, and Using Perspec.

The basic flow of Perspec follows five steps:

- Modeling: the device needs to be modeled at a very high level, so that Perspec can reason about the use of resources
- Scenarios: the high-level scenarios for testing need to be created (such as making a phone call, or taking a photo with the camera)
- Test generation: actual tests are generated from these scenarios, consisting of SystemVerilog testbenches (for hardware) and C/C++ code (for exercising software)
- Run: these tests can then be run on various platforms such as simulation, emulation, FPGA prototyping, or virtual platforms
- Debug: the results of the tests can be analyzed to determine performance, find bottlenecks, track down the root cause of errors, and so on

Mediatek's Experience

Mediatek used Perspec with the goal of improving verification efficiency on a CPU subsystem. They wanted to increase the number of mixed and random C test cases that they could produce per day, to reduce debugging time, and to increase coverage. These are very general goals that almost any verification team is aiming for, too.

The above diagram shows how Mediatek split the flow between the test modeling engineer and the test writer.

One important aspect of Perspec is that it can also add randomization so that, for example, tests are repeated many times, on different cores and with the memory caches in different states. That way, issues are not hidden by the limited amount of testing, or by determinism in the test that would not be present in real hardware. (A sketch of what such a randomized multi-core test skeleton might look like appears at the end of this post.)

Results

The first set of results (above) shows the increased efficiency of using Perspec compared with manual directed tests. The biggest differences are functional coverage (up from 2.5% to 100%) and the number of test cases per hour (up from 1 to 100). There is some up-front cost: it took 10 days to set up the Perspec environment versus 5 days for the manual environment. Part of the reason for the increased coverage is simply that Perspec generates much more complex test cases that exercise more of the design, up from 0.5ms of simulated time to 50ms, an increase of 100 times. In a typical design, there are many issues that will not be discoverable if the longest test case only runs for 0.5ms.

The second set of results shows the power of Perspec for hardware/software co-verification. Software/hardware interface errors, which Perspec found in 10 minutes, took anywhere from hours to many weeks to find manually.

What people generally discover with Perspec is not that they can do what they would normally do a little bit faster, although that is true. It is that it makes possible tests so extensive and complex that you would never attempt to create them by hand.

Mediatek summarized their experience in a few bullet points on their final slide.

More details are on the Perspec System Verifier page.
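To make the randomization point above concrete, here is a minimal sketch of the kind of seeded, multi-core C test skeleton that system-level test generators emit. To be clear, this is not actual Perspec output: the structure, names (core_test, shared_buf), core count, and iteration counts are all illustrative assumptions.

```c
/* Hypothetical sketch of a randomized multi-core memory test.
 * NOT actual Perspec output; names and parameters are illustrative. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_CORES  4     /* assumed core count for the CPU subsystem   */
#define ITERATIONS 64    /* repetitions per core; real runs are longer */
#define BUF_WORDS  1024  /* shared buffer exercised by every core      */

static uint32_t shared_buf[BUF_WORDS];

/* Per-core worker: writes a seeded pseudo-random pattern to its own
 * stride of the buffer, then reads it back. Varying the seed per core
 * mimics how a generated test perturbs memory state between runs while
 * staying deterministic enough to reproduce a failure. */
static void *core_test(void *arg)
{
    unsigned core_id = (unsigned)(uintptr_t)arg;
    unsigned seed = 0xC0FFEEu ^ core_id;  /* deterministic per core */

    for (unsigned iter = 0; iter < ITERATIONS; iter++) {
        for (unsigned i = core_id; i < BUF_WORDS; i += NUM_CORES) {
            uint32_t pattern = (uint32_t)rand_r(&seed);
            shared_buf[i] = pattern;
            if (shared_buf[i] != pattern) {  /* read-back check */
                fprintf(stderr, "core %u: mismatch at word %u\n",
                        core_id, i);
                exit(EXIT_FAILURE);
            }
        }
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_CORES];

    /* Launch one worker per core and wait for all of them. */
    for (unsigned c = 0; c < NUM_CORES; c++)
        pthread_create(&threads[c], NULL, core_test,
                       (void *)(uintptr_t)c);
    for (unsigned c = 0; c < NUM_CORES; c++)
        pthread_join(threads[c], NULL);

    printf("PASS: %d cores x %d iterations\n", NUM_CORES, ITERATIONS);
    return 0;
}
```

A real generated test would go further than this sketch: pinning threads to specific physical cores, flushing or pre-loading the caches between iterations, and varying the seed from run to run so that each regression exercises a different interleaving.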