
Software-Driven Hardware Verification

There is a transition going on in the system and semiconductor worlds. Some system companies are starting to do their own semiconductor design; it is notable, for example, that all the leading smartphone vendors design their own application processors. At the same time, semiconductor companies have to create a large part of the software stack for each SoC, since the software and silicon are intimately related. Both trends mean that software and SoCs need to be designed in parallel. And that is not the only change going on. There is more parallelism in everything, from the inter-relation of thermal and power, to packaging and EMIR analysis, to system architecture and test strategy, and more. This highly concurrent design process is what we at Cadence call System Design Enablement, or SDE.

For example, an SoC intended for a smartphone has to run Android (with one obvious big exception). It doesn't matter whether it is a smartphone company designing its own chips or an SoC company selling standard products to other manufacturers; the requirements are very similar in either case, and Android simply has to run on the chip. No company designing such a chip is going to tape out the design without first running Android on a model of the chip. This is not just to ensure that the software runs. Other major characteristics, such as the effectiveness of the SoC power architecture or the thermal behavior in different modes (making a call, playing a game, listening to an MP3), need to be measured too. This is software-driven hardware verification.

It is not specific to the smartphone example, either. Chips for automotive, vision, and IoT devices such as wearables all have a large software stack, and the most basic function of the SoC, before anything else needs to be considered, is "run the software."

Verification always requires a multi-faceted approach. Software-driven hardware verification, in fact, can only be used relatively late in the design cycle, once enough of the design has been completed for the software to run. Earlier, at the block level, verification can be done with simulation and verification IP (VIP), with formal techniques, or even with FPGA prototyping. But ultimately, when the design is approaching tapeout and most of the blocks exist, the software needs to be run.

There is a major challenge, however. Booting Android, let alone running any application software once it is booted, requires billions of vectors, and the SoC on which the software has to run may itself consist of billions of gates. But it has to be done: the cost in terms of both schedule and dollars is far too great to risk taping out an SoC where all the blocks have been verified but the ultimate system verification, running the software, has not been done.

There are two key technologies for software-driven hardware verification. The first is emulation. Emulators are expensive; there is no escaping it. However, they also provide the cheapest vectors you can buy; you just have to need a lot of them. And to do software-driven hardware verification, you do need a lot. An analogy: per kilo, it is a lot cheaper to buy a container-load of pork bellies on the commodity market than a packet of bacon at the supermarket. You just need to want a lot of bacon for it to make sense. Over the years, we saw that emulation could sometimes be the weak link because of a lack of flexibility and the difficulty of getting a design into the system.
Ten years ago, getting a design into an emulator could literally take months, but now the landscape is different. Emulation tools can now accept anything that RTL simulation can accept and compile it extremely fast.

The second key technology for software-driven hardware verification is virtual platform technology. This allows code written for one microprocessor to be run on a host processor with a different architecture. The binary of the software load runs on a "model" of the processor. The word "model" is in quotes because, under the hood, the microprocessor instructions are compiled on the fly into x86 instructions the first time they are encountered. This so-called just-in-time (JIT) compilation is very similar to the way modern Java runtimes work. In fact, the compilation sometimes only takes place the second time an instruction sequence is seen, since so much code during system boot is executed only once and does not justify the cost of compilation versus simply interpreting the instructions (a minimal sketch of this policy appears below). The reason for using virtual platforms is that they are much faster and simpler than running a full RTL model of the processor. Even if an RTL model of the microprocessor is available, it is not the best approach to use, even with the enormous throughput of an emulator.

These two approaches work hand in hand in what is sometimes called hybrid verification. The code binary (Android, let's say) runs on the virtual platform, and the rest of the design can be compiled from RTL and run on the emulation platform. The two parts are automatically linked together so that when the code for a device driver, say, accesses its corresponding device, the RTL in the emulation platform sees the vectors (a second sketch below illustrates this kind of routing).

The software development team for an SoC faces a similar problem: checking that the software they write runs on the SoC before there is an SoC to run it on. Their problem is a little less severe, because it is a lot easier to change software after an SoC is delivered than it is to change the SoC itself. Software-driven verification works for the development teams too, because it gives them a platform for testing their code that runs fast enough to be workable.
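To make that "compile only on the second encounter" policy concrete, here is a minimal sketch in Python. It is purely illustrative and assumes translation happens per guest basic block; the names BlockTranslator, interpret_block, and compile_block are invented for this post and are not any vendor's actual API.

    class BlockTranslator:
        # Caches translated blocks, paying the compile cost only on re-execution.

        def __init__(self):
            self.seen_once = set()   # guest block addresses interpreted exactly once
            self.compiled = {}       # guest block address -> host-native callable

        def execute(self, guest_pc, guest_block):
            # Fast path: the block has already been translated to host code.
            if guest_pc in self.compiled:
                return self.compiled[guest_pc]()

            if guest_pc in self.seen_once:
                # Second encounter: evidently not run-once boot code, so the
                # one-time translation cost is now worth paying.
                self.compiled[guest_pc] = self.compile_block(guest_block)
                return self.compiled[guest_pc]()

            # First encounter: just interpret; much boot code never comes back.
            self.seen_once.add(guest_pc)
            return self.interpret_block(guest_block)

        def interpret_block(self, guest_block):
            # Placeholder for stepping through guest instructions one at a time.
            for insn in guest_block:
                pass
            return None

        def compile_block(self, guest_block):
            # Placeholder for emitting host (x86) code for the whole block at once.
            def host_fn():
                return None
            return host_fn

Calling execute() repeatedly with the same guest_pc interprets the block once, translates it on the second call, and runs the cached translation from then on, which is exactly the trade-off described above.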

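Here, similarly, is a rough sketch of the hybrid link itself: a memory-mapped access made by driver code running on the virtual platform is routed by address either to fast host memory or, for the window owned by the block under test, across the link to the RTL on the emulator. Everything in it (HybridBus, EmulatedUart, the 0x40000000 window) is invented for illustration; real flows connect the two sides through standard transactor interfaces.

    UART_BASE = 0x40000000  # assumed address window owned by the emulated RTL block
    UART_SIZE = 0x1000

    class EmulatedUart:
        # Stands in for the RTL block on the emulator: register accesses arriving
        # over the link are turned into signal-level vectors there.
        def write_register(self, offset, value):
            print("emulator sees write: offset=0x%x data=0x%x" % (offset, value))

        def read_register(self, offset):
            return 0x1  # e.g. a status register reporting "TX ready"

    class HybridBus:
        # Routes memory accesses from the virtual platform: ordinary addresses stay
        # in fast host memory, addresses owned by the RTL cross over to the emulator.
        def __init__(self):
            self.host_memory = {}
            self.rtl_device = EmulatedUart()

        def write(self, addr, value):
            if UART_BASE <= addr < UART_BASE + UART_SIZE:
                self.rtl_device.write_register(addr - UART_BASE, value)
            else:
                self.host_memory[addr] = value

        def read(self, addr):
            if UART_BASE <= addr < UART_BASE + UART_SIZE:
                return self.rtl_device.read_register(addr - UART_BASE)
            return self.host_memory.get(addr, 0)

    # What a driver running on the virtual platform effectively does:
    bus = HybridBus()
    if bus.read(UART_BASE + 0x4):         # poll the (emulated) status register
        bus.write(UART_BASE + 0x0, 0x41)  # write a character to the data register

The point is that the driver code does not change: it pokes the same registers it would on silicon, and the address decode decides whether an access is serviced locally or becomes vectors in the emulator.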