This year's Jasper User Group conference finished up with a dose of realism in a panel session focused on the practicalities of bringing formal techniques into large companies with established flows. What do you tell your boss? How do you get skeptical design engineers to learn? What is the easy-to-adopt low-hanging fruit? In Oz's words from the first day of the conference, how do you democratize formal and automatic verification?

I'm going to summarize the discussion and not try to attribute every remark to individual panelists, especially since many of them made similar points. But the panelists were:

- Ashish Darbari of Imagination
- Ali Habibi of Qualcomm
- Stuart Hoad of PMC-Sierra
- Normando Monticello of Broadcom
- Lawrence Loh of Cadence

One of the challenges was getting started with formal. Management won't put significant resources onto something that hasn't been tried before, since they have to pull them off other work (usually simulation). On the other hand, you can't get the results needed to justify the investment without some manpower. Realistically, despite formal verification having been around for at least 20 years, it still requires a certain amount of missionary zeal. Even in a large company like Qualcomm or Broadcom, there are only one or two of those missionaries who understand the benefits and are chartered to lead the company to the promised land.

Qualcomm, for example, reckons that simulation is still 99% of verification. There is some formal and some emulation, but the backbone is about 100,000 machines used for simulation. Even when a block is formally verified, they still need simulation. It is not so much that they don't trust the formal tools; rather, their engineers are not formal experts and make mistakes.

There seemed to be three areas where adopting formal was especially attractive:

- The simplest stuff, like connectivity and register access. This is not rocket science and doesn't require deep formal expertise.
- Blocks that cannot be verified by simulation. The problem here is that blocks that can't be verified by simulation are probably not easy to verify formally either; it requires expert knowledge. Deadlock is an especially attractive target, since if it eludes simulation and escapes into the wild it can be a double-digit-million-dollar problem.
- The corner-case stuff where formal approaches do so well and simulation does not. It gets attention when a block that has been signed off from a simulation point of view is suddenly proved to contain a bug that was missed. When JasperGold found a bug in the ARM cache-coherency logic a couple of years ago (before it shipped, I should say), people noticed.

Everyone thought that justifying the return on investment was hard. It is hard to show value, and important to pick the right battles. Some of the easiest, almost trivial, formal verification, such as connectivity, turns out to be the easiest to use to justify the investment. It is hard to convince management that formal is not taking people away from simulation. It is a steep ramp, and the skill set doesn't always match what is really required. Most people in the audience already love formal, but when management asks how many resources they will save, it is hard to answer. Better quality and shorter time are nice, but that is not completely compelling.

I have the concept of "predictable pain": engineering management knows that what they are doing is not optimal, but they know how much pain it is causing, say an extra five engineers and a month. It is very hard to sell against predictable pain (and justifying the investment internally is basically selling). With formal it is especially hard, since formal tends to be unpredictable. So management may see this as taking away predictable pain and replacing it with the possibility of unpredictable pain. The three areas in the list above don't suffer from this.
The simple stuff is often expensive to verify with simulation, so the ROI is clearer. The blocks that cannot be verified with simulation are already unpredictable pain. And errors missed by simulation show that what looked like predictable pain was anything but.

Throwing some interns at the problem seems attractive, since they often will find bugs in weeks that simulation has been missing for years. Eventually, the best approach seems to be to have a dedicated formal team. This solves two problems: they are not considered to be taking resources from simulation, since they were never there in the first place, and they are (or become) experts who can handle the less straightforward blocks. But it can take years to get there from a standing start. Early victories with novice engineers need to be followed by gradually building up a formal capability and engineers who start to specialize. This way you end up with, perhaps, half a dozen true experts in a company that may have 10,000 engineers, and another 200 engineers who use formal but only for a few weeks each year. Everyone wants to be able to go to their boss and say that you can take 100 simulation engineers and replace them with 50 formal engineers, but it just doesn't work like that.

Where formal does seem to shine is in finding high-quality bugs very early or very late. Early means a design of higher quality that requires less simulation and is easier to debug. Late means finding bugs that would otherwise take months to find, or never be found at all. Formal has a big part to play going forward, since without it the simulation convergence metrics are slipping behind. In some ways, formal is a way to accelerate simulation and use it more effectively.

Probably nobody in the audience, and certainly not on the panel, could imagine not using formal. But this is the Jasper User Group, so not exactly a representative sample of the world's design engineers.
The challenge is not to preach to the converted but to democratize the approaches that the converted know can be so powerful.