Channel: Cadence Blogs

Jasper User Group 2018

This week it is CDNLive Israel. But last week it was the Jasper User Group (JUG). As it happens, Jasper was one of the early companies to sign up with SemiWiki when we started it, so I've been going to the Jasper User Group for longer than I've been at Cadence, and for longer than Jasper has been part of Cadence. As usual, this year's meeting was held in the building 10 auditorium on the Cadence campus.

It has been interesting to watch formal verification evolve from an esoteric backwater that required a PhD to drive it into a mainstream part of the verification arsenal. Twenty years ago, any conference where people presented papers about formal verification would have been very academic, all about algorithms for theorem-proving. Nobody talked about using formal on real designs because nobody tried to use it on anything beyond toy designs. Last week's JUG was the annual reminder of just how far formal has come. Pete Hardee, opening the meeting, said it is the biggest gathering of formal experts every year. These are experts at using formal on real designs. If there were themes that ran through the presentations, I would say they were moving formal from pure functional verification of RTL being used in an SoC (or for IP) towards:

Using formal for post-silicon debug.
Moving from simulation signoff to formal signoff.
Formal and functional safety.

Ziyad Hanna

The day kicked off with Ziyad Hanna giving a technical update on what has gone into the releases of JasperGold since JUG last year. He would actually be back at the end of the day, and yet again at the end of the second day, to give the roadmap of what is expected in the next year or so. But this is one of those occasions when, as they say, "you had to be there." I'm not allowed to put the roadmaps on Breakfast Bytes, which is completely unrestricted as to who can read it. Besides, if you are a serious user of JasperGold then you really should be making the investment to spend two days of the year at the meeting.
In the past, Ziyad said, there were a lot of challenges to adoption. Some are technical, but some are more social. They are:

Scalability
Resource utilization (users' expertise; machine requirements for CPU, memory, and storage)
Proof power (effective engine and technology selection)
Effective engine/technology selection (proof power, model checking efficiency)
Inability to leverage repetitive analysis (reuse of, and learning from, previous runs)
Lack of complete, accurate, and understandable coverage

JasperGold 2018.09 (the latest release) solves many of these issues. Ziyad had some benchmarks showing that compilation performance had increased by 3.5X while requiring only about a third of the memory (compared to 2016.12). Debug has improved by 4-8X on large traces and is 10X faster on signal browsing. As an example of the improvements, above is reset analysis on a customer design. You can see the improvement each month from the start of 2018 until October (basically until JUG). The overall improvement is nearly 6X. Overall, 2018.09 has improved core engine performance by 10X (measured in properties per second), and core engine proof success has improved with a 37% reduction in undetermined properties.

Some of this improvement is due to machine learning (ML). The current machine learning, ProofMaster, has been trained on a set of 500 designs running for 4 hours each, with a 1200 maximum trace length. That allows inferring the best solver, which is then used as the default for that sort of design. In the future, the plan is to allow customers to train the ML on their own designs, which are presumably more like their current designs than the set that Cadence has in-house. And talking of solvers, 2018.09 has a new solver engine, B4, which is a big improvement over B in many cases.

The result is great success for JasperGold. As Ziyad said wrapping up: yes, I know there are no numbers on the Y-axis. But the number of customers has doubled since JUG last year.
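To make the idea of "infer the best solver from previously seen designs" concrete, here is a toy nearest-neighbor sketch. Everything in it (the feature vector, the engine names as training labels, the distance metric) is invented for illustration; it says nothing about how ProofMaster actually works internally.

```python
def nearest_solver(history, design):
    """history: list of (feature_vector, best_solver) pairs from past runs.
    Return the solver that worked best on the most similar known design
    (plain 1-nearest-neighbor on squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(history, key=lambda rec: dist(rec[0], design))[1]

# Hypothetical features: (flop count, FIFO depth class, control/data ratio).
history = [
    ((1000, 5, 0.2), "B4"),
    ((50, 2, 0.9), "Hp"),
    ((800, 4, 0.3), "B4"),
]
nearest_solver(history, (900, 5, 0.25))  # picks "B4", the closest match
```

The appeal of letting customers train on their own designs is visible even in this toy: the quality of the default depends entirely on how representative `history` is of the design being run.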
Vigyan Singhal

Vigyan is the CEO of Oski, but is also the founder of Jasper. He gave a talk about deadlock and formal verification. Deadlock is when the system locks up, usually because one part of the system is waiting for another part to do something, but that part is in turn waiting for the first part. One challenge is that typically all the blocks in the design verify correctly individually; deadlock is a system problem. He had an example of a memory interface with a FIFO where the read responses (data) were mixed with the write responses (acknowledgments) in the same FIFO. The read data was only taken out of the FIFO once the last piece of data had arrived (so the transfer was complete, and all the data could be taken and processed). However, under certain circumstances, the FIFO would contain too many write responses and, through backpressure, the last word could not get in. The system deadlocked. This was a perfect example, since both blocks seemed correct, but there were unspoken assumptions between the designers of the two blocks.

One reason for using formal verification for deadlock hunting is that deadlock is extremely hard to find with simulation. Unless you are very lucky, you won't hit on the right combination of weird circumstances that cause the problem. But the "right combination of weird circumstances" is the bread and butter of formal verification. This was summarized nicely later in the day by Broadcom's Khurram Ghani, who contrasted simulation and formal:

Simulation: "it is easy to show that the basic packets can flow, but hard to prove the corner cases"
Formal: "it is easy to prove the corner cases, but hard to show that a single packet can flow"

But formal is not a panacea, since it is hard to write properties for deadlock. The saving grace, though, is that in a deadlock things are stuck. There are all sorts of reasons a system can lock up, but each state machine is stuck individually, and that can be used for deadlock bug hunting.
As regards signoff, Vigyan emphasized that you can find deadlocks, and increase your confidence that there are none lurking, but you can't formally prove that your design is deadlock-free.

Elchanan Rappaport of Veriest

One other presentation that I'll cover briefly here. Elchanan discussed a problem that almost everyone faces: to get formal to "work" it is often necessary to reduce the design to something simpler. His talk was titled A Methodology for Evaluating the Risk of Reductions. For example, if your design contains a 256-deep FIFO, then formal is not going to be able to follow data through it. Let's assume the FIFO is implemented with registers rather than a memory and pointers. Then, almost certainly, except for the first and last couple of registers, the ones in between are all the same. So reducing the FIFO to 6 elements and dropping the other 250 has no effect, in the sense that if there is a problem in the big FIFO then it will show up in the reduced one. Or, more importantly, if the reduced one is correct, then the big one is too. The problem is that you'd really like to prove that, not just rely on hand-waving arguments about how knocking out all those registers in the middle cannot change anything. Similar arguments justify reducing the size of counters (so that they count down fast enough for formal to analyze), and other similar reductions. There are also property reductions, where the sequence length is limited. But in all cases, today, it is the designer's expertise that is in play; the formal tool is not proving that the reductions are safe. Elchanan had some Talmudic insight on how formal engineers once considered designs sacrosanct and would never change them, but now they start applying reductions before they even start testing. The Talmud apparently says, "The second time a man commits a transgression, he already sees it as permissible." I guess that is the Talmudic equivalent of a slippery slope.
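One way to make that "you'd really like to prove it" concrete, in a toy explicit-state setting: enumerate the states of both the full and the reduced design, collect which interesting conditions (cover points) each can reach, and flag any condition that the full design reaches but the reduced one cannot. The model below (a FIFO occupancy counter, invented for illustration) is nothing like how a commercial formal tool works internally, which is symbolic rather than enumerative, but it shows the shape of the check.

```python
def reachable_labels(init, step, label):
    """Explicit-state reachability: collect every cover label that holds
    on some state reachable from init. step(s) returns successor states."""
    seen, frontier, labels = {init}, [init], set()
    while frontier:
        s = frontier.pop()
        labels |= label(s)
        for t in step(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return labels

# Toy "design": FIFO occupancy, which can push (+1) or pop (-1) each cycle.
def make_step(depth):
    return lambda n: {m for m in (n - 1, n + 1) if 0 <= m <= depth}

# Cover points of interest: the FIFO goes empty, the FIFO goes full.
def make_label(depth):
    return lambda n: ({"empty"} if n == 0 else set()) | ({"full"} if n == depth else set())

full_covers = reachable_labels(0, make_step(256), make_label(256))
reduced_covers = reachable_labels(0, make_step(6), make_label(6))
missing = full_covers - reduced_covers  # non-empty would flag a risky reduction
```

Here `missing` comes out empty: the 6-deep FIFO can still go empty and still go full, so nothing we cared about was made unreachable by dropping the other 250 entries.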
How can a failing scenario give a pass result after reductions are applied? Since formal tests all possible scenarios, it must be that we made the failing scenario impossible. Elchanan's approach is to run reachability coverage on both the full and the reduced models and make sure that nothing became unreachable as a result of the reductions. Of course, this is not what reachability coverage is usually used for, and there may be other reasons for unreachability, as shown in the above diagram. But it seems the process works. When applied to a customer design, he got the results in the table below. So using this approach, it is possible to make reductions and then, instead of crossing your fingers, prove that they don't mask problems, so that it really is true that if the reduced design passes, the full design would have passed too.

Coming Soon

In the next couple of weeks, watch for two more posts from Jasper User Group:

The first on using formal for post-silicon debug. This will combine the presentations from Laurent Arditi of Arm, In Case of Emergency Call 1-800-FORMAL, and from Jim Kasak of HP Enterprise, Accelerating Post-Silicon Debug with Formal Verification. As further encouragement, Laurent won this year's best presentation award, and Jim has won the best presentation award 3 times at past JUGs.

The second on formal signoff. This will combine the presentations from Khurram Ghani of Broadcom, Replacing Simulation with Formal for Block-Level Signoff: A Case Study, and from Mirella Negro Marcigaglia of ST Microelectronics, Static and Dynamic Verification Interoperability in Digital IP Verification.

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.

