This year's Jasper User Group (JUG) took place on 7th November. It was the 10th JUG, and the 4th since Jasper was acquired by Cadence. I consulted for Jasper before I worked for Cadence, so I took the opportunity to turn up (straight from CDNLive Israel the day before) in my collectors'-item Jasper User Group fleece with the old Jasper logo. Oz Levia, opening the conference, said it was the world's biggest gathering of commercial formal verification users. Cadence also has the world's largest formal R&D team (with large teams in Brazil and Israel. Do you speak Portuguese and Hebrew?).

There were ten customer case studies: from Analog Devices; two from Arm; Cadence's IP group; HPE; NVIDIA; Samsung; two from TI; and Trend Micro. Plus two invited speakers: a keynote by MV Achutha Kiran Kumar (a co-author of the book; we had Erik Seligman, another co-author, keynote last year) and Vigyan Singhal, the CEO of Oski Technology.

Last year was a strong year for formal verification, both as a general area of verification and for Cadence in particular. Formal is a mainstream part of verification now, not just something that is only understood by the one weird PhD in the office down the hall. This has come about through a mixture of improved capabilities in JasperGold, improved methodologies, and hard work by pioneers of formal approaches establishing the ROI and importance of formal with their management. Indeed, both of the invited speakers spent a good part of their time on how to present formal to management and design groups so that adoption occurs.

I will write about the two invited presentations in a separate post, and I will fold the CCIX presentation from IPG into a post about CCIX in general, along with the TSMC/Arm/Xilinx/Cadence demonstrator. Here I'll give an overview of the two best papers of the day. These were voted on by the audience, so this is not my opinion (nor stitched up in the traditional "smoke-filled room" by Pete Hardee and Oz).
Aruba: The Valley of Death

The runner-up was Jim Kasak (he is always at JUG but the name of his company keeps changing: this time it is Aruba, or Aruba-a-Hewlett-Packard-Enterprise-Company, as it seems to be compulsory to say). He presented on Divide and Conquer the Valley of Death. The valley of death is when your formal runs are not converging on a proof, but you are not finding counterexamples (bugs) either. Management wants to know what is being done that will be useful, and when. Plus everyone gets demoralized and embarrassed at their inability to prove a "simple" block correct.

The "Divide and Conquer" of the title is actually three techniques to take a block and make it simpler to verify formally:

1. Partition the original DUT into several smaller DUTs
2. Replace modules with complex state with a much simpler model of the block, prove the properties using the model, and then separately prove that the model is a superset of the original RTL
3. Mutate the design, by things like reducing the depth of FIFOs

He showed these three approaches using a multi-port memory. You want to prove that it works, which means that you want to guarantee that the responses for each port emerge in the correct order. Doing this naively resulted in a proof that had not converged after several days.

The partition approach was used first. It is too complex for me to cover the details here, but one result was splitting up each port (request and reply) into a separate module for verification, and splitting up the request (ingress in his terminology), reply (egress), and the middle part. The third, mutate, approach was used on the ingress and egress FIFOs. The little table above shows the importance. With a FIFO depth of 2, it takes 4 seconds per bit to get convergence. With a FIFO depth of 12, it takes nearly 6 hours per bit. Of course, you need to be sure it is safe to reduce the FIFO depth in this way, which can be hard to determine, since it depends on the details of the implementation.
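To get an intuition for why FIFO depth matters so much, here is a toy sketch in Python (illustrative only, not Jim's flow; the real runtimes depend on JasperGold's engines and the surrounding logic) that enumerates the reachable states of a simple FIFO model. The state count grows exponentially with depth, and the search space a formal tool must cover grows with it.

```python
from collections import deque

def reachable_fifo_states(depth, data_values=(0, 1)):
    """Exhaustively enumerate the reachable states of a simple FIFO model.

    Each state is the tuple of queued entries (oldest first). From any
    state you can push any data value (if not full) or pop (if not empty).
    """
    start = ()                    # FIFO starts empty
    seen = {start}
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        successors = []
        if len(state) < depth:    # push transitions
            successors += [state + (v,) for v in data_values]
        if state:                 # pop transition
            successors.append(state[1:])
        for nxt in successors:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return len(seen)
```

With 1-bit entries the model has 2^(depth+1) - 1 reachable states: `reachable_fifo_states(2)` gives 7, while `reachable_fifo_states(12)` gives 8191, and wider data words make the blow-up far steeper. That exponential growth is the intuition behind the 4-seconds-versus-6-hours gap, even though a proof engine works symbolically rather than by enumeration like this sketch.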
Well, I've used approach number two here, and abstracted Jim's talk into a few paragraphs and left out all the detail. Unfortunately, I can't prove this is equivalent; you should have been there in person. But here's a consolation prize: Jim said that much of the motivation for what he had done came from Juniper's Anamay Sullery's DVCon paper Design Guidelines for Formal Verification, which he recommends everyone read (paper, slides).

Arm: Verification by the Book

The winner of the best paper award was Will Keen of Arm, who presented on Formal Verification by the Book: ISA Formal at Arm. This seems to be a continuation of work that Daryl Stewart discussed at last year's JUG, which you can read about in What is the ARM ARM? With the change in Arm's branding, it's now the Arm ARM, the Arm Architecture Reference Manual. Daryl talked about making the ARM executable, since the pseudo-code in the old ARM was full of trivial errors. Will reported this year on where the basic idea of automatically verifying the processor against the ARM has reached.

The first step has been to further develop a tool called ArmEx, the Arm Architectural Explorer, which started as an interpreter to execute the pseudocode in the ARM as part of the earlier project. It has since been expanded to write out each instruction in SystemVerilog, mapping the state of the processor before the instruction to the state afterwards. This offers up a methodology to verify the CPU RTL against the ARM directly, as shown in the above diagram. One big gain from this approach is that the ISA formal is based on the committed state of the architecture, and ignores all the speculative operations that are going on in the implementation for performance reasons.
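The idea of checking only committed state can be sketched in a few lines of Python. Everything here is illustrative (tiny 4-bit registers, a single ADD instruction, made-up names), not Arm's tooling: ArmEx emits SystemVerilog from the ARM's pseudocode, and the formal tool does this comparison symbolically rather than by enumeration.

```python
import itertools

WIDTH = 4                   # toy 4-bit registers keep exhaustive checking instant
MASK = (1 << WIDTH) - 1

def spec_add(pre, rd, rn, rm):
    """Architectural spec for ADD rd, rn, rm: a pure function from the
    committed pre-state to the committed post-state, in the spirit of
    what gets extracted from the reference manual's pseudocode."""
    post = dict(pre)
    post[rd] = (pre[rn] + pre[rm]) & MASK
    return post

def impl_add(pre, rd, rn, rm):
    """Stand-in for the RTL's committed result. In the real flow this is
    the processor's architectural state after the instruction commits;
    speculative work in flight never appears here."""
    post = dict(pre)
    post[rd] = (pre[rn] + pre[rm]) & MASK   # hypothetical implementation
    return post

def check_add():
    """Compare spec and implementation over all operand values and
    return a counterexample pre-state, or None if they always agree."""
    for a, b in itertools.product(range(1 << WIDTH), repeat=2):
        pre = {"r0": 0, "r1": a, "r2": b}
        if spec_add(pre, "r0", "r1", "r2") != impl_add(pre, "r0", "r1", "r2"):
            return pre
    return None
```

The point of the sketch is the shape of the check: pre-state in, post-state out, compared only at commit. However convoluted the micro-architecture behind `impl_add` becomes, a mismatch in the committed destination register is a counterexample.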
For example, if instructions are executed speculatively, and then a branch is not taken and the speculative results are discarded, then none of this needs to be verified (or even seen) unless there is a bug and the speculative state gets committed incorrectly. No matter how complex the implementation gets, the architecture is comparatively simple and unchanging. If you add register B to register A, then the processor may contain all sorts of complexities due to multiple issue, instruction merging, branch prediction, pipelines, out-of-order execution, and probably more. But if register A doesn't end up containing the sum of the two registers, then there is a counterexample.

The process has been further automated, automatically generating VIP straight from the ARM. The dream is to automatically verify the processor against all 5,740 pages of the ARM using formal. This project has been successful enough that it is being deployed to all CPU projects inside Arm, and has been or is being used on the cores above. It has already been used on small cores (the M33) up to large cores (the A75) and all sizes in between. It is easy to forget how broad Arm's product line is today compared to when it was pretty much just the ARM7TDMI (see The Design that Made ARM).

But the ISA is only one part of a processor. There is potential to take the same model and apply it to other aspects of the design, such as the memory model and the MMU, and generally to encourage (or mandate) that specifications of functionality have an executable spec against which the implementation can be verified.

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.