Key Findings: Nearly 100% of SoCs are mixed-signal to some extent. Every one of them could benefit from a metrics-driven unified verification methodology for mixed-signal (MD-UVM-MS), but the modeling step is the biggest hurdle to overcome. Without those magical models, the process breaks down for lack of performance, or holes are left in the chip verification.

In the last installment of The Low Road, we were at the mixed-signal verification party. While no one talked about it, we all saw it: the party was raging and everyone was having a great time, but they were all dancing around that big elephant right in the middle of the room. For mixed-signal verification, that elephant is named Modeling.

To get to a fully verified SoC, the analog portions of the design have to run orders of magnitude faster than the speediest SPICE engine available. That means an abstraction of the behavior must be created. It puts a lot of people off when you tell them they have to do something extra in order to finish sooner, but it couldn't be more true. If you want to keep dancing around like the elephant isn't there, then enjoy your day. If you want to see about clearing the pachyderm from the dance floor, you'll want to read on a little more....

Figure 1: The elephant in the room: who's going to create the model?

Whose job is it?

Modeling analog/mixed-signal behavior for use in SoC verification seems like the ultimate hot potato. The analog team that creates the IP blocks says it doesn't have the expertise in digital verification to create a high-performance model. The digital designers say they don't understand anything but ones and zeroes. The verification team, usually digitally-centric by background, is stuck in the middle (and has historically said, "I just use the collateral from the design teams to do my job; I don't create it").

If there is an SoC verification team, then ensuring that the entire chip is verified ultimately rests on its shoulders, whether or not it gets all of the models it needs from the various design teams on the project. That means that if a chip does not work because of a modeling error, the failure ought to point back to the verification team. If not, is it just a "systemic error" not accounted for in the methodology? That seems like a bad answer.

That all makes the most valuable guy in the room the engineer whose knowledge spans the three worlds of analog, digital, and verification. A growing number of "mixed-signal verification engineers" can be found on SoC verification teams, and having such a specialist appears to be the best approach to getting the job done, and done right. So, my vote is for the verification team to step up and incorporate the expertise required to do a complete job of SoC verification, analog included. (I know my popularity probably did not soar with the attendees of DVCon with that statement, but the job has to get done.)

It's a game of trade-offs

The computation required for continuous-time behavior exceeds that for discrete-time behavior by orders of magnitude (as seen in Figure 2 below). Trading away detail in exchange for runtime is what enables verification techniques like software-driven testbenches. But abstraction is a lossy process, so care must be taken to fully understand what is lost and to test those elements in the appropriate domain (continuous time, frequency, etc.).

Figure 2: Modeling is required for performance
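To make the trade-off concrete, here is a minimal sketch of the abstraction step, written in Python purely for illustration (in practice this would more likely be a Verilog-AMS or SystemVerilog real-number model). It abstracts a first-order RC low-pass filter into a discrete-time difference equation; the component values, sample rate, and test tones are all invented for this example. The abstract model is cheap to evaluate at every sample, but its accuracy drifts away from the continuous-time answer as the signal frequency approaches the sample rate, which is exactly the kind of loss that has to be understood and tested in the right domain.

```python
# Sketch only: a continuous-time RC pole abstracted to a discrete-time
# difference equation (backward-Euler), with made-up component values.
import numpy as np

R, C = 1e3, 1e-9                      # 1 kOhm, 1 nF -> fc = 1/(2*pi*R*C) ~ 159 kHz
tau = R * C

def analog_gain(f_hz):
    """'Golden' continuous-time magnitude response of the RC pole."""
    return 1.0 / np.sqrt(1.0 + (2 * np.pi * f_hz * tau) ** 2)

def discrete_model(x, fs_hz):
    """Fast behavioral abstraction: y[n] = y[n-1] + a*(x[n] - y[n-1])."""
    a = 1.0 / (1.0 + tau * fs_hz)     # backward-Euler coefficient
    y = np.empty_like(x)
    y_prev = 0.0
    for n, xn in enumerate(x):
        y_prev += a * (xn - y_prev)
        y[n] = y_prev
    return y

fs = 10e6                             # sample rate of the abstract model
t = np.arange(20000) / fs             # 2 ms of simulated time
for f_tone in (10e3, 100e3, 1e6, 4e6):
    y = discrete_model(np.sin(2 * np.pi * f_tone * t), fs)
    settled = y[len(y) // 2:]
    meas = np.sqrt(2) * settled.std()  # amplitude estimate of the settled output
    print(f"{f_tone/1e3:7.0f} kHz  model gain {meas:.3f}  continuous-time {analog_gain(f_tone):.3f}")
```

Close to DC the two answers agree; near the sample rate they visibly diverge. Running the same experiment against the SPICE-level block is how you find out how much fidelity the abstraction gave up, and whether it matters for what you are verifying.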
AFE for instance

The traditional separation of baseband and analog front-end (AFE) chips has been shifting for the past several years. Advances in process technology and analog-to-digital converters, along with the desire for cost reduction, have driven both a re-architecting and a re-partitioning of the long-standing baseband/AFE solution. By moving more digital processing into the AFE, lower-cost architectures can be created, and the 130 or so PCB traces between the chips can be reduced. There is lots of good scholarly work from a few years back on this subject, such as Digital Compensation of Dynamic Acquisition Errors at the Front-End of ADCs and Digital Compensation for Analog Front-Ends: A New Approach to Wireless Transceiver Design.

Figure 3: AFE evolution from first reference (Parastoo)

The digital calibration and compensation can be achieved by introducing a programmable solution, which is in fact the most popular approach amongst the mobile crowd today. By using a microcontroller, the software algorithms become adaptable to process-related issues and to modifications of protocol standards. For the SoC verification team, however, the job just got a whole lot harder. To determine whether the interplay of the digital control and the analog function is working correctly, the software algorithms must be simulated on the combination of the two. This is a classic case of inseparable mixed-signal verification.

So, what needs to be in the model is the big question. And the answer is: a lot. In this example, the main sources of dynamic error at the front-end of the ADC must be captured, because the non-linear digital filtering that corrects them is highly frequency dependent. The correction scheme must be verified to show that the nonlinearities are cancelled across the entire bandwidth of the ADC. That means lots of simulation, and it means that the right level of detail must be retained to ensure the integrity of the verification process. It also means that domain experience must be added to the list of expertise of that mixed-signal verification engineer. A toy sketch of this kind of check follows below.
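Here is a deliberately simplified Python sketch of the kind of check involved. A behavioral ADC model carries a made-up third-order error term (a static stand-in for the dynamic, frequency-dependent errors described in the references above), a digital correction stage inverts it, and a tone sweep across the band measures how well the distortion is cancelled. Every coefficient, rate, and frequency here is invented for illustration; the real point is that if the model omitted the error term, this entire class of checks would pass without proving anything.

```python
# Toy mixed-signal check: behavioral ADC with an assumed third-order error,
# a digital polynomial correction, and a tone sweep measuring residual HD3.
import numpy as np

FS = 100e6          # ADC sample rate (illustrative)
K3 = 0.02           # assumed third-order error coefficient

def adc_model(v):
    """Behavioral ADC: ideal sampling plus a weak cubic error term."""
    return v + K3 * v ** 3

def digital_correction(code, k3_est=K3):
    """First-order inverse of the cubic error (the 'compensation' algorithm)."""
    return code - k3_est * code ** 3

def hd3_dbc(x, f_tone):
    """Third-harmonic level relative to the fundamental, in dBc."""
    n = len(x)
    spec = np.abs(np.fft.rfft(x * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, d=1.0 / FS)
    fund = spec[np.argmin(np.abs(freqs - f_tone))]
    hd3 = spec[np.argmin(np.abs(freqs - 3 * f_tone))]
    return 20 * np.log10(hd3 / fund)

N = 1 << 14
t = np.arange(N) / FS
for f in (1e6, 5e6, 10e6, 15e6):             # keep 3*f below Nyquist
    f_tone = round(f * N / FS) * FS / N       # snap to an FFT bin
    raw = adc_model(0.9 * np.sin(2 * np.pi * f_tone * t))
    fixed = digital_correction(raw)
    print(f"{f_tone/1e6:5.2f} MHz  HD3 raw {hd3_dbc(raw, f_tone):6.1f} dBc"
          f"  corrected {hd3_dbc(fixed, f_tone):6.1f} dBc")
```

In a real flow the "correction" would be the production software algorithm running against the AFE model, and the sweep would cover the dynamic, frequency-dependent behavior rather than a single static polynomial; the structure of the check, though, stays the same.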
Back to the pachyderm

There is a lot more to say on this subject, and lots will be said in future posts. The important starting point is the recognition that this potential flaw in the system needs to be examined, and examined by a specialist. Maybe a second opinion from the application domain is needed too. So, put that cute little elephant on your desk as a reminder that the beast can be tamed.

Steve Carlson

Related stories
- It's Late, But the Party is Just Getting Started