Historically, one of the great challenges that analog and mixed-signal designers face has been accounting for the effect of process variation on their designs. Minimizing the effect of process variation is important because it directly impacts the cost of a design. From Pelgrom's Law (1), it is understood that device mismatch due to process variation decreases as the square root of device area increases (see Note 1). For example, reducing the standard deviation, sigma, of the offset voltage from 6mV to 3mV requires transistors that are four times larger. Increasing transistor size increases die cost, since die cost is proportional to die (and transistor) area. In addition to increasing cost, increasing device area may degrade performance because of the larger parasitic capacitances and resistances of larger devices; alternatively, power dissipation may need to increase to maintain performance in spite of those larger parasitics. To optimize a product for an application, that is, for it to meet the target cost with sufficient performance, analog and mixed-signal designers need tools that help them analyze the effect of process variation on their design.

Another way to look at the issue is to remember that analog circuits have not scaled down as quickly as digital circuits: maintaining the same level of performance has historically required roughly the same die area from process generation to process generation. So, while the density of digital circuitry doubles every eighteen months, analog circuits do not scale at the same rate. If an ADC requires 20% of the die area at 180nm, then after two process generations, at the 90nm node, the ADC and the digital logic occupy equivalent areas. After two more process generations, at 45nm, the ADC requires 4x the area of the digital blocks (see Note 2). This example is exaggerated, but the basic point is valid: process variation is an important design consideration for analog design.

Traditionally, the main focus of block-level design has been on parasitic closure, that is, verifying that the circuit meets specification after layout is complete and the parasitic devices extracted from the layout have been accounted for in simulation. This focus on parasitic closure meant that there was only limited support for analyzing the effect of process variation on a design. During the design phase, sensitivity analysis allowed a designer to quantitatively analyze the effect of process parameters on performance. During verification, designers have used corner analysis or Monte Carlo analysis to verify performance across the expected device variation, environmental, and operating conditions. In the past, these analysis tools were sufficient because an experienced designer already understood the circuit architecture, its capabilities, and its limitations, so performance specifications could be achieved by overdesigning the circuit. However, ever-decreasing feature sizes have increased the effect of process variation, and market requirements mean that designers have less margin available for guard-banding their designs. Decreasing feature size also means that power supply voltages are scaling down, and in some cases circuit architectures need to change.
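To make the area trade-off concrete, here is a minimal sketch of the Pelgrom relationship, sigma(dVT) = A_VT / sqrt(W*L). The A_VT matching coefficient and the device dimensions below are illustrative assumptions only, not values for any particular process.

```python
# Minimal sketch of the Pelgrom area/mismatch trade-off:
#   sigma(dVT) = A_VT / sqrt(W * L)
# A_VT, W, and L below are illustrative values, not process data.
import math

A_VT = 3.0       # mV*um, hypothetical matching coefficient
W, L = 1.0, 0.5  # um, hypothetical starting device size

sigma_start = A_VT / math.sqrt(W * L)

# Halving sigma requires the W*L product to grow by (2)^2 = 4x.
sigma_target = sigma_start / 2.0
area_scale = (sigma_start / sigma_target) ** 2

print(f"sigma at W*L = {W * L:.2f} um^2 : {sigma_start:.2f} mV")
print(f"area scale factor to reach {sigma_target:.2f} mV : {area_scale:.1f}x")
```

Because sigma falls only as the square root of area, every halving of mismatch costs a 4x increase in device area, which is why the offset example above (6mV to 3mV) translates directly into a 4x larger transistor and a correspondingly higher die cost.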
An example of how power supply voltage affects circuit architecture is ADC design, where there has been a movement from pipeline ADC designs at legacy nodes, such as 180nm, to successive approximation ADCs (SAR ADCs) at advanced nodes such as 45nm. This change has occurred because a SAR ADC can operate at lower power supply voltages than a pipeline ADC. As a result of the changing requirements placed on designers, there is a need for better support for design analysis than ever before.

Let's look at an example of statistical analysis often performed by analog designers. Shown below is the Signal to Noise and Distortion Ratio (SNDR, or SINAD) of a capacitor D/A converter (CAPDAC). A CAPDAC is used in a successive approximation ADC to generate the reference voltage levels that the input voltage is compared against in order to determine the digital output code. The SINAD of the CAPDAC determines the overall ADC accuracy.

Figure 1: Example of Monte Carlo Analysis Results for Capacitor D/A Converter Signal-to-Noise Ratio

On the left is the distribution of the capacitance variation, and on the right is the CAPDAC Signal-to-Noise Ratio (SNR) distribution. From the SNR distribution, the mean and standard deviation of the CAPDAC SNR can be calculated. If the specification requires the SNR to be greater than 60dB, does this result mean that the yield will be 100%? Another question to consider is whether or not the SNR distribution is Gaussian, since the analysis of the results is affected by the type of distribution. Or we might want to quantify the process capability, Cpk, a parameter used in statistical quality control to understand how much margin the design has. In the past, this type of detailed statistical analysis has not been available in the design environment; to perform it, designers needed to export the data and analyze it with tools such as Microsoft Excel. Beginning in IC617, Cadence® Virtuoso® ADE Explorer was released with features to support a designer's need for statistical analysis. As a note, for detailed technical information you can explore the Cadence Online Support website or contact your Virtuoso front-end AE.

Now let's take a quick look at the enhancements to Monte Carlo analysis, starting with the methods used to generate the samples for the analysis. In Monte Carlo analysis, the values of statistical variables are perturbed based on the distributions defined in the transistor models. The method of selecting the sample points determines how quickly the results converge statistically. Let's start with a quick review: in the CAPDAC example we ran 200 simulations and all of them passed. Does that mean that the yield is 100%? The answer is no; it means that for the sample set used for the Monte Carlo analysis, the yield is 100%. In order to know what the manufacturing yield will be, we need to define a target yield, for example a yield greater than 3 standard deviations, or 99.73%, and define a level of confidence in the result, for example 95%. Then we can use a statistical tool called the Clopper-Pearson method (2) to determine whether the Monte Carlo results have a greater than 95% chance of meeting a yield of 99.73%. The Clopper-Pearson method produces a confidence interval, the minimum and maximum possible yield, given the current yield, the number of Monte Carlo iterations, and so on. Often designers perform a number of simulations, 50, 100, etc., based on experience and assume that the results predict the actual yield in production.
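As an illustration of the Clopper-Pearson calculation described above (this is only the underlying statistics, not the tool's implementation), the following sketch computes the exact binomial confidence interval on yield for the 200-passes-out-of-200 case from the CAPDAC example. The function name and structure are my own.

```python
# Sketch of a Clopper-Pearson (exact binomial) confidence interval for yield.
# Illustrative only; not the Virtuoso ADE Explorer implementation.
from scipy.stats import beta

def clopper_pearson(passes, trials, confidence=0.95):
    """Return (lower, upper) bounds on the true yield."""
    alpha = 1.0 - confidence
    lower = 0.0 if passes == 0 else beta.ppf(alpha / 2, passes, trials - passes + 1)
    upper = 1.0 if passes == trials else beta.ppf(1 - alpha / 2, passes + 1, trials - passes)
    return lower, upper

# 200 Monte Carlo iterations, all passing the SNR > 60dB test
lo, hi = clopper_pearson(200, 200, confidence=0.95)
print(f"95% confidence interval on yield: [{lo:.4f}, {hi:.4f}]")
# The lower bound comes out to roughly 0.982, so 200 out of 200 passing
# samples is not yet enough to claim a 3-sigma (99.73%) yield.
```

In other words, even a "clean" 200-sample run only demonstrates a yield somewhere above about 98% at this confidence level, which is exactly why the confidence interval, and not just the pass count, needs to be checked.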
By checking the confidence interval, we can reduce the risk of missing a yield issue. Another consequence of this more rigorous approach to statistical analysis is that more iterations of Monte Carlo analysis are required. As a result, designers need better sampling methods that reduce the number of samples (Monte Carlo simulation iterations) required before the results can be trusted. Random sampling is the reference method for Monte Carlo sampling since it replicates the actual physical processes that cause variation; however, random sampling is also inefficient, requiring many iterations (simulations) to converge. New sampling methods have been developed to improve the efficiency of Monte Carlo analysis by selecting sample points more uniformly. Shown in Figure 2 is a comparison of the samples selected for two random variables, for example, n-channel mobility and gate oxide thickness. The plots show the samples generated by random sampling and by a new sampling algorithm called Low-Discrepancy Sampling (LDS). Looking at the sample points, it is clear that LDS produces more uniformly spaced sample points. More uniformly spaced sample points mean that the sample space is explored more thoroughly, and as a result the statistical results converge more quickly. This translates into fewer samples being required to correctly estimate the statistical results: yield, mean value, and standard deviation.

Figure 2: Comparison of Random Variable Values Using Random Sampling and LDS Sampling

The LDS sampling method replaces Latin Hypercube sampling because it is just as efficient and it supports Monte Carlo auto-stop. Monte Carlo auto-stop is an enhancement to Monte Carlo analysis that optimizes simulation time. Statistical testing is used to determine whether the design meets some test criterion; for example, for the CAPDAC, assume that you want to know with a 90% level of confidence that the SNR yield is greater than 99.73%. The user defines these criteria at the start of the Monte Carlo analysis, and the results are checked after every iteration. The analysis stops if one of two conditions occurs. First, the analysis stops if the minimum yield from the Clopper-Pearson method is greater than the target, that is, the SNR yield is greater than 99.73%. The Monte Carlo analysis also stops if Virtuoso ADE Explorer finds that the maximum yield from the Clopper-Pearson method cannot exceed 99.73%. Since failing this test means that the design has an issue that needs to be fixed, this result is just as important. It also turns out that failure usually shows up quickly, after only a few iterations of the simulation. As a result, using statistical targets to automatically stop Monte Carlo analysis can significantly reduce simulation time.

In practice, what does this look like? Consider the plot in Figure 3, which shows the upper bound (maximum yield), the lower bound (minimum yield), and the estimated yield of the CAPDAC as a function of the iteration number. The green line is the lower bound of the confidence interval on the estimated yield. By the 300th iteration, we know that the yield is greater than 99% with a confidence level of 90%; in other words, we can be very confident that the CAPDAC yield will be high. In addition, thanks to Monte Carlo auto-stop, we only needed to run the analysis once.

Figure 3: Yield Analysis Plot
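To show the mechanics of the two stopping conditions described above, here is a simplified sketch of an auto-stop loop. It is not Cadence's implementation: the pass/fail results come from a random generator standing in for real simulations, and the target yield, confidence level, and assumed "true" yield are illustrative values.

```python
# Simplified sketch of a Monte Carlo auto-stop decision loop.
# Pass/fail outcomes are faked with a random generator as a stand-in
# for real simulation results; not the Virtuoso implementation.
import numpy as np
from scipy.stats import beta

def yield_bounds(passes, trials, confidence=0.90):
    """Clopper-Pearson (lower, upper) bounds on the true yield."""
    alpha = 1.0 - confidence
    lower = 0.0 if passes == 0 else beta.ppf(alpha / 2, passes, trials - passes + 1)
    upper = 1.0 if passes == trials else beta.ppf(1 - alpha / 2, passes + 1, trials - passes)
    return lower, upper

target_yield = 0.9973            # 3-sigma yield target
true_yield = 0.9990              # hypothetical; unknown in practice
rng = np.random.default_rng(0)

passes = 0
for iteration in range(1, 5001):
    passes += int(rng.random() < true_yield)   # one "simulation" pass/fail
    lo, hi = yield_bounds(passes, iteration, confidence=0.90)
    if lo > target_yield:
        print(f"Stop at iteration {iteration}: target met (lower bound {lo:.4f})")
        break
    if hi < target_yield:
        print(f"Stop at iteration {iteration}: target cannot be met (upper bound {hi:.4f})")
        break
else:
    print("Reached the iteration limit without a definitive answer")
```

In a real flow, the sample points themselves would also come from a low-discrepancy sequence rather than plain pseudo-random draws; for experimentation, scipy's scipy.stats.qmc.Sobol generator is one readily available example of such a sequence.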
To summarize, the two improvements to Monte Carlo sampling are LDS sampling and Monte Carlo auto-stop. LDS sampling uses a new algorithm to more effectively select the sampling points for Monte Carlo analysis. Monte Carlo auto-stop uses the statistical targets, yield and confidence level, to determine when to stop the Monte Carlo analysis. As a result of these two new technologies, the amount of time required for Monte Carlo analysis can be significantly reduced. In the next article, we will look into analyzing Monte Carlo analysis results to better understand our designs and how to improve them.

Note 1: Remember that in analog design, designers rely on good matching to achieve high accuracy. A designer can start with a resistor whose absolute accuracy may vary +/-10% and take advantage of its good relative accuracy, that is, the matching between adjacent resistors, to achieve a highly accurate analog design. For example, the matching between adjacent resistors may be as good as 0.1%, allowing designers to build data converters with 10-bit (roughly 1000 parts per million, ppm), 12-bit (roughly 250 ppm), or even 14-bit (roughly 60 ppm) accuracy.

Note 2: In reality, only the components in the design that are sensitive to process variation fail to scale, so the area of the digital blocks will scale and the area of some of the analog blocks may scale as well. The solution designers typically adopt to maintain scaling is to implement new techniques, such as digitally assisted analog (DAA) design, to compensate for process variation. While adopting DAA may enable better scaling of the design, it also increases schedule risk and verification complexity.

References:
1) M.J.M. Pelgrom, A.C.J. Duinmaijer, and A.P.G. Welbers, "Matching properties of MOS transistors," IEEE Journal of Solid-State Circuits, vol. 24, pp. 1433-1439, October 1989.
2) Clopper-Pearson interval: http://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval