Channel: Cadence Blogs

AMI and IBIS: Who Put the Eye in AMI

Have you heard of IBIS and AMI? If you are French, you know that one is a chain of cheap hotels, and the other is the word for a friend. But if you have anything to do with SerDes design, then you know that they are the way you model the SerDes channel, making sure that the signal successfully makes it from one chip to the other with acceptable performance. However, there is a big change coming that means a lot more people will need to know what these two standards are. The DDR5 standard will be specced to include DFE equalization (DFE stands for Decision Feedback Equalization). This will require modeling using IBIS+AMI to design a system using DDR5 DRAMs, such as the next generation of DIMMs.

IBIS

IBIS stands for I/O Buffer Information Specification, although it is more of a word in its own right these days. It dates from the early 1990s. Back then, signal integrity was just starting to be an issue, and companies like Quad Design (remember them?) produced the first successful commercial signal integrity tools, along with proprietary models and device libraries. In 1993, Intel decided that they didn't want a plethora of proprietary device libraries, and so they invited other companies, including Cadence, to work together to specify a common standard. At the time, Intel was trying to specify the requirements for drivers for the then-new PCI standard. The result of this was IBIS 1.1, the initial standard. It included pull-up and pull-down transistors and their transition times, clamping diodes (that squelch reflections), and a model of the package pin (inductor, resistor, and capacitors). The input model was the same without the driver transistors. Version 2.1, in 1994, became an ANSI/EIA standard. It has developed further since.

Equalization

The diagram above shows the problem. The idealized input bitstream is a perfect square wave.
The channel attenuates different frequencies by different amounts, and as a result, the signal that arrives at the receiver is very distorted. From that input signal, the clock and data have to be recovered.

There are various forms of equalization used to cope with these losses in the channel. At the transmitter, some amount of pre-emphasis or de-emphasis can be done to compensate for channel loss. For example, the pre-emphasis will often boost the high-frequency components of the signal to compensate for the fact that the channel attenuates them the most. The channel itself consists of the package pins and the circuit board trace, so nothing active can be done there, but the passive effects need to be modeled. At the receiver, automatic gain control (AGC) boosts the incoming signal enough that it can be detected, then continuous-time linear equalization (CTLE) is used to cancel inter-symbol interference (between one bit and the next). But the real smarts (and most of the area and power) are in clock data recovery (CDR), which regenerates the clock from the incoming analog signal, and decision feedback equalization (DFE), which uses a FIR filter and adaptively tunes the tap coefficients. But it needs a good signal to work with, hence AGC and CTLE on the front end of the receiver.

The eye diagram above is plotted with time along the horizontal axis and signal voltage along the vertical. The green Gaussians show where the CDR is deriving the clock transitions, and the red Gaussians show how the 1 voltages and the 0 voltages are distributed (separate distributions for each). The eye in the middle is open so long as the recovered clock keeps the sampling point close to the middle of the eye, and the DFE keeps the two red peaks separated and tight, meaning that it correctly discriminates between signals above and below the threshold voltage as 1s and 0s.
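To make the DFE idea concrete, here is a toy sketch in Python. This is not an AMI model; the tap count, the LMS step size, and the function name are all illustrative assumptions. A FIR filter over past decisions predicts the trailing inter-symbol interference, subtracts it before the slicer, and an LMS update adaptively tunes the tap coefficients:

```python
import numpy as np

def dfe(samples, n_taps=4, mu=0.02):
    """Toy decision-feedback equalizer with LMS tap adaptation.

    samples: received samples, one per bit (after AGC/CTLE).
    Returns the recovered decisions as +1.0/-1.0.
    """
    taps = np.zeros(n_taps)      # feedback FIR tap coefficients
    history = np.zeros(n_taps)   # the most recent past decisions
    decisions = []
    for x in samples:
        # Subtract the ISI predicted from past decisions.
        y = x - np.dot(taps, history)
        d = 1.0 if y >= 0.0 else -1.0   # slicer: threshold at 0 V
        # LMS update: nudge the taps to shrink the residual error.
        err = y - d
        taps += mu * err * history
        history = np.roll(history, 1)
        history[0] = d
        decisions.append(d)
    return np.array(decisions)
```

Feeding it a bit stream distorted by a couple of postcursor taps shows the adaptation converging: after a settling period, the tap coefficients approach the channel's postcursor values and the decisions track the transmitted bits.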
Obviously, if the clock recovery drifts too far, or the 0 and 1 voltages get too close, even occasionally, then bit errors will result and the eye will close (when millions of signals are superimposed).

AMI

AMI, the Algorithmic Modeling Interface, was an extension made to IBIS in 2007 to do a better job of modeling the channel. Cadence was at the forefront of driving AMI through the standardization process. The "algorithmic" part of the AMI name refers to the fact that it is executable code (which can be written in any language, although C is typical) that works along with traditional IBIS circuit-level models. By using compiled code, as opposed to text files like IBIS, there can be deeper access to on-chip technology without disclosing any "secret sauce." It is also designed to allow plug-and-play simulation, since typically the manufacturer of the chip containing the transmitter is not the same as the manufacturer of the chip containing the receiver.

The problem that AMI addressed was that high-speed serial links became the dominant way to get data in and out of chips and memories without requiring a huge number of pins (as a parallel approach would). This required lots of data traffic to be simulated, for three reasons:

- To ensure that the links would work reliably required creating eye diagrams (like the one on the right). To guarantee open eyes requires simulating a lot of bits to make sure that the signal is always well below or above the eye, and that the regenerated clock is accurate enough that the sampling point is centered on the eye.
- The primary characteristic of a serial link is the bit-error rate (BER), which might be 10^-12 or 10^-16 (one error in 10^12 or 10^16 bits). With SPICE it might be possible to simulate a few hundred bits, but it is typical to simulate a million bits to get an accurate estimate of the BER.
- Multi-gigabit SerDes use adaptive equalization, not "fire and forget" equalization set up once at initialization and then left unchanged.
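The eye diagram itself is just a superposition: every bit of the simulated waveform is folded onto the same short time window so that millions of bits land on top of one another. A minimal sketch of that folding step (the function name and the two-UI window width are my own choices):

```python
import numpy as np

def fold_eye(t, v, ui, span_ui=2.0):
    """Fold a long simulated waveform into eye-diagram coordinates.

    t, v    : time and voltage sample arrays of the waveform
    ui      : the unit interval (one bit period), same units as t
    span_ui : width of the eye window in UIs (2 shows one full eye)

    Plot v against the folded time to superimpose every bit.
    """
    return np.mod(np.asarray(t), span_ui * ui), np.asarray(v)
```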
This requires lots of data traffic before the equalization stabilizes and locks in, and that is before transmitting any real traffic begins. Adaptive equalization makes adjustments every thousand bits or so: it adjusts the clock regeneration to keep the eye centered, and tries to keep the peaks where 0 and 1 pass through the receiver well separated (and the distributions narrow, to avoid signals occasionally narrowing the eye).

Data rates have gone from 2.5 Gbps to 25 Gbps over about ten years, and will go up to 120 Gbps soon. Future designs go higher still, targeting 400 Gbps and even 1 Tbps (1000 Gbps). Signal encoding has gone from a single eye to PAM4 with multiple eyes, requiring greater precision. The bottom line is that you need to simulate very large bit streams with very fast and accurate equalization models. AMI satisfies that requirement.

Signal integrity analysis for serial links consists of three stages: characterizing the channel, performing the large bit-stream channel simulation, and then post-processing the output to check for open eyes and BER values. Characterizing the channel is done with an impulse response: a step is put into the input, a circuit simulator is used to get the step response, and its derivative gives the impulse response, which captures the behavior of any interconnect between driver and receiver. Channel simulation is done by convolving the impulse response with the bit stream to produce raw waveforms. Millions of bits can be simulated in minutes, even if there is complicated adaptive equalization going on. The diagrams above show how the parts fit together.

DDR5

As I said at the start, in the DDR5 standard, which is expected to be published in summer 2018, DRAM will be specified to include DFE capability. That means you need an AMI model. How do you do that? Is there an easier way than opening a text editor and starting to code? That's the topic of a post coming one day next week.
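The first two stages can be sketched in a few lines of Python. This is a deliberately simplified single-ended NRZ version under my own naming; real channel simulators also handle sampling, crosstalk, and the AMI model hooks:

```python
import numpy as np

def impulse_from_step(t, step):
    """Stage 1: differentiate the circuit-simulated step response
    to get the channel impulse response."""
    return np.gradient(step, t)

def channel_sim(impulse, bits, samples_per_bit, dt):
    """Stage 2: convolve the impulse response with an ideal NRZ
    bit stream to get the raw received waveform."""
    # Ideal transmit waveform: each bit held for samples_per_bit samples.
    tx = np.repeat(np.where(np.asarray(bits) > 0, 1.0, 0.0), samples_per_bit)
    # Superposition: received waveform = tx convolved with the impulse
    # response, with dt scaling the discrete convolution integral.
    return np.convolve(tx, impulse)[: len(tx)] * dt
```

Stage 3, the post-processing, then folds these raw waveforms into eye diagrams and counts errors to estimate the BER.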
Watch out for it. Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
