The World's First Working 7nm 112G Long Reach SerDes Silicon

At the start of November last year, Cadence announced that it was acquiring nusemi, a company focused on the development of high-speed SerDes interfaces. Today, Cadence demonstrated working 7nm SerDes testchips running at 112 Gbps. It is hard to comprehend speeds like this. One way is to slow your computer's clock down to 1 Hz, as I did in my post Numbers Everyone Should Know. Similarly, here, if we scale so that one bit is transmitted per second, it takes about 3,500 years to transmit one second's worth of bits. It really is an incredibly high data rate.

SerDes stands for serializer-deserializer. They are really two separate devices, the transmitter (serializer) and the receiver (deserializer), although they are usually used in pairs to provide bidirectional communication.

There are two separate reasons that SerDes have become so important in semiconductor design. The first is that as chips got bigger and more complex, and buses got wider, the number of pins required started to get out of control. The solution was to switch from parallel interfaces, with a pin for each bit, to serial interfaces running at very high data rates. This got the number of pins under control, at the cost of additional complexity in the SerDes on each chip, more complex board and package design, and especially more signal integrity analysis.

The second reason is the growth of huge datacenters. These require networking of various forms: from the servers to the top-of-rack router, from rack to rack, and long-haul out of the datacenter across the country or the world. Again, one of the limiting resources is space for the "pins", in this case the space on the front panel where the copper or fiber-optic cables connect. For a perspective on this, see my blog post Andy Bechtolsheim Keynote on the Future of Networking, which contains a forecast of how different speeds of networking were expected to ramp last year. I recommend reading it for background, if only to see how the transitions are happening an order of magnitude faster than was forecast. It is a side-effect of the cloud. As Andy put it:

"The technology won't be deployed at all if it is not cheaper, because in the cloud, any deployment is massive. The moment it is cheaper, nobody wants the old technology. The transition is literally 6 months."

When SerDes were first created, they transmitted one bit at a time. You've almost certainly seen eye-diagrams that actually look like eyes. More recently, two bits are transmitted per symbol using an encoding scheme called PAM4, which means that there are actually three eyes. See the diagram below, with a single eye on the left and PAM4 with three eyes on the right. 112G uses PAM4 (there is a small sketch of the bit-to-level mapping at the end of this section).

What's the Roadmap for Datacenter Networking?

We've all seen those graphs of how data is doubling every year, with scary exponentials going up and to the right for the foreseeable future. I could find one but it seems unnecessary. Bandwidth requirements in datacenters at all levels (top of rack, within the datacenter, between datacenters) are all increasing. One way to address this is to use multiple lanes (cables/fibers): for example, run 100G Ethernet over two 50G connections, with a gearbox at each end. The other is to increase the speed of an individual lane.

The table above shows how mainstream networking has evolved and will evolve. Almost all new installation today uses 50G lanes (56G SerDes), often doubled up to get 100G capability (or more). With this new 112G SerDes, this will become a single lane.
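To make the PAM4 discussion above concrete, here is a minimal sketch of the bit-to-level mapping. The function name and code structure are my own illustration, not anything from the Cadence IP; the gray-coded level assignment is the one commonly used for PAM4 Ethernet.

```python
# Hypothetical illustration, not code from the Cadence IP: gray-coded
# PAM4 mapping, two bits per symbol.

# Assumed gray-coded mapping from bit pair to nominal amplitude level;
# gray coding means adjacent levels differ in only one bit, so a
# one-level slicing error corrupts only one bit.
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Map a flat bit list to PAM4 symbols, two bits per symbol."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits in pairs"
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

# At 2 bits/symbol, a 112 Gbps lane only needs a 56 GBd symbol rate,
# which is why the same silicon can also run 56G NRZ at one bit/symbol.
print(pam4_encode([0, 0, 0, 1, 1, 1, 1, 0]))  # -> [-3, -1, 1, 3]
```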
But multilane datacenter speeds are expected to continue to increase, to 800G. And the pace of increase is accelerating. It took about 5 years to go from 10G to 25G lanes, and 3 years to get to 50G lanes, and it is expected to take just 2 years to get to 100G lanes once IP like the SerDes we are announcing gets incorporated into SoCs and then into datacenter equipment.

In the really big picture, the driver is cost. Obviously, you won't build a 200G interface out of 20 lanes of 10G. But the two big limiters driving the transition between network speeds are really power and front-panel space. The hardware itself (capex) is only about 25-35% of the cost of owning a datacenter over its lifetime. The rest is operating expense, mainly electrical power, of which about 55% goes to the equipment itself, another 30% to cooling, and 10% or more to electrical distribution (including battery backup). Front-panel space is limited by the 19" rack, typically 42U tall. Like the old adage about investing in land, front-panel space is valuable because they aren't making any more of it. Higher data rates are one way to get more data through the same amount of front-panel space.

What Is Long Reach?

It is obviously easier to drive a short copper trace on a board from one chip to another than to cope with the messy (from a signal point of view) environment of running from one chip, to a connector, through a cable or backplane, through another connector, and then to the receiving chip. There is high channel loss and low signal-to-noise ratio (SNR). The above diagram shows the different "reaches":

- ultra-short reach (USR) is inside a package
- very short reach (VSR) is chip to module (for either copper or optical) with a single connector, less than 10cm
- medium reach (MR) goes up to 25cm, either chip to chip or chip to module (with one connector)
- long reach (LR) involves a backplane or copper cables and allows two connectors

I didn't just make up these names for distances. They are defined by the OIF CEI (Optical Internetworking Forum Common Electrical I/O) standards. The 56G standards are mature, but the 112G standards are still being finalized.

What Did We Announce?

It's working! We have a 7nm test chip back and it is working. We are not talking about simulation results or something else hypothetical. Above is a picture of the device in our labs. Unlike those ads on TV for smartphones (and TVs) that put a little "picture simulated" in the small print, this is the real eye-diagram.

By the way, I don't know about this precise piece of test equipment, but searching for the part number online, they appear to be about $30,000 for a refurbished one (plus $500 shipping). At GOMAC last year, I talked to another person, from Keysight, who had driven up to Reno from Santa Rosa. He couldn't fly since he wasn't able to check a piece of equipment worth "seven figures" as baggage. He didn't dare stop for a coffee. That test equipment is expensive! I suppose it is not that surprising since it is a limited market: how many people need a 100+ GHz scope? IC design software is the same. After all, I work for a company with products in that sort of price range, and you don't even get a cool-looking box with knobs and a pretty display (unless we are talking emulation, and those Palladium boxes look great, but with no knobs or display).
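To get a feel for how hostile a long-reach channel is, here is a quick back-of-envelope calculation (my own arithmetic, not from the announcement) of what the >35dB insertion loss figure quoted in the spec list below means:

```python
# Back-of-envelope arithmetic (mine, not from the announcement) showing
# what a 35dB channel insertion loss does to the signal.

loss_db = 35.0

# Insertion loss relates receive/transmit voltage by 10^(-dB/20).
amplitude_ratio = 10 ** (-loss_db / 20)
print(f"amplitude ratio: {amplitude_ratio:.4f}")  # ~0.0178

# And receive/transmit power by 10^(-dB/10).
power_ratio = 10 ** (-loss_db / 10)
print(f"power ratio: {power_ratio:.6f}")          # ~0.000316
```

In other words, less than 2% of the launched amplitude arrives at the receiver, which is why receive-side equalization like the FFE and DFE listed in the specifications is needed to recover the data.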
Technical Specifications

- 7nm silicon, the most popular node for advanced HPC SoCs
- PPA optimized for 112G LR and MR
- Best-in-class >35dB insertion loss capability
- Firmware-controlled adaptive power optimization
- Supports the entire Ethernet range: 10G/25G/50G/100G
- 112G/56G PAM4 and 56G/25G/10G NRZ
- RX FFE + DFE and TX FIR equalization
- Support for N-S and E-W placement
- Fully autonomous startup and adaptation without requiring ASIC intervention
- Integrated BIST capable of producing and checking PRBS (pseudo-random binary sequences); see the sketch after this list
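The BIST in the last bullet generates and checks PRBS patterns. As a hedged sketch of what such a sequence looks like, here is a PRBS7 generator in Python; the function name and structure are my own illustration, and a real SerDes BIST typically runs longer patterns such as PRBS31 in dedicated hardware rather than software.

```python
# A minimal sketch of a PRBS7 generator (my own illustration, not the
# product's BIST implementation). PRBS7 is defined by the polynomial
# x^7 + x^6 + 1 and repeats every 2^7 - 1 = 127 bits.

def prbs7(seed=0x7F):
    """Yield the PRBS7 bit stream from a non-zero 7-bit seed."""
    state = seed & 0x7F
    while True:
        new_bit = ((state >> 6) ^ (state >> 5)) & 1  # XOR of taps 7 and 6
        state = ((state << 1) | new_bit) & 0x7F
        yield new_bit

gen = prbs7()
first_period = [next(gen) for _ in range(127)]
# The checker side runs the same generator and counts mismatches against
# the received bits to measure the bit error rate.
assert first_period == [next(gen) for _ in range(127)]  # period is 127
print(first_period[:16])
```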

Learn More

For more information, see the product page.

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
