
Moore's Law at 50: Are We Planning for Retirement?

There has been a lot of speculation, especially this year with the 50th anniversary of Moore's Law, as to whether it is over. The question is not one of technical feasibility: nobody seriously doubts that we can get to at least 5nm without drastically changing everything we do. Of course it will be nice if EUV arrives in time, but without it we still know how to do lithography. Rather, the question is one of economics. I think we have all seen graphs showing costs going up after 28nm. For Moore's Law to remain on course, those costs must continue to fall when measured per transistor.

The opening keynote of IEDM was by Greg Yeric of ARM Research in Austin, titled "Moore's Law at 50: Are We Planning for Retirement?" One thing that he pointed out is that all the graphs showing Moore's Law going in the wrong direction stem from a single IBS report, and that report is based on a model. There is simply not a lot of publicly available information from the foundries. Intel says it is on track, but its wafer cost is a closely guarded secret and it has a high margin on a lot of what it produces. The large foundries don't have any sort of public price list to read the data off of. One thing that I had not realized, for example, is that the next generation of steppers will be 50% faster than the current generation. Since a lot of the worry about Moore's Law ending is due to lithography costs for multiple patterning, this will make a big difference.

As Greg pointed out, history is littered with prognostications that Moore's Law is about to end, overestimating the importance of early data on new process nodes and underestimating the amount of improvement and yield learning that takes place with volume. Another aspect is that design and NRE (mask) costs have been rising, so between manufacturing economies of scale and amortizing NRE, production volume is a crucial part of any analysis. Greg's graph below shows that cost per transistor looks good at high volume (100M units), but at low volume (0.3M units) the curve is much flatter. His view is that faster mask write times are on the horizon (so cheaper masks) and that improvements to IP and implementation flows in particular can recoup a significant fraction of a node's cost.

The importance of volume is shown even more dramatically in the graph below, which shows the cost at constant area for various volumes. The message from the top two lines going off the chart is that low-volume manufacturing is simply not going to be economical at these advanced nodes. In a sense, Moore's Law for low volume has come to an end, but then semiconductor manufacturing has always been a mass-production process. (By the way, there is an error in the key: the last line should say per node, not per year.)
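To see why volume dominates these curves, it helps to put toy numbers on the NRE amortization. The sketch below is purely illustrative; neither the keynote nor the IBS report gives actual mask or wafer costs, so every figure here is a made-up assumption. It splits per-unit cost into a fixed NRE component spread over the production run and a per-die manufacturing component.

```python
# Toy per-unit cost model: fixed NRE amortized over volume, plus a
# per-die manufacturing cost. All dollar figures are hypothetical.

def unit_cost(nre_dollars, per_die_dollars, volume):
    """Per-unit cost = amortized NRE + per-die manufacturing cost."""
    return nre_dollars / volume + per_die_dollars

NRE = 10_000_000   # assumed design + mask NRE for an advanced node
PER_DIE = 5.0      # assumed manufacturing cost per good die

for volume in (100_000_000, 300_000):   # the 100M and 0.3M cases
    print(f"{volume:>11,} units: ${unit_cost(NRE, PER_DIE, volume):,.2f} per unit")

# 100,000,000 units: $5.10 per unit   -- NRE adds only $0.10
#     300,000 units: $38.33 per unit  -- NRE adds $33.33 and dominates
```

At high volume the per-die term dominates, so shrinking transistors still pays; at low volume the rising NRE of each new node swamps any per-transistor saving, which is exactly why the low-volume lines in these graphs flatten out or go off the chart.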
The attendees at IEDM are mostly designing either single devices (transistors and the like) or small areas of interconnect (such as new vias or new metallization). Greg begged them to consider the system aspect; I suspect most attendees don't know what the system even is. One thing that I'd never thought of explicitly is that device engineers worry about the center of the Gaussian of their transistor's performance. They worry much less about the spread; that's not how they think. But design engineers worry about the spread a lot. Maybe not explicitly, but they care about three standard deviations (sometimes more) above and below the center, and they don't care much about where the center is, since they have to worry about best- and worst-case performance. (Of course, today there are many corners to worry about, so this is an over-simplification.) In his conclusions, one of the things he begged for was that people designing devices worry much more about the standard deviation and not just the mean, as it were.

Next, he moved on to interconnect, examining likely serviceable wirelengths by transistor size across technology generations, as in the graph below. His point is subtle: since we need to put in a lot of buffering to get a signal any distance, cost per transistor is not necessarily even the best metric, because we are wasting so many transistors in a way we would not have to if we had better interconnect. This echoes a point I'm increasingly hearing from places like imec: more investment needs to go into the back-end-of-line (metal) rather than just into better transistors. Memory, too, is challenged by wire scaling.

Finally, he moved to design. Another thing he pointed out is that we can do a much better job at the system level if we have choices, such as low-power, low-leakage, slow, and very fast devices, or metal layers at different cost points. We can build better libraries, implementation tools can do a better job, and we can optimize PPA. He covered much more than this in his whirlwind tour of how system design interacts with the design of transistors, interconnect, memories, and other aspects of the process.

His final conclusion: in the face of the slowing of fundamental scaling, we can expect significant change and more diverse options in the underlying process technology, percolating upward to the circuit, system, and application software levels. This dynamic will increase the value of top-down involvement in co-optimization for emerging applications. Using heterogeneous technology options to more directly target a widening set of application needs can compensate for the slowing of technology scaling, in the process helping to define the best technology paths, but at the same time challenging us to fragment, and possibly dilute, economies of scale.
