Solving the Design to Manufacturing Problems in Milpitas

HOT NEWS: In case you missed it, right at the end of last week, British GPU and CPU (MIPS) maker Imagination was bought by Canyon Bridge, Chinese money but run out of Palo Alto. One of the partners is Ray Bingham, who was CEO of Cadence (and my boss) for much of my last tour of duty at Cadence. Canyon Bridge was also the fund that tried to purchase Lattice, but that was blocked by CFIUS. If you don't know what CFIUS is, or you expected them to approve an acquisition by China, then you don't read Breakfast Bytes attentively enough. Read my post, The Four Ts. We now continue with our previously scheduled programming...

EDPS is the Electronic Design Process Symposium. For years, it was held in a hotel on the beach just north of Monterey. Gary Smith, when he was alive, was apparently one of the most vociferous about keeping it in Monterey and not bringing it up to Silicon Valley. Well, unfortunately Gary Smith is no longer with us, and EDPS took place this year at SEMI's new headquarters in Milpitas.

Talking of Milpitas, when I first came to the US, it was a lot of waste ground and some truck stops. McCarthy Ranch, now a huge shopping center, was still a ranch, where you could take the kids and pick your own peaches. It was such a joke that Sesame Street's unctuous game show host, Guy Smiley, offered "First prize, one week in Milpitas, California. Second prize...two weeks in Milpitas, California." A few years later, in the mid-1980s, it was the city with the fastest-rising median income in the entire US. From being roughly equivalent to Needles, California, it is now the corporate headquarters of GLOBALFOUNDRIES, SEMI, SanDisk (now Western Digital), KLA-Tencor, and probably others. And that's sticking to semiconductors. Plus it was the headquarters of Linear Technology before it was acquired by Analog Devices; Linear, I believe, held the crown for being the most profitable semiconductor company (I can't find the current data, but I believe Analog Devices now holds that crown). Go Milpitas.

EDPS is run largely with volunteers and no money. Somewhere in the dim and distant past a logo was designed, with dolphins to reflect the Monterey location. Well, the logo lives on, but good luck finding a dolphin in Milpitas. There is a Dolphin Technology, which is a semiconductor IP provider, but unfortunately they are in San Jose, not Milpitas.

First Day

I was unable to make it for the first day until the dinner keynote. Since the dinner keynote was Jim Hogan reprising his presentation from SJSU the evening before, about the 4th Industrial Revolution, I'll cover that separately. But here is what you (and I) missed. Actually, I have all the slides, but since they were given to attendees on a thumb drive, and Cadence disables thumb drives on our computers, I couldn't even look at them until I got home after the conference.

The opening keynote was by Antun Domic, who is now the CTO of Synopsys. He talked about 10nm, 7nm (or 70Å, as he called it; maybe we are all going to have to switch again), 5nm, EUV, and 2.5D and 3D integration.
The summary:

- FinFET will last until 7nm; gate-all-around (GAA) is expected from 5nm
- EUV is here and will be in second-generation 7nm...but it is an expensive technology and there are still issues with pellicles, resists, metrology, and mirrors
- Multi-patterning will be back at 5nm, even with EUV
- 3D-IC is very attractive given bandwidth requirements and is being used in many applications, especially high performance: CPU, GPU, TPU, network processors (NPU)...
- Big physical design challenges are on the horizon (1T transistors at 3nm, with 100B placeable instances). Transistors shrink much faster than interconnect, so routability is a big problem...design rules will be insanely complex

The rest of the morning was about design acceleration.

After lunch, to keep people awake, there was another keynote, by Zoë Conway of Cisco, which is using increasing amounts of "More than Moore" technologies: not just 2.5D and 3D integration, but also optics and photonics, MEMS, RF, and power. All of this makes her job, which is to make all this stuff manufacturable in high volume, harder and harder.

One big challenge, which EDPS came back to in the closing panel, is the difference between ATE and system test. If you test components on testers, you have a controlled, temperature-stable environment with a carefully designed test program. Then you put it all on a board or in a rack and test it with all the components interacting, lots of noise, and the temperature varying as parts heat up, depending on the workload. This is a big problem, as you can see from the green "slice" in the chart below, which is an analysis of what causes failures at system test (the ASIC might be designed wrong but with a compatible test, or the board might have a defect, but the biggest cause is a part that passed on the tester, was put into the system, and turned out to be faulty).

Automotive is changing things. Of course, nobody likes it if their server or router goes down in the datacenter, but when you have 100,000 of them in the same building, that is a daily occurrence, and higher levels of the system have to recover from it. But automotive is coming, and that takes all these problems to an even more challenging level.

That was followed by two more sessions, on Driving for Higher Yield and on Accelerating Debug and Validation at Quality. In past years, EDPS has mainly been about EDA and designing the chips. This EDPS was different in that there was a much higher representation from manufacturing people. Yes, there were EDA people there too, but the focus was on how you get good-quality designs out of the EDA flow and into high-volume manufacturing, given the increasing challenges.

Pankaj Mehra Keynote

Pankaj took a look at Data-Centric Computer Architecture. Remember what we used to call datacenters? Computer centers, or maybe compute farms. But now the data is the most important thing. People have noticed for years that you can have "free" processors in your memory subsystem if you could think of something useful to do with them. In fact, he pointed out, a typical solid-state disk (NAND flash drive) already has 7-19 cores inside it. Surely we could do something better with them than just flash control? Since today we can only process data by moving it to the "main" CPU, much of the power is wasted just moving data back and forth. What if the SSD could do some database operations (especially the ones that produce far less data than they started with) inside the drive itself?
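To make that idea concrete, here is a minimal Python sketch of the kind of pushdown Pankaj was describing, contrasting the conventional path (every block travels to the host CPU and most records are then discarded) with a hypothetical near-data device that runs the filter on its own embedded cores. The NearDataDevice class and its pushdown_filter method are invented for illustration only; they are not a real SSD or vendor API.

# Illustrative sketch only: host-side filtering vs. a hypothetical near-data
# "pushdown" interface. NearDataDevice and pushdown_filter are invented names.

from typing import Callable, Iterable, Iterator

Record = dict

def host_side_filter(device_blocks: Iterable[list[Record]],
                     predicate: Callable[[Record], bool]) -> Iterator[Record]:
    """Conventional path: every block crosses the interface to the CPU,
    even though most records are then discarded."""
    for block in device_blocks:          # full blocks transferred to host memory
        for record in block:
            if predicate(record):
                yield record             # only the survivors are actually needed

class NearDataDevice:
    """Hypothetical storage device whose embedded cores can run a filter."""
    def __init__(self, blocks: list[list[Record]]):
        self._blocks = blocks

    def pushdown_filter(self, predicate: Callable[[Record], bool]) -> Iterator[Record]:
        """The predicate runs on the device; only matching records cross the
        interface, which is where the bandwidth and power saving comes from."""
        for block in self._blocks:
            yield from (r for r in block if predicate(r))

if __name__ == "__main__":
    # Ten toy "blocks" of a thousand records each.
    blocks = [[{"id": i, "temp": 20 + (i % 50)} for i in range(j, j + 1000)]
              for j in range(0, 10000, 1000)]
    hot = lambda r: r["temp"] > 65
    # Same answer either way; the difference is how much data moves to the CPU.
    assert list(host_side_filter(blocks, hot)) == list(NearDataDevice(blocks).pushdown_filter(hot))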
Samsung actually built storage-compute devices, but since figuring out how to use them had to be done by hand, it never really worked. As Pankaj said: if it doesn't scale, it fails.

The way we consume compute is through a virtual machine (e.g., Amazon, VMware). Networking is the same way. But for data this is less common, and when virtualization is done, as with Amazon's Elastic Block Store, the level of abstraction is low. So today, for example, in networking you can buy load balancing and make thousands of things look like just one. But data is largely consumed as a block device. If you move higher, then it is really a compute service, and requires the CPU to run a file system or a database. You just end up with Linux-style device-driver access. Even at the raw device level, there are lots of cores, but no abstraction layer that knows how to give work to the devices automatically.

There are also very different types of data. Data is most useful when it is fresh; it is frequently accessed for maybe 72 hours. Then there is archival storage, which is almost never accessed but needs to be kept around just in case. Pankaj didn't use the word, but I've heard this called "cold storage."

Most computation actually happens when you first get the data out of storage, but usually we build systems out of components and discard most of the performance. The NAND die in an SSD can have a terabyte per second of read bandwidth, and then we throw it away and expose only a thousandth of it, since we build systems in a compute-centric way.

It was unclear what the magic bullet is here. Obviously, you want to take compute close to data. But exploiting memory bandwidth requires rethinking memory management, and then you have to rethink everything. Pankaj had some numbers on what ratios look good: 4-16 cores per terabyte. A von Neumann architecture is compute that remembers; we need memory that computes.

His final few bullets, a research program for the next 3-5 years:

- pressure to get even lower power (in particular, an evolving world requires always-online learning algorithms)
- pressure to get even higher performance (ad hoc queries against petabytes of data)
- compilers and runtimes do not even recognize this as a problem yet...but leaders in industry and academia think it is one of the most important ones, e.g., carefully placing matrices, vectors, and records so that calculations can be done without data movement
- then...add memristive logic

The rest of the day

The rest of the day was taken up by two more sessions:

- Machine Learning in Manufacturing and Design (Cadence's David White being one of the speakers)
- EDA and IC Design, Manufacturing and Test

But this post is too long already, so I'll cover those later this week.

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
