One of Alessandra Nardi’s favorite quotes is, “You can’t direct the wind, but you can adjust the sails.” An R&D group director on Cadence’s electrical signoff team, Nardi is inspired by the philosophy behind this German proverb. Nardi, who has a PhD in electrical engineering from UC Berkeley, has certainly adjusted the sails of her engineering career, embracing a diversity of EE disciplines along the way. We sat down recently to discuss her work so far in helping to overcome the obstacles of electrical signoff. Listen in.

Tell me about your educational and professional background.

I’ve always liked math, science, and physics: subjects that are very quantitative. You always have an explanation, and it has to be supported by facts. Like with bugs—you can build a test case to prove or disprove something. I debated between physics and electrical engineering, but engineering is more diverse. I joined Cadence in 2013 after working at Magma. I have a background in power integrity, timing analysis, and library characterization.

What are your key responsibilities at Cadence?

I drive strategic projects that can help improve our current solutions or create differentiation for our products. These improvements can also cross the boundaries of specific products. For example, I’m currently working on improvements to the Effective Current Source Model (ECSM) power model together with the Virtuoso Liberate team in our custom products group.

When it comes to power integrity, what are customers struggling with the most?

Accuracy is the most important thing, because you are the last gate before taping out. What makes it really challenging is the size. If you look at the power grid, the number of nodes is easily two billion. Additionally, SPICE-like waveforms are used to model the current supplies accurately. With these kinds of numbers, runtime and memory can become prohibitive. So capacity and runtime need to be superlative to enable this type of analysis.
Otherwise, if the power integrity tool is not efficient, it can be a bottleneck. Customers tell us they don’t care about power integrity as long as it doesn’t impact timing. But a lot of the engineers who specialize in timing are not power integrity experts. There might be one person on a team who is an expert, so ease of use of the tool is important. So, too, is automation, in terms of automatic fixing capabilities.

How is Cadence addressing these customer challenges via the Voltus IC Power Integrity Solution?

The Voltus solution has a specialized SPICE solver that guarantees the accuracy of the solution. At the same time, it supports distributed processing and multi-threading, which guarantee excellent runtimes with no loss of accuracy. The Voltus solution also supports the diverse set of functionalities that are part of power integrity, such as electromigration analysis, electrostatic discharge analysis, and power-up analysis. The feature set is very rich and addresses customer needs to ensure their power grid is properly designed, via power profiling, vectorless and VCD-based power analysis, effective resistance analysis, and so on.

As chips continue to grow larger and more complex, what are some new timing signoff challenges that design engineers will need to address?

Engineers used to assign a budget for voltage variations, and as long as your grid could support that, things would be fine. Guaranteeing that margin is getting more and more complex as routing resources become limited. So the request we hear more and more often is to interleave power integrity analysis with timing analysis, to relieve some of the pressure on the voltage variation budget and only fix the power grid when it affects timing closure. More and more, there are mixed-signal designs. You cannot ask a digital designer to learn both tools, and you cannot ask an analog designer to learn both tools. How do we make the interface more seamless in our tools?
We are doing more integration in our tools to address this. Package and board design also affects power integrity analysis. These are typically handled by two separate groups—the chip-centric team gets a package model and the package-centric team gets a die model. Connection and integration of these tools needs to be painless and seamless.

What does an EDA vendor need to do to stay on top of emerging signoff challenges?

It is a given that we need to keep innovating the technology for even better capacity and performance, but it is also key that we offer a holistic solution that goes beyond each individual signoff analysis by itself and covers the challenges that cross the boundaries of different domains. Some examples are what I mentioned before: joint power and timing analysis, more automated power-integrity fixing capabilities, seamless integration between digital and mixed-signal solutions, and easy connection between the chip-centric and the package-centric domains. With this in mind, I have found it quite useful over the years to have expertise in different areas: it is easier to connect the dots, understand the overall requirements, and drive the creation of complete solutions and flows.

There’s been much discussion about the dearth of women in technical fields. What advice would you give to a woman who’s interested in pursuing a career in engineering?

The key is to do it if you like it. You need to have fun and enjoy going through the day. If there are challenges, whether you’re a woman, an immigrant, or whatever, try to improve the situation by looking at the things you can control. Diversity can be fruitful if there’s a respectful relationship. Be yourself, listen to others, share what you think.

What, in your opinion, makes an engineering career rewarding?

With the combination of expertise and the tool, you can actually beat the competition—that is rewarding. It’s also a combination of technology and people skills.
What I like about engineering is that you can have some of both: you can have technical discussions, and you can also win with relationships, by building a team and making people happy. I can gain trust by showing we are competent, that we know what we are doing. You build relationships of trust with colleagues, customers, and your competitors.

Christine Young
Q&A: Clearing the Obstacles to Electrical Signoff
The Design that Made ARM
I sat down with Simon Segars, the CEO of ARM, last Friday. As I said yesterday, it is ARM's 25th birthday this week, on Friday if you want the precise date. Although today, of course, we think of even the largest ARM processors as something to embed in an SoC, when the first ARMs were created they were standalone processors taking up the whole die. The ARM1 and ARM2 were just processors. With the ARM3, there was room for a cache. By the time ARM was spun out of Acorn, they were working on the ARM6, a processor and cache intended for the Apple Newton. Unfortunately, the Newton was way ahead of its time and was not a commercial success. The turning point for ARM was the ARM7. Actually, the ARM7TDMI. What do all those letters mean? D was for debug, allowing for JTAG-based debugging. M was for multiplier: it had a fast hardware multiplier. I was for Icebreaker, a sort of on-chip in-circuit emulator that allowed for hardware breakpoints and watchpoints. But the most important letter was the T, which stood for Thumb. All ARM processors were 32-bit, but that meant all instruction fetches were 32 bits wide and every instruction took up 32 bits. This meant that the code size was large. A joint project was kicked off between Nokia, Texas Instruments, and ARM to address this. Nokia had decided that it wanted to use a 32-bit processor in future phones, and TI was its semiconductor supplier. This was an aggressive decision at the time, since most cellphones used 8-bit processors. For example, Nokia's big Scandinavian competitor Ericsson used the Z80. But Nokia reckoned that by moving more of the logic into the processor it would improve overall system efficiency. The code density, however, was a big issue. This motivated the creation of Thumb, a mode in which the processor could execute a limited set of 16-bit instructions. The code density was much higher, so the system required less memory and was smaller.
This was despite the fact that the ARM7TDMI processor itself was bigger than a pure ARM7, since it required the Thumb instruction decode in addition to the normal instruction decode. The lead designer for the ARM7TDMI and its eventual project manager was...Simon Segars. He wanted to show me the original ARM7TDMI testchip, which normally lives in his office, but it is currently on loan to a museum in Cambridge for a 25-years-of-ARM exhibition. The project had about 10 people on the hardware design, and another group working on software. The Thumb instruction set required assembler, compiler, and debugger support, so hardware and software were being developed at the same time. The ARM7TDMI came out in 1995 (for some reason Wikipedia says 1998) and a synthesizable version (per the ARM website) in 1998 (Wikipedia says 2001). My memory is that ARM's history is correct and Wikipedia is wrong (the internet is wrong!). Simon thinks that over 30 billion ARM7TDMI chips have shipped, making it the biggest selling microprocessor of all time, at least in terms of unit volume. It was also the turning point for ARM as a company. The ARM7TDMI became the "standard" microprocessor to put in cellphones, not just at Nokia but at everyone else. Even Ericsson switched from the Z80 to ARM. VLSI Technology's GSM chipsets were ARM7TDMI-based from the beginning. The cellphone industry entered its period of high growth, and ARM found itself sitting on top of the rocket. But it was not just cellphones. The semiconductor companies who licensed the ARM7TDMI and then were not successful in mobile found ways to use it in other industries. Increasingly, the ARM7TDMI was the standard microprocessor for any semiconductor company that didn't already have its own internal microprocessor design team, which was most of them. Then, gradually, over time, ARM became the standard microprocessor for almost everything except Intel-architecture PCs and servers. Of course, ARM has its eye on those servers, too.
Simon told me that the end markets clearly want an alternative, and one that will be around for a long time. ARM has been investing in making building servers both easier and more standard, in particular with the Server Base System Architecture (SBSA). He also told ARM's CIO to get arm.com running on ARM servers and find out what management software and other infrastructure is missing. Lots of semiconductor companies, such as Cavium and Qualcomm, have ARM-based server products available, and there is a lot of evaluation going on. One area where ARM actually has a potential advantage is China, which increasingly wants to favor local suppliers. Did you know that the value of China's semiconductor imports is bigger than the value of its oil imports? Yes, a lot of the semiconductors get re-exported, unlike the oil, but it is still an extraordinary statistic. The growth in servers is not just driven by companies like Google and Facebook but by mobile and the Internet of Things (IoT), which have requirements for devices, cloud back ends, and new network architectures with less latency. All of us are going to generate a lot more data than we do today, and it will all need to be moved around and processed. ARM started with 12 people. It is now about 4,000 people. Although it clearly has a British heritage and is in some sense a British company, only 1,500 of those 4,000 work in the UK (Simon is not one of them; he lives in the US, although I think he pretty much lives on planes). The rest are all over the world. It has been quite an eventful 25 years.
Cheating Tetris
Remember Tetris? We’ve all played it at some point in our lives. You know, the game with falling blocks of different sizes and shapes, where you have to place the incoming blocks in an optimal way to make full use of the available open spaces. Well, once the incoming rate of the falling blocks increases, you inevitably start to place them non-optimally and create “holes” in the build. From a simple computer game to a global 30-year phenomenon, Tetris is loved by people of all ages and cultures, and continues to be one of the most widely recognized and distinctive video game brands. But what if you could cheat?

Figure 1: Tetris game (Source: http://tetris.com/about-tetris/tetris-effect/)

What if you could alter the shape of the incoming blocks? For instance, while a 4-boxed “L”-shaped block drops down, change it into any other 4-boxed shape, allowing you to place the block much more optimally amongst the open spaces. That would significantly lower the chances of creating the “holes,” and you’d score rather high. But that’d be cheating, right? Well, if you want the highest score, you gotta do something different.

Cheating Tetris

An analogy to the Tetris game is the task of submitting jobs to a compute server. In the EDA space of acceleration and emulation, users build (or compile) their verification jobs, submit them to a verification computing platform, cross their fingers, and hope that their jobs get allocated to an available set of computing resources in a timely manner. Well, in this Tetris game of dispatching verification jobs into a queue and allocating hardware resources for the jobs, the game has just gotten a little more interesting. Cadence has advanced that game. With the next-generation Palladium Z1 architecture, incoming jobs of variable sizes can be re-shaped, allowing a more optimal placement in the computing platform. What’s more, the re-shaping task is automated, so you don’t even have to think about how to re-shape the job.
It creates the high score for you!

How Does it Work?

When an acceleration or emulation job is compiled for a set of “domains” on the Palladium Z1 engine, it assumes a certain physical shape. That shape is determined by the domains chosen by the engineer compiling the job. So, say the compiled job was shaped as the purple shape in Figure 2. The Palladium Z1 software can automatically re-shape the compiled job to occupy different domains – several alternative shapes are shown in green. You don’t have to lift a finger to identify which alternative shape can more optimally use the hardware to achieve the highest utilization score. At the end of the day, the payoff can be measured as high utilization of the computing platform, higher than any acceleration/emulation platform in the industry. To learn more, contact your Palladium sales team. We’re standing by.

---------------------------------------------------

Get more details about the Palladium Z1 platform here.
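The Tetris analogy above can be sketched in miniature. The toy model below is entirely hypothetical (Cadence has not published the actual placement algorithm): the platform is a small grid of domains, a compiled job is a set of occupied cells, and "re-shaping" is approximated by sliding the job's footprint over the free domains to enumerate placements that fit.

```python
# Toy sketch of job placement on a grid of emulator "domains".
# This only models translating a fixed footprint; the real Palladium Z1
# software can also change the footprint's shape, which is the point of
# the blog post. All names and numbers here are illustrative.

def placements(job_cells, free_cells, rows, cols):
    """Yield every translated copy of job_cells that fits entirely in free_cells."""
    free = set(free_cells)
    for dr in range(rows):
        for dc in range(cols):
            shifted = {(r + dr, c + dc) for r, c in job_cells}
            if shifted <= free:          # footprint lands only on free domains
                yield shifted

# An L-shaped job occupying four domains, as in the Tetris analogy.
job = {(0, 0), (1, 0), (2, 0), (2, 1)}

# A 4x4 platform where column 0 is already taken by another job.
free = {(r, c) for r in range(4) for c in range(4) if c != 0}

fits = list(placements(job, free, 4, 4))
print(len(fits), "candidate placements")  # -> 4 candidate placements
```

A scheduler built on this idea would score each candidate placement (and each alternative shape) by the utilization it leaves behind, then pick the best, which is the "high score" the post describes.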
50 Gbps Ethernet is on the Way
Here is my report from the most recent IEEE 802.3 standards meeting, which was held in Dallas during the week of November 9. The big news is that work on 50G Ethernet is now about to start. The 25G 802.3by project will soon be ending, and plans are afoot for the team working on 802.3by to start thinking about 50G Ethernet. The 802.3bs 400G project has adopted technology for 50G-per-lane operation, so it follows that 50G single-lane Ethernet PHYs should be created using this technology. The first stage in creating an 802.3 standard is a call for interest (CFI) at an IEEE 802 plenary meeting. This was done at the recent meeting in Dallas, with substantial interest being shown. When the consensus-building presentation was made on the Tuesday night, over 100 people indicated that they would work on a 50G project. As a result, two study groups have been approved to develop suitable objectives for 50G-per-lane Ethernet:

50Gbps Ethernet Over a Single Lane Study Group
Next-generation 100Gbps and 200Gbps Ethernet Study Group

The first study group is expected to lead to a new project (task force) to develop 50G single-lane PHYs, while the second is likely to amend the 802.3bs 400G project’s objectives to include 100G and 200G PHYs based on 50G-per-lane technology. It makes sense to do this work in the 802.3bs task force because that project is already focused on developing PCS and PMA solutions for multi-lane PHYs. Also, adding 100G and 200G PHYs to 802.3bs will make it more relevant, as the initial 400G market is expected to be small. The single-lane 50Gbps consensus-building presentation can be found here. Once a study group completes its work, a task force is formed to write the standard.
The draft standard goes through three cycles of review:

The early draft (usually numbered as draft 0.x or 1.x) is reviewed and “commented” on by the task force that produces it.
The technically complete draft (usually numbered as draft 2.x) is voted on (“balloted”) and commented on by the wider 802.3 “working group”.
Once approved by the 802.3 working group, the draft (usually numbered as draft 3.x) is then balloted and commented on by the even wider 802 “sponsor group”.

In addition, I should report that progress is being made in automotive Ethernet, where there are five relevant projects:

802.3bw, to standardize the 100BASE-T1 (100Mbps) automotive PHY, is now complete. The IEEE-SA Standards Board Standards Review Committee gave final approval to IEEE Std 802.3bw-2015 on October 26.
802.3bp, to standardize the 1000BASE-T1 (gigabit) automotive PHY, is making good progress. The standard is technically complete and will soon go to sponsor group ballot.
802.3bu, 1-Pair Power over Data Lines (PoDL), is now technically complete and has started working group ballot. Dave Dwelley, the task force chair, likes to refer to this as the “poodle” project. He gave a pun-laden tutorial on this project, which can be viewed here.
802.3bv, to create a standard for gigabit Ethernet over Plastic Optical Fibre (POF), is making slower progress and is still in the task force review stage.
802.3br, for frame pre-emption, which is useful for sending control frames at defined time intervals. The standard for this is technically complete and will soon go to sponsor group ballot. Cadence is actively participating in the 802.3br project.

Arthur Marris
November 2015
Voltus-Fi: Faithful Custom and Analog EMIR and Power Analysis
First things first. Voltus and Voltus-Fi are two separate products. They are both used for EMIR analysis: Voltus for digital design and Voltus-Fi for analog design (or custom transistor-level digital) or, in conjunction with Voltus, for mixed-signal designs. EMIR stands for electromigration and IR drop analysis. The "Fi" in Voltus-Fi either stands for "fidelity" as in "Hi-Fi" or it is the Latin word for faithful, as in the US Marines' motto "Semper Fidelis" (always faithful), often abbreviated to "Semper Fi" (your choice). Either way, the point is that the analysis is accurate. The full name is Cadence Voltus-Fi Custom Power Integrity Solution. When current flows through high-resistance metal (and all metal has some resistance), there are two things to worry about. Electromigration is a phenomenon where the electrons flowing through the metal, usually copper these days, literally cause the metal ions to move in the direction of the current, since some of the momentum of the electrons gets transferred. The problem with this is that if there is a narrow neck in the metal, the current density there will be higher. This, in turn, causes metal to move away from the neck, making it even narrower. This is positive feedback: the narrower the neck gets, the more the metal migrates, and the neck gets narrower still. Eventually it can open completely and (probably) will cause a hard failure of the chip. The phenomenon is worse at higher temperatures, so in military and automotive markets it is necessary to be even more cautious. Each process has complex rules for exactly how much current is allowed in each layer, each via, and so on. You would think that this would be symmetric, with current flowing one way through a via having the same limit as current flowing the other way, but modern processes are so complex now that even that is not true. The other effect of a current flowing through resistive metal is that the voltage will drop.
This can cause the voltage, especially for the power supply, to drop so low that it is under the spec voltage for the cells in the design and can cause intermittent failures as a result. Additional analysis is needed for designs where blocks are powered down to make sure that the inrush currents when a block is re-activated don't cause the voltage to sag so much on other parts of the chip that they malfunction. Another aspect of analysis of currents and resistances is the power dissipated. In turn, this contributes to an increase in temperature that can be examined by creating a thermal map, a two-dimensional representation of the SoC with the temperature indicated from blue (cold) to red (hot). FinFETs have additional thermal issues due to self-heating effects. The way that Voltus and Voltus-Fi are linked is through an interface known as PGV for Power Grid View. This is a macro model that Voltus-Fi creates doing a full analysis in the analog world, but that can be made use of by Voltus when doing analysis of the whole chip (or a mixed-signal block) and can also be used by Innovus for ECO. Voltus-Fi can also be used for custom power signoff. This gives SPICE-accurate transistor-level power signoff. It is integrated into the Virtuoso flow with extraction, EMIR, simulation (using Spectre APS, Accelerated Parallel Simulator), visualization, analysis, debug and fix. Plus, as already said above, it can generate the PGV for use in the digital implementation flow. One user of Voltus-Fi is Phison Electronics. Phison is a company based in Taiwan that primarily manufactures controllers for NAND flash memories. It used Voltus-Fi to achieve silicon-proven accuracy on EMIR checks for its most advanced flash controller chips. It reduced its tapeout schedule by eight weeks, improving time to market by 40 percent, while also increasing overall product reliability. But perhaps most important is making sure not to miss problems that might lead to respins. 
As Aw Yong Chee Kong, president at Phison Electronics, put it:

"Using a low-power design methodology and maintaining power-grid integrity have become critical design requirements for signoff, especially at lower process nodes. By incorporating Voltus-Fi into our design flow, Phison engineers have been able to find design weaknesses such as potential voltage drop and electromigration failures, preventing costly silicon re-spins."

Learn more about the Voltus-Fi Custom Power Integrity Solution.
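The two effects described earlier, IR drop and per-segment current limits, can be made concrete with a toy one-dimensional sketch: a power rail modeled as a chain of resistive segments feeding identical cells, with the cumulative voltage drop computed along the rail and each segment's current checked against an electromigration limit. Every number here is invented for illustration; real EMIR signoff uses extracted parasitics, SPICE-accurate simulation, and foundry rules.

```python
# Toy 1-D rail model (all values assumed, purely illustrative).
VDD = 1.0            # supply at the rail's tap point, volts
R_SEG = 0.1          # resistance of each rail segment, ohms
I_CELL = 0.010       # current drawn by each cell, amps
I_EM_LIMIT = 0.045   # hypothetical max allowed current per segment, amps

n = 5
# Segment k carries the current of all cells downstream of it.
seg_current = [(n - k) * I_CELL for k in range(n)]  # [0.05, 0.04, 0.03, 0.02, 0.01]

# Voltage seen at each cell is VDD minus the IR drop accumulated so far.
drop = 0.0
v_cell = []
for i in seg_current:
    drop += i * R_SEG
    v_cell.append(VDD - drop)

print(["%.4f" % v for v in v_cell])
# -> ['0.9950', '0.9910', '0.9880', '0.9860', '0.9850']

# Segments whose current exceeds the assumed EM rule.
em_violations = [k for k, i in enumerate(seg_current) if i > I_EM_LIMIT]
print("EM violations in segments:", em_violations)  # -> [0], the segment nearest the tap
```

Even in this tiny example the two failure modes show up where you would expect: the farthest cell sees the lowest voltage, and the segment nearest the supply tap carries the most current and trips the EM check. A real chip repeats this arithmetic over billions of nodes, which is why solver capacity matters.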
Can You Pass As a Brit? Just Answer 3 Simple Questions
It’s Thanksgiving! Happy Thanksgiving if you are reading this on the day. Cadence is closed, of course. I’m working on blogs for next week (yeah, right). But I thought I’d put out a fun blog. This has nothing to do with EDA or the semiconductor ecosystem or even Cadence. A few years ago I saw an article by a guy who lives in London and has a lot of American friends. They like to think that they have gone native and know England (or maybe the UK, because they probably don’t know the difference) really well. So this guy devised a three-question test for Americans as to whether they could really say that they knew the country as well as the natives. Here they are:

What do the letters LBW stand for?
What happens on November 5th and why?
What is Marmite and do you like it?

Don’t read on immediately; try to answer the questions first. So here are the answers.

LBW: It stands for Leg Before Wicket. It’s from cricket. You’ve heard of cricket, right? It is the summer sport. Sometimes it is described as the British equivalent of baseball, but that would technically be rounders, which is only played by girls in middle school and has roughly the same rules. The ten-second summary of cricket is that the batsman has a bat like in baseball (not the same shape), and where home plate would be there is an ancient wooden contraption, called the wicket; if the ball hits it, it collapses and you are out. There are two wickets, 22 yards apart (one chain in old units), and the equivalent of running around the bases is to run between them, back and forth. You can also be caught out (just like in baseball) and run out (pretty much like failing to make a base). There are a few other weird ones, but they are roughly the equivalent of the infield fly rule. Unlike in baseball, the ball is bounced off the ground in front of the batsman, and spinning the ball can be important (as any baseball pitcher will tell you, too). So the obvious temptation is to block it with your (padded) leg.
So being out LBW means that your leg blocked the ball when otherwise it would have hit the wicket. Leg Before Wicket. It is the most complicated rule in cricket because it has more exceptions than a Congressional tax regulation, and if you really want to know, Wikipedia has more, way more, than you will want to know.

November 5th: It’s fireworks day. We burn bonfires with effigies of a guy. Not a guy, but a Guy. The effigy is actually Guy Fawkes, whose stylized mask is the one used by Anonymous, via V for Vendetta. So why would we do that? America lets off fireworks on July 4th (plus New Year, the start of the baseball season at the ballpark, and so on). Independence Day. So when was the English Independence Day? We did actually have a revolution, known as the Glorious Revolution, and it has been very significant for the changes in the rule of law. In 1688, James II of England (James VII of Scotland; I told you they didn’t know the difference between England and the UK) was overthrown. The Dutch invaded, and William & Mary became monarchs (yes, the only time in UK history a husband and wife have been joint rulers).
Parliament introduced a bill of rights declaring, among a lot of other things, that:

the pretended power to dispense with Acts of Parliament is illegal;
the commission for ecclesiastical causes is illegal;
levying money without the consent of Parliament is illegal;
it is the right of the subject to petition the king, and prosecutions for petitioning are illegal;
maintaining a standing army in peacetime without the consent of Parliament is illegal;
Protestant subjects "may have arms for their defense suitable to their conditions, and allowed by law";
the election of MPs ought to be free;
freedom of speech and debates in Parliament "ought not to be impeached or questioned in any court or place out of Parliament";
excessive bail and fines are not to be required, and "cruel and unusual punishments" are not to be inflicted;
jurors in high treason trials ought to be freeholders;
promises of fines and forfeitures before conviction are illegal;
Parliament ought to be held frequently.

You will note a lot of similarity to the US Constitution and its amendments. Many of these clauses, such as number 6, equivalent to the 2nd Amendment, still exist as freedoms in the US but no longer in Britain. Some phrases are copied verbatim, such as “cruel and unusual punishment”. Number 11, no fines and forfeitures before conviction, would be great in the US, too. It turns out that today the police seize more money without convicting people, through civil forfeiture, than genuine thieves manage to get away with. The similarity is not coincidental. The Federalist Papers and the anonymous people who wrote them (assumed these days to be Madison, Hamilton, and Jay) were very influenced by this bill of rights. William (and Mary) was only king (and queen) for a year, so the revolution didn’t lead directly to today, but it’s still regarded as a year on a par with Magna Carta (800th anniversary this year) as a key event in changing the balance of power between king and people.
Since the most successful revolution we have had didn’t last, we let fireworks off on a failed one:

Remember, remember, the fifth of November,
Gunpowder treason and plot.
I know of no reason that gunpowder treason
Should ever be forgot.

And it hasn’t been forgotten. So what was the gunpowder plot? It was a failed assassination attempt on November 5, 1605, led by Robert Catesby, against King James I of England (and VI of Scotland). He was not the James of the Glorious Revolution; he was the one before. The plan was to blow up the House of Lords during the state opening of England’s Parliament as the prelude to a popular revolt, during which James’s nine-year-old daughter, Princess Elizabeth, would be installed as the Catholic head of state. Guy Fawkes had a lot of military experience and was in charge of the explosives, gunpowder, in the cellars under the Houses of Parliament (not the building you think of today; that was built in the 1800s). But an anonymous letter revealed the plot, and Guy Fawkes was captured. He was tried and executed. They decided to behead him, but since they’d already executed him, they had to dig him up again to do so. And on every November 5th, bonfires are burned all over England and fireworks are set off to commemorate defeating this plot. Although, to be honest, I expect most people just know that fireworks night is November 5th without a clue why. It is getting less like that these days, since fireworks are pretty dangerous (I’m not sure you can even buy them privately any more) and Halloween has become a thing just a week earlier (an American import). But every schoolchild can, I hope, tell you the “Remember, remember, the 5th of November” rhyme.

Marmite: It is a yeast extract that looks like tar. I kid you not. It tastes either wonderful or not, depending on whether you started to eat it as a toddler. Very salty, looks disgusting, but I love it. The Australian equivalent is called Vegemite, but it isn’t as good, IMHO.
The second part of the question is semi-important. No one who has ever tried Marmite for the first time as an adult has ever liked it (I exaggerate, probably, but not by much). In England, the classic way for a child to eat a boiled egg for breakfast is with toast with butter and Marmite, cut into thin strips (called “soldiers”) that you dip in the soft yolk. Then you use your spoon to take out the hard white. My mouth is watering.

The Fourth Question

On to the fourth question for anyone who is so bored on Thanksgiving that they have read this far. What would be the equivalent questions in the US that anyone brought up here would have no problem answering, but that people who have lived here for only a couple of years, yet think they have “gone native”, would struggle to answer? The best “equivalent” questions I could come up with, just going with the same basic format, are:

What do the letters RBI stand for?
What is root beer and do you like it?

No idea for a date question. Bueller? Bueller? The first one is pretty equivalent: a somewhat obscure reference from a sport that is so mainstream that even people who don’t follow it are vaguely aware of it. Like cricket. I think everyone knows what root beer is, but to me, and most people not brought up on it, it tastes disgusting. Too sweet and too bitter at the same time. Barman, bring me a negroni. Oh wait…isn’t that too sweet and too bitter, too? I can’t think of a date, though, that Americans know the significance of but someone who immigrated (and that includes me, so maybe that’s why I can’t think of one) would not. However, maybe there are better questions than just trying to pick American equivalents of the British ones. Good hunting ground would be the stuff that American schoolkids are forced to read. I have read To Kill a Mockingbird (voluntarily), but I have no idea about I Know Why the Caged Bird Sings or what is special about The Color Purple. So if you are really reading this on Thanksgiving, then Happy Thanksgiving.
More likely it is afterwards, in which case I hope you had a wonderful time with your friends and family. My daughter’s boyfriend is a Michelin-starred chef, so I think our food should be more than OK. My daughter was a bar manager and sommelier and now works in the spirits industry. I think the wine will be fine, too.

Trivia fact of the day: mailboxes in the UK display the initials of the monarch at the time they were created. It is, after all, the Royal Mail. So in England anything except the most ancient have E II R. She has been the monarch for longer than anyone. But in Scotland they have just E R, because Elizabeth I (of England) never ruled Scotland. So just as James VI of Scotland was James I of England, Elizabeth II of England is Elizabeth I of Scotland. Go have another glass of wine and talk to your black-sheep uncle; it’s Thanksgiving.

Second trivia fact of the day: I mentioned a chain, 22 yards, the length of a cricket pitch. A furlong (still used in the UK in horse racing and maybe other places) is 1/8 of a mile, or 220 yards. An acre is a chain by a furlong, 4,840 square yards, the amount a horse could supposedly plough in a day.
TSMC 3D. Red and Green Glasses Not Required
I have been taking a look at TSMC's 3D packaging technologies. From numerous presentations at OIP and the Technology Symposiums, I knew that they had two, CoWoS and InFO, and I knew...well, that's about it, to be honest. CoWoS (and CoWoS-XL, with larger interposers) is the older technology, first in production in 2012. It is based on a silicon interposer, typically built in 65nm or a similar non-leading-edge process. The first and probably best-known product to use this technology is the Xilinx Ultrascale 3D FPGAs. The first generation of these used four rectangular dies to make up a large square. The second generation was similar but had a fifth die, built in a different technology, that contained all the high-speed SerDes I/Os. I suspect that for Xilinx, this design was as much a proof of concept as anything else, using a part with such a high retail price (I've heard thousands of dollars) that they didn't really have to care too much about the cost. The other highly publicized design was a module for HiSilicon (Huawei) with two 16nm network processor die and one 28nm die. Back at DAC 2012, Cadence announced jointly with TSMC a complete design flow for CoWoS designs. The good thing about CoWoS is that it is very high performance, you can mix dies from different processes, and the dies can be huge. It is now a mature approach that yields well. The bad thing is that it is not cheap. So it is targeted at high-performance parts for FPGA, HPC, GPU, and networking, where the benefits of high performance and large dies outweigh the costs. CoWoS stands for chip on wafer on substrate.
There are really four steps to its manufacture:
1. Create the silicon interposer wafer, but do not saw it up
2. Bond the individual die to the interposer using flip-chip technology
3. Thin the backside of the interposer to reveal the through-silicon vias (TSVs)
4. Saw the wafer into individual parts and package them, with the balls in the package connecting to the TSVs on the back of the interposer

The newer technology, which will enter volume production next year, is called InFO (which stands for integrated fan-out). This is targeted at mobile and thus is at a consumer price point. For the time being, the focus is on adding DRAM to a logic die. InFO's specs are much more modest than CoWoS's, as you can see in the comparison chart above. The big difference, the "how do they do that" feature, is that there is no interposer. InFO itself has molding and metal between the logic die and the package I/Os. Note that there isn't a separate package; the metal and the molding compound are the package. How they make this all work is TSMC's secret sauce. The metal pitch is 5um, and the lack of substrate doesn't just keep the price down, it keeps the thickness down, which is another care-about since we all like thin phones. The first application of InFO is simply to use the routing and mold compound to make a sort of just-in-time package for a single die. For some applications, once this gets to volume, this approach will presumably be cheaper than a conventional package. The big focus, though, is InFO-PoP, which also has a DRAM die connected by what TSMC calls TIVs, for through-InFO vias. Since smartphone application processors currently have memory in the same package (using wire-bond technology), I wouldn't be at all surprised to see this in phones next year. In the future there will be multi-chip InFO, in which multiple dies can be put side by side (more like CoWoS, but lower performance and lower cost). TSMC calls this InFO_S.
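The trade-off described above, CoWoS for high-performance parts and InFO for cost-sensitive mobile, can be captured in a toy sketch. Everything here is paraphrased from the text; the names and structure are hypothetical illustration, not any TSMC or Cadence API.

```python
# Illustrative only: attributes paraphrased from the article, not specifications.
PACKAGING = {
    "CoWoS": {
        "interposer": "silicon (~65nm)",   # older, in production since 2012
        "relative_cost": "high",
        "targets": {"FPGA", "HPC", "GPU", "networking"},
    },
    "InFO": {
        "interposer": None,                # the molding compound + metal IS the package
        "relative_cost": "low",
        "targets": {"mobile", "consumer"},
    },
}

def suggest_package(market):
    """Pick the technology whose listed target markets include `market`."""
    for name, props in PACKAGING.items():
        if market in props["targets"]:
            return name
    raise ValueError(f"no packaging option listed for {market!r}")
```

For example, `suggest_package("GPU")` picks CoWoS, while `suggest_package("mobile")` picks InFO.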
As I said above, InFO should be in volume production sometime in 2016, but they already have test vehicles. The picture below is a sawn cross-section of an InFO die on a PCB. For years there have been predictions that "next year" will be the year that 3D ICs truly arrive in volume, meaning in millions. The paradox of 3D has been that the cost has been too high for high-volume manufacturing, but without high-volume products driving the learning, the costs will never come down. CoWoS is used in lower volume high-performance markets. The high-volume consumer markets require something that is lower cost and don't need the bleeding-edge performance. There are about 1.5B smartphones shipped per year, so if this technology is used in even just a few models, it will drive the maturity and yield ramp, the costs will come down, and 3D will almost instantly (well, after a decade of work) be a reality. In the design space, the challenge is that InFO straddles Cadence's Allegro PCB tools and its silicon design tools. Today these worlds have largely been separate, apart from a little bit of analysis. There is a disjoint design environment built on two different databases, and the physical and electrical signoff is incomplete. So going forward, this packaging technology is going to pull these two worlds together:
- Interoperable design database
- SoC/InFO co-design
- Integrated physical and electrical signoff

But a lot has already been accomplished. For InFO-PoP (DRAM on logic), the flow is ready, with Allegro+PVS providing in-design DRC, electrical signoff for EMI, co-simulation, IR drop, and ESD. For InFO_S (side-by-side die), the flow should be ready by the end of the year. If you are truly involved in 3D, then you should plan to attend the 3DASIP conference in Redwood City on December 15-17. Cadence's Bill Acito and Brandon Wang are among the speakers in the design tutorial.
Doug Yu of TSMC is presenting Simplified High-Performance Integration Technology which I'm guessing is going to be about InFO.
Q&A: Love of Math Adds Up to Passion for Formal Verification
Exhaustive and comparatively efficient, formal verification is a powerful alternative to other verification methods because it can detect bugs much earlier in the design cycle. Anyone with a love for math, like John Havlicek, can understand the unique advantages of this methodology. Havlicek, who has a B.S. degree and a Ph.D. in mathematics, is a principal product validation engineer at Cadence who spends his days working on simulation-based and formal verification of DDR memory controller and PHY IP. He has played an essential role in popularizing formal verification and is a co-author of SVA: The Power of Assertions in SystemVerilog . I talked with him recently about what led him on the formal path. Listen in. How did you develop your interest in formal verification? I started out trying to be a mathematician; however, it’s really tough to be an academic mathematician. Mathematics is an intellectual art. It’s like being a starving artist among many great artists. I became attracted to formal methods while pursuing doctoral studies in computer science at the University of Texas at Austin. Formal methods are very mathematical; they were one aspect of computer science that really resonated with me. I learned about model checking and temporal logic for defining properties. Then I did an internship at Motorola, working in an internal tools group where we had an internal model checker, an on-the-fly constraint solver, and an assertion language. As the strategy changed from internal tools to purchasing EDA tools, my role shifted to advocating our assertion language for industry standards, working on methodologies, and assisting with deployment and liaison with vendors. I joined Cadence in May 2012. We’re now using for our IP many of the assertion features that I helped to develop as industry standards. What are the challenges of using formal analysis for DDR IP? And the advantages? DDR IP is highly configurable.
Unlike chip projects that have a life span—they tape out and you’re done—the IP is continuously being delivered to customers with various selections of features. The code base is managed with scripting layers—that’s a big challenge because the scripting layer is not anything formal analysis can consume. We can’t actually do formal analysis at the level of the code base of the IP. That’s a huge coverage problem. What I’ve done is to analyze by hand the dependencies and actually find representatives of the distinct versions of the code for the block I’m looking at. There may be hundreds of delivery configurations, but my analysis based on the template variables may say there are half a dozen distinct versions of the block, so I run formal on these. Formal analysis has the ability to stress behaviors of critical blocks much better than simulation can, especially when your simulations are run with larger models. We run simulations with the whole memory controller and PHY IP hooked up together, with various Denali memory models, and bus functional models. It’s a memory subsystem. If you’re trying to exercise corner cases in an arbiter, that’s a heavy simulation platform to try to do that. Formal extracts the block of interest and it can pound that block intensively. You chaired the SystemVerilog Assertions Committee several years ago. What was that experience like? After a few years of working in the SystemVerilog Assertions Committee, I developed a reputation for having a transparent agenda. I prioritized technical merit, listened to criticism, built consensus, and was very successful in getting proposals passed. That led to my election as chair. Chairing comes with the responsibility to manage meetings, track assignments, call votes, and report to the board. I served in that position for several years before passing on the torch. What kinds of hurdles do engineers have to overcome in order to accept formal analysis? 
How did you help increase the popularity of formal analysis? One of the biggest hurdles is learning a new perspective in the way you think about your design and verification. With the old perspective, I’d put something together and run some vectors through it. I’d think about interference cases and run those. When you’re doing formal, you’re not thinking about driving particular vectors through. You need to think about all of the cases that need to be considered. Anything you do not forbid will be exercised. Another hurdle is assertion literacy. You have to write down the expected correct behavior in a way that can be understood by the formal tool—in synthesizable RTL and assertions, not in an object-oriented programming language. We wrote the SVA book to help with all these challenges. If you were not an engineer, what would you be? A mathematical scientist. I do thrive on having a sense that what I’m doing has a positive impact on other people. There is a lot of good that comes from giving something away (such as open standards). Working on SystemVerilog and SystemVerilog Assertions and succeeding in making it a good language that many engineers use is a positive reflection on the value of what I’ve been spending my time on. When you're not verifying DDR IP, how do you like to spend your free time? I love outdoor activities. Hiking and backpacking are longtime favorites, but recently I've been trying some new stuff, like skydiving, scuba diving, and open-water swimming. (In the photo with this post, John is pictured at San Francisco's Aquatic Park, ready for a swim.)

Christine Young

Additional Formal Verification Posts:
- Tracking Progress on Formal Testbenches at ARM Austin
- Imagination Dispels 10 Myths About Formal Verification
- Broadcom Design and Verification Engineer: My First 100 Days in Formal Land
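"Anything you do not forbid will be exercised" is the essence of exhaustive analysis. As a loose illustration (plain Python, not any Cadence tool or SVA), here is a breadth-first reachability check that visits every state of a toy design and reports the first one violating a safety property, a guarantee that no finite set of hand-picked simulation vectors provides.

```python
from collections import deque

def reachable_violations(initial, step, bad):
    """Breadth-first walk of every reachable state; return the first
    state that violates the safety property `bad`, or None if none exists."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if bad(state):
            return state
        for nxt in step(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

# Toy "design": a counter meant to saturate at 2, with an off-by-one bug
# that lets it reach 3. Exhaustive search finds the bug unconditionally.
def buggy_step(s):
    return [min(s + 1, 3)]   # bug: should be min(s + 1, 2)

violation = reachable_violations(0, buggy_step, bad=lambda s: s > 2)  # -> 3
```

With the corrected step function (`min(s + 1, 2)`), the same search proves no violating state is reachable, which is exactly the kind of proof simulation cannot give.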
Take Tighter Control Over Your Shape Degassing Patterns with Cadence 16.6 Allegro Package Designer and SiP Layout
With metal density and balancing requirements getting stricter with every year that passes, how you perforate the plane shapes of your designs needs to adapt. Whether it is a new hole shape that allows for a more consistent pattern fill across the layer, multiple passes with progressively smaller holes to achieve that optimal balancing of metal, or perhaps a complex interplay of holes on neighboring layers in your cross section, the Cadence IC package layout tools continue to evolve to make sure your needs are covered. To learn more about the new degassing options available with the most recent 16.6 software ISR releases, and to understand what is still to come in the 17.2 release that will soon be available, read on!

New Degassing Hole Shapes

Typically, a degassing pattern uses regular holes that are either square or octagonal in shape. These might be rotated for more of a diamond-pattern look, but they are the most common holes in use today. There are other options available to you, however:
- A circular degassing hole will result in a pattern with cutouts of similar size and shape to the voids around your circular via and pin pads on the same layer, allowing for consistency of pattern
- A rectangular hole will allow for more directional control of the pattern, with the holes facing in the north-south or east-west direction (or even at 45 degrees, toward the corners of the substrate) to feed escaping gases along the desired paths
- Oblong holes, on the other hand, have the same benefits as rectangular voids do, but without any sharp, 90-degree angles in the outline of the holes. And, with the rounded corners, they can get just that small amount closer to surrounding geometries.
- Hexagon holes, the newest addition, can be placed in a staggered pattern, allowing for straight paths of metal between the holes: optimal paths for current flow in power nets

So, before you go straight to your standard hole setup, ask yourself this question: does one of the alternative hole shapes offer advantages over the shape I am using today?

Inter-Layer Degassing Hole Array Interaction

When laying out your degassing patterns, you likely want to make sure that they don’t overlap with holes on the layers above and below. Not just for manufacturing reasons, this time, but also for signal integrity and interference. How do you best accomplish this? There are a few options, depending on your desired interplay of holes:
- Partial overlap – Look at using a custom origin point for your degassing pattern, rather than one of the corners of the shape. Using the same origin point for all the shapes in your design allows you to set an offset from that location that is unique for each layer. Stagger the offset by half a void pitch on each layer for optimal offsets.
- No overlap – Set the adjacent layer void clearance to no overlap, and the tool will automatically suppress any degassing holes that would overlap a hole in a shape on the layer above or below. When using this setting, we recommend starting from one layer in the design and working either up or down to establish your degassing patterns based on the specific layer-layer void clearance rules you want.
- Void to shape overlap – If you only want to create degassing holes when they will be fully inside a shape on an adjacent metal layer, you can accomplish this as well. Set the adjacent layer void clearance to the inside shape option. With this, holes are suppressed if they are not inside a shape on the layer above/below, meaning that degassing holes will never overlap high-speed traces or vias, for instance.

Multiple Degassing Hole Passes in a Single Shape

So, you’ve degassed your shape.
But your metal density is still too high for your needs. What do you do? Why, you run additional passes with smaller degassing holes! These will fit smaller holes into areas where previous passes could not squeeze in a hole meeting your clearance rules. Referring back to the picture in the last section, you’ll see the “Clear existing voids” option near the top of the form. Turn this off, and the configuration on the form will be applied without first clearing the existing degassing holes. To see this option on the form in your tool, make sure you are on the latest 16.6 ISR release and that you have enabled the “degas_multi_pass_beta” option in your user preferences. This is a standard feature in 17.2, but for customers needing access right away for their substrates, we’ve given you early access in 16.6. Give us your feedback if you choose to try it out! Below is a simple example showing a two-pass degassing pattern. Because the metal density was still slightly too high with just the large degassing holes, the second pass of smaller holes was added, bringing the layer density into spec. A note, however: only the first, or primary, degassing array settings are stored on the shape. These form the core degassing holes, with additional passes being performed to meet strict metal balancing requirements.

When to Refresh Your Degassing Holes

While you can degas your shapes at any point during your design flow, when you do it, and when you update the shapes, can save you significant time. Degassing a shape can add hundreds to thousands of holes to the shape. This means an increase in the database size, more complex shape outlines to consider when doing DRC updates, and a need to refresh the pattern when you make changes such as adding via connections to the shape or voiding around another net as you route through the shape. Consider degassing your shapes as late in the flow as possible. This will maximize the database performance.
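The arithmetic behind "bringing the layer density into spec" with a second pass is simple to sketch. All numbers below are invented for illustration; a real flow measures areas from the actual shape geometry.

```python
def metal_density(shape_area, hole_areas):
    """Fraction of the shape still covered by metal after degassing."""
    return (shape_area - sum(hole_areas)) / shape_area

# First pass: large holes only; density is still above an assumed 70% ceiling,
# so a second pass of smaller holes brings it into spec (all numbers made up).
shape_area = 10_000.0              # arbitrary area units
first_pass = [25.0] * 100          # 2,500 units removed -> 75% metal
second_pass = [4.0] * 150          # 600 more removed    -> 69% metal

after_one = metal_density(shape_area, first_pass)                 # 0.75
after_two = metal_density(shape_area, first_pass + second_pass)   # 0.69
```

The second pass drops the density below the assumed ceiling without disturbing the primary hole array, mirroring the multi-pass behavior described above.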
An ideal time is just before you start metal balancing and density analysis, or before you begin SI and PI characterization. And when you DO make changes to a shape, or just before you go to manufacturing, always make sure to update the degassing holes on all your shapes. A button on the degassing form will do this for you with a single press, remembering the custom settings for each of the shapes you have degassed.

Still to Come…

There are more exciting improvements on their way in the 17.2 release. A new “Advanced WLP” option for SiP Layout provides tools specifically aimed at these substrates, with their very thin metal layers and strict metal density and balancing requirements. A metal density scan tool will allow you to see areas of the design where there is too much, or too little, coverage. From here, you can decide whether you need to more aggressively degas large metal areas or, perhaps, add metal with thieving patterns, cline fills, or other strategies. If you have a particular go-to solution for bringing your metal density into spec when it is too high or too low, let us know – we might be able to automate that for you in an upcoming release of the tool! Do you have other ideas? Talk to your Cadence customer support representative or get in touch with us by commenting on this blog or contacting the authors. We’d love to hear from you on this topic or, indeed, on any challenges you face when designing your package substrates!

Bill Acito Jr.
Virtuoso: Advance to 10nm, If You Pass Go Collect $200
There are two major discontinuities in the last couple of process nodes—FinFETs and multiple patterning—which have changed a lot of the rules for custom design (which doesn't just mean analog, but also standard-cell design and other digital IP). The digital designer is largely insulated from these. He or she never looks inside the standard cells and so never sees a transistor. It matters little to them whether the transistor is planar or FinFET. The double patterning is largely taken care of by the router, which has to be aware of the coloring for routes, often for contacts/vias, and perhaps for cut masks and other artifacts that don't really even appear explicitly in the layout. Double patterning is often referred to in terms of colors (red and green for double patterning), and the process of assigning a piece of layout (whether automatically or manually) to a mask is called coloring. The term actually comes from mathematics: this type of problem is known as graph coloring, and so automatically assigning polygons to masks is a coloring problem (as are many other processes, such as register assignment in a software compiler). For the custom designer, these two changes make a huge difference:
- Transistor width and length cannot be varied—the choice is how many transistors, and only a whole number of transistors is allowed
- Since FinFETs have the gate wrapped around the fin, and have high drive current, electrical and thermal effects can't just be left until signoff any more
- The design rules are extremely complex, with dummy structures and a highly constrained layout style, layout-dependent effects, voltage-dependent design rules, and more
- For critical nets, such as differential pairs, manual coloring is allowed, but the foundries only allow a certain percentage (around 20%) of nets to be manually colored, since badly colored designs can impact yield

Cadence released Virtuoso Advanced Node in 2013 to provide full support for designs in the 14-22nm range.
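The graph-coloring connection can be made concrete with a small sketch (ordinary Python, not Virtuoso functionality): polygons whose spacing is too tight to share a mask become edges in a conflict graph, and double patterning succeeds exactly when that graph is two-colorable.

```python
from collections import deque

def two_color(conflicts):
    """Assign each polygon to one of two masks (0 = 'red', 1 = 'green').
    `conflicts` maps each polygon to the neighbours too close to share
    its mask. Returns the coloring, or None if an odd cycle makes
    two-mask decomposition impossible."""
    color = {}
    for start in conflicts:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in conflicts[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None   # odd cycle: needs a layout fix or triple patterning
    return color

# Three polygons in a row: A-B and B-C too close, A-C far enough apart.
layout = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
masks = two_color(layout)   # A and C share a mask, B gets the other
```

A triangle of mutual conflicts, by contrast, returns None: no two-mask assignment exists, which is the situation that forces a spacing fix or a third mask.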
The new release announced today provides full support for 10nm, with some support for 7nm features, too. Cadence has been working with advanced customers and foundries on 10nm for some time, so the new features have already been matured on some of the most demanding designs. All the first shuttle customers at 10nm use Virtuoso. At 10nm, there are a lot more changes. Everything gets more complex. There is a structured row-based methodology that is needed so place and route can work. EM constraints can be severe, with skinny wires being driven with high current. The design rules are even harder for a layout designer to completely comprehend, with an increase in layout-dependent effects (LDE) and density gradient effects (DGE). So what are the new features? Jeremiah Cessna went over it with me in an information dump last week. Drinking and firehoses come to mind. These new features are not necessarily limited to 10nm; when appropriate, they are available at 14/16nm (and probably earlier processes if you insist).

Multi-Patterning and Color-Aware Layout

The first big change at 10nm is that the foundries require the design to be fully colored before it is taped out. This can require next-generation double patterning, not just LELE (litho-etch-litho-etch), but also SADP (self-aligned double patterning), via-cut double patterning, triple, quadruple, and even quintuple patterning. Support includes ways to propagate coloring (so every polygon does not need to be colored), a correct-by-construction SADP flow, the ability to lock colors, and so on. Designs can be precolored in the schematic to make sure that critical nets, such as differential pairs, can be assigned to the same mask. An alternative approach is to create a color grid. This approach is especially important when there are wide wires, since the spacing of wires depends on the width, and so a common pattern is to group all the wide wires in the middle, with the narrow wires outside.
The color grid reflects this, so it is not an even grid. Objects inherit the colors of the tracks used to create and edit the wires.

Module Generator Device Array Flow

The traditional custom flow, still used for older nodes, is that the designer would create the schematic and use simulation to adjust it until it seemed good. It would then be shipped over the wall (or the ocean) to the layout designer. Once the layout was done, which might take a week, the parasitics would be extracted, the design could be resimulated, and the process could iterate if required until it converged. This method got less effective with every node, but by 10nm, drawing a transistor netlist and simulating it without layout is a waste of time, since it is too inaccurate to give any useful feedback. That raises the issue of what the flow should be, since it is very expensive to do layout just to find out, when it is simulated, that the wrong approach was taken and a new schematic and a new layout are required. Over the years there have been many attempts to automate custom layout (Jeremiah Cessna was at NeoLinear, for example, acquired by Cadence in 2004), and all failed to truly impact the methodology. The reason was that there was too much flexibility, and inputting enough constraints to deal with that was more work than doing the layout manually. Suddenly, though, the highly restricted design style that FinFETs and lithography impose makes higher levels of automation possible. Using ModGens, it is possible to create large multi-transistor structures complete with well-estimated parasitics. A good analogy is that these are super-Pcells that generate higher level objects such as differential pairs, cascodes, stacks of series devices, HiR resistors with current mirrors, varactors, decap, and more to come. Of course customers can build their own generators too.
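As a flavor of what such a generator does (a hypothetical sketch, not actual ModGen code), here is a tiny function that lays out a matched differential pair in a common-centroid pattern, the classic interleaving used so that process gradients affect both devices equally.

```python
def common_centroid(rows, cols):
    """Checkerboard A/B interleaving of two matched devices.
    Each cell names the device whose unit transistor goes there; for a
    2x2 array this gives the textbook cross-coupled A-B / B-A layout."""
    return [["A" if (r + c) % 2 == 0 else "B" for c in range(cols)]
            for r in range(rows)]

pattern = common_centroid(2, 2)   # [['A', 'B'], ['B', 'A']]
```

For any even-sized array the two devices get the same number of unit transistors and share a common centroid, which is the matching property a differential-pair generator is after.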
Using ModGen-based flows (as opposed to the old schematic-layout loop) greatly reduces the number of design iterations and can improve designer productivity by up to 25X.

Electrically Aware Design

The electrical rules are increasingly complex, and it is also increasingly easy to violate them. With electrically aware design (EAD), the rules are checked in real time as the layout is being done. For example, if a line gets too long and violates an EM current rule, then it switches to red. If R and C matching is specified, then this is checked in real time too. Normally, ERC checks require a design to be LVS clean, but during layout this obviously cannot be guaranteed. The electrically aware rule checking works on non-LVS-clean designs.

10nm Custom Routing

For 10nm, custom routing supports the new design rules and minimizes coloring errors that can otherwise be widespread at 10nm.

In-Design Physical Verification

In-design physical verification (iPVS) basically does what it says on the can. It enables layout engineers to instantaneously detect and fix errors as designs are being implemented, as opposed to waiting for a large amount of design to be done before running a DRC check. This feature alone seems to improve designer productivity by 15%.
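At its core, the EM check described above compares current density against a limit. A crude sketch follows; the numbers and the limit are made up for illustration, and real rules are per-layer, temperature-dependent, and far more involved.

```python
def em_ok(current_ma, width_um, thickness_um, j_max=10.0):
    """True if the wire's current density (mA per square micron of
    cross-section) stays under the assumed limit j_max."""
    return current_ma / (width_um * thickness_um) <= j_max

# A skinny wire driven with the same current fails where a wider one passes,
# which is exactly the "skinny wires with high current" problem at 10nm.
skinny_ok = em_ok(0.5, width_um=0.05, thickness_um=0.1)  # j = 100 mA/um^2
wide_ok   = em_ok(0.5, width_um=1.0,  thickness_um=0.1)  # j = 5 mA/um^2
```

An EAD-style tool runs this kind of test continuously as segments are drawn, flagging (turning red) any wire whose length and width put it over the limit.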
Will USB Type-C Connector Replace the 3.5mm Audio Jack?
In the past few days, there have been many posts on the Internet about Apple planning to remove the 3.5mm audio jack from the upcoming iPhone 7 to create the slimmest iPhone in history. Given their multiple attempts at this sort of thing in the past, it’s perfectly understandable, and for sure the existing Lightning connector is capable of providing this functionality. From users’ perspective, however, it obviously raises a lot of controversy, as the current audio jack is probably the most popular analog connection in the world, and removing it would cause lots of compatibility issues, not least with the very expensive headphones and headsets people currently use with their iDevices. Now, if we take a broader view of what’s going on in the connector standardization space, it’s impossible to ignore the big revolution going on in USB right now, with the Type-C connector taking the market by storm. It’s officially called the fastest-adopted USB specification to date, and USB itself is the most popular serial interface there is, already shipping in billions of devices a year. USB Type-C is being applied not only to USB data transfer, as the legacy connectors were, but also to providing power (through the Power Delivery 2.0 specification) and high-resolution display (in the DisplayPort and MHL Alternate Modes). People in the know are aware of the part of the USB Type-C specification that talks about the Type-C Audio accessory mode. So far, both USB-IF and everyone else have stayed silent on the topic, for various reasons, but maybe this is how the dots should actually be connected. Wouldn’t it make more sense for Apple, and other vendors too, to standardize audio over USB Type-C and provide a better user experience (battery-less active noise cancelling comes to mind immediately)? Compared to the Lightning plug, the Type-C connector is bulkier, so by going this route Apple would not be able to top the current iPhone 6S slimness, but is that really so important to the end user?
Who would not want the iPhone to stay at its current thickness but add faster charging and much faster USB transfer speeds (Lightning is USB 2.0 only)? I like the idea of the USB Type-C connector a lot, and its implementation in future iPhones would pave the way for true widespread adoption of the standard in the consumer and automotive markets.
Whiteboard Wednesdays—DUT Verification with Cadence VIP
Why Do Layout Designers Say "Stream Out"?
For the same reason we "hang up" our phones. When a layout designer saves a design, they often say "stream out", whereas in most software, such as Word or PowerPoint, this is usually simply called saving the file. As an aside, it is funny that the icon for saving a file is a floppy disk, which anyone aged under about 20 has probably never seen. Plus, look at your phone, at the icon to make a call. When did a phone last look like that? It is not just layout designers who say "stream out". As I said yesterday in the blog about Virtuoso Advanced Node, I met with Jeremiah Cessna and he says it all the time too. In the same way as "hang up", the "stream out" terminology is archaic and comes from a long time ago. Way back before Cadence existed, before even SDA and ECAD (the ingredients of Cadence) existed, EDA consisted of layout and simulation. The biggest of the layout companies was Calma, which produced a system called the Graphical Design System. The hardware was a re-badged Data General minicomputer (speaking of which, if you have not read it, read Tracy Kidder's The Soul of a New Machine about developing its successor). The system was called GDSII (pronounced G-D-S-two), was introduced in 1978, and became widespread. An earlier system, just called GDS, dated to 1971; I think it was mainframe-based, but even Wikipedia has a blank placeholder for it, and it was before my time, so I cannot add any color. The system had a disk drive, but the general mode of use was to keep designs on magnetic tape. The layout designer would load her (they were mostly women) design from the magnetic tape onto the disk, do her work, and then save the design back to tape. The format that the design was stored in was known as GDSII stream format, and so saving the design back to tape was called "stream out", probably the menu wording, too. The primary reason for doing things this way was that these systems pre-dated cheap local area networks.
In a large semiconductor company, there would be several GDSII systems, but the layout designers did not want to have to wait for a particular one to be available, as would be the case if they left the data on its disk. The disk packs in that era were exchangeable, but the packs themselves were too expensive to use in that way. You have probably heard of GDSII stream format, because it became the dominant way that layout information was moved between EDA design tools and is still widely used today, almost 40 years later. It became the industry standard, but in fact it is not officially an open standard, but a proprietary one. It was developed by Calma and then moved to GE (when they acquired the company that had acquired Calma), then to Valid (when they acquired the Calma division from GE), and then to Cadence when they acquired Valid in 1991. In the early days, Calma really did consider it proprietary. One of my first jobs when I arrived in the US in 1982 was to write a program to read it. At VLSI Technology, we had been given a microcontroller design in GDSII format, and although we had access to a GDSII system (so we could load it up), that didn't give us a way to get the design into our own design environment, which used CIF (Caltech Intermediate Format) for layout. The format is binary, so simply looking at what was on the tape didn't help me much. However, eventually we discovered that there was a file on each GDSII system that described the format (it was just two pages long, complete with plenty of copyright warnings). Once we got someone to print it out, it only took about a day to have the microcontroller in VLSItools. To this day I'm not sure of the legality of what we did (but I work for Cadence and we own the standard, so I think I'm safe now!). The GDSII stream format and the term "stream out" have survived over the years, even though the original computers, and magnetic tape in general, are obsolete.
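For the curious, the format is simple by modern standards: a GDSII stream is a sequence of records, each starting with a two-byte big-endian length (which counts the four header bytes), a one-byte record type, and a one-byte data type. A minimal walker, fed a hand-built two-record stream, looks something like this (a sketch, not a complete reader; only a few of the record type codes are listed):

```python
import struct

RECORD_NAMES = {0x00: "HEADER", 0x01: "BGNLIB", 0x02: "LIBNAME",
                0x04: "ENDLIB", 0x05: "BGNSTR", 0x06: "STRNAME",
                0x07: "ENDSTR", 0x08: "BOUNDARY", 0x0D: "LAYER",
                0x10: "XY", 0x11: "ENDEL"}

def records(data):
    """Yield (record_name, payload) for each record in a GDSII stream."""
    pos = 0
    while pos + 4 <= len(data):
        # 2-byte length (includes this 4-byte header), record type, data type
        length, rtype, _dtype = struct.unpack_from(">HBB", data, pos)
        if length < 4:
            break   # zero padding at the end of a tape block
        yield RECORD_NAMES.get(rtype, hex(rtype)), data[pos + 4:pos + length]
        pos += length

# Hand-built two-record stream: HEADER (int16 version 600) then ENDLIB.
stream = struct.pack(">HBBh", 6, 0x00, 0x02, 600) + struct.pack(">HBB", 4, 0x04, 0x00)
parsed = list(records(stream))   # [('HEADER', b'\x02X'), ('ENDLIB', b'')]
```

A real reader then decodes each payload according to the data type byte (int16, int32, 8-byte real, ASCII, and so on), which is roughly the two pages of documentation we found on that Calma system.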
Only recently has the OASIS (Open Artwork System Interchange Standard) format started to get traction. We have other terms that live on, such as "hanging up" the phone, which I mentioned at the start. It is a long time since anyone actually hung the earpiece on a hook to disconnect the call. In the semiconductor world, we have another term like this: "tapeout".

Back in the day of Calma systems, when a design needed to be transferred into manufacturing, a special program needed to be run to fracture the design into rectangles so that a specialized pattern generator (PG) machine could make the mask. The design would be transferred, after fracturing, on a tape. So taping the design out really did mean writing a tape, the PG tape, and getting it to the mask shop (probably by courier, as this was pre-FedEx). Later, masks were made by e-beam, which used a raster scan instead of a list of fractured polygons, in a format called MEBES (manufacturing electron beam exposure system). But it was still a tape in the early days. Then, eventually, MEBES files were transferred electronically. But even though it is decades since anyone truly released a design to manufacturing with a tape, we still say "tapeout". Old standards live for a long time.

Imagine the guy (I know, but look at the date, it was a man for sure) who designed the cigar lighter in automobiles in 1956. He would be amazed that there is at least one in every car 60 years later, and even in some planes. But he would be even more surprised that computers thousands of times more powerful than the ones he'd maybe seen on TV, which filled large rooms, would fit in your pocket...but still use his design to recharge the battery. It is actually a horrible standard for that, though, since the plug has a pin in the middle that is trying to push the plug out of the outlet, and it is only held in by friction. But my favorite old standard is this: why do we get on planes from the left-hand side?
The reason is that the first planes were seaplanes, so of course you got on from the left side, because that is the side you board ships: the "port" side, which faced the quay when the ship was in port and where the gangplank would go. The other side, the right side, is called the "starboard" side. That is a corruption of "steer board", the steering oar that was used before the invention of the rudder. This was put on the right side since that is most natural for a right-handed steersman. The port side was put against the quay so that the ship could be maneuvered into position without the steering oar being blocked. So 2000 years after the invention of the stern-post rudder, we get onto planes from the left so that the steering oar isn't obstructed by the airline terminal. Now that's an old standard.
What's Good About PCB Allegro Rules Developer and Checker? 16.6 Has It!
You can now leverage the Allegro Rules Developer and Checker in the 16.6-2015 release. The Allegro Rules Developer and Checker allows you to develop custom fabrication and assembly rules to extend the capabilities provided by Allegro PCB Designer and the Manufacturing Option. This tool provides a relational geometric verification language designed specifically for creating rules that are proprietary and custom to an original equipment manufacturer (OEM). The rules can be viewed and executed from the Allegro Constraint Manager, making it a single source for all design rule checks (DRCs) within a PCB. There are two excellent videos that describe the details:

Allegro Relational Rules Checker: Running RAVEL Rules from Constraint Manager
Allegro Relational Rules Checker: How to Run DFM RAVEL Rules Through GUI

Please share your experiences using this technology.

Jerry “GenPart” Grzenia
Cadence Innovus Implementation System is Available to Academia
To support academia using the latest industry-standard tools, the Innovus™ Implementation System has been made available to universities. If you want to use the Innovus Implementation System, please contact the Cadence university partner in your region or write an email to academicnetwork@cadence.com. Innovus platform-based Rapid Adoption Kits and iLS courses are also available through Cadence university partners.

The new Cadence® Innovus Implementation System meets designers’ needs by delivering a typical 10% to 20% PPA advantage along with an up to 10X TAT and capacity gain. Providing the industry’s first massively parallel solution, the system can effectively handle blocks as large as 10 million instances or more.

Massively parallel architectures handle huge designs and take advantage of multi-threading on multi-core workstations, as well as distributed processing over networks of computers
Its new GigaPlace solver-based placement technology is slack-driven and topology-, pin access-, and color-aware to provide optimal pipeline placement, wirelength, utilization, and PPA
An advanced, multi-threaded, layer-aware timing- and power-driven optimization engine reduces dynamic and leakage power
A unique concurrent clock and datapath optimization engine enhances cross-corner variability and boosts performance with reduced power
Next-generation slack-driven routing with track-aware timing optimization addresses signal integrity early on and improves post-route correlation
Full-flow multi-objective technology makes concurrent electrical and physical optimization possible

Since multiple production-proven signoff engines are integrated into the Innovus Implementation System, it was essential to have a simplified user and scripting interface. The system fosters usability by simplifying command naming and aligning common implementation methods across other Cadence digital and signoff tools.
The processes of design initialization, database access, command consistency, and metric collection have all been streamlined and simplified. In addition, updated and shared methods have been added to run, define, and deploy reference flows. These updated interfaces and reference flows increase productivity by delivering a familiar interface across core implementation and signoff products. Learn more about the Innovus Implementation System here.

Anton Klotz
Cadence Academic Network - The Next Generation
“University students around the world are using Cadence technology to learn and develop their talents. The future of EDA is bright… and very friendly!” – Patrick Haspel, Senior Principal Program Manager

Innovation starts with our people—and today’s students are the next generation of technology innovators. Dr. Patrick Haspel, senior principal program manager, currently runs one of Cadence’s most interesting and innovative programs, the Cadence® Academic Network. Through the Cadence Academic Network, we provide future engineers with access to our cutting-edge software as we collaborate with instructors to develop a curriculum that applies design theories to real-world engineering challenges. Cadence donates software worth over a million dollars each year to students at close to 1,000 universities around the world. Cadence is committed to creating a sustainable pool of EDA engineering talent by preparing students with the tools and training they need to thrive in the competitive semiconductor and electronics industry.

Students who use Cadence technology during their time at universities get first-hand experience with our EDA tools and often look to join Cadence after graduation or encourage their future employers to become Cadence customers. Last year, 12% of our new hires were recent college graduates, and we had another 100+ interns around the world. Our robust community of interns and recent graduates encourages our university talent to grow their skills and make a difference at Cadence. Every year, we hold our annual Intern Showcase event where interns share their projects with Cadence Fellows, Distinguished Engineers, executives, and employees. We celebrate their contributions with fun parties and mixers, as well as expose them to the One Cadence—One Team culture.

Fotios Ntampitzias, a recent college graduate on our DSG team, was asked to describe his experience at Cadence.
He said, “There is no typical day at Cadence, mainly because every day there is a different challenge! Each design is different, each customer is different, which creates a very diverse workload. Plus, I can really feel proud of my achievements every day I go back home, as I know that I am actively helping people lead better lives thanks to our work.”

Pedro Coke, an intern at Cadence prior to being hired full-time at our Munich location, was first exposed to Cadence EDA tools at the University of Porto in Portugal, where he was a student. Now, after one year at Cadence, he says, “I’m still learning something new every day.”

Patrick’s idea for the Cadence Academic Network came while he was a PhD student, when he noticed that the technical industry was not capitalizing on the opportunity to work with top universities. This was the start of a unique relationship between Cadence and university professors that has expanded worldwide under Patrick’s leadership. The culture at Cadence has enabled Patrick to take his dream and turn it into a reality. Employees at all levels and in all areas of the company are encouraged to innovate, take initiative, collaborate, and make a difference. Thanks to Patrick, the Cadence Academic Network has become a huge global success, helping students around the world expand their skills and prepare to drive the future of innovation.

Anton Klotz
Front-end Design Summit
Wednesday was the annual Front-end Design Summit at Cadence headquarters. This focuses on the digital front-end design tools, which means synthesis, test, and power. Almost any semiconductor seminar has power as one of the main themes. Five or ten years ago everything was about timing, but now power signoff is the bigger issue: silicon is so fast, and implementation tools so good, that timing and area are rarely the gating issue to tapeout. Not that we wouldn't all like more performance and smaller area, but more performance mostly only comes with more power anyway.

Paul Cunningham, Cadence's VP of R&D for front-end design (the second English Paul who studied computer science at Cambridge University, me being the first), gave an overview of the whole product line and how it has evolved over the last couple of years. Cadence has transformed its tools with common engines and an architecture that scales to use large numbers of cores to cope with huge designs. These are the tools whose names end with the letters US (except for Joules, where somehow an LE slipped in between the U and the S). This combination means that very large designs can be distributed over lots of compute power, while the common engines ensure very high accuracy.

Genus Synthesis Solution is the new synthesis product announced just before DAC earlier this year. Joules RTL Power Solution is the new RTL power analysis product, which actually contains a sort of stripped-down version of the Genus solution under the hood. It is not possible to estimate power accurately at the RTL level without good estimates of interconnect, clock distribution, and more. If there was a theme of the day, it was definitely power. Later in the day, Steve Carlson of Cadence gave a summary of how all this new technology can be used to move power estimation earlier in the design process.
Power, like several other aspects of design, suffers from the fact that early in the design cycle, when architectural changes are possible, there is limited accuracy, whereas late in the design cycle, during signoff, there is as much accuracy as you want but very little that can be done. The number is what it is. RTL seems to be a sweet spot, just as it is for verification. Digital design is largely about getting the RTL right and then using tools that are largely automatic to reduce it to actual silicon structures.

I especially liked his cautionary tale about a company that designed an eight-core version of their product after seeing speedups with two and four cores. But the thermal issues were not correctly handled, and so it ran barely faster than the single-core version. That's an expensive mistake. Getting system power right early in the design cycle is really, really important. With the risk of thermal runaway, chips can't ignore these issues until signoff. Power affects thermal, which affects power, which affects performance.

Later still in the day, Cadence's Jay Roy went into much more detail about how the Joules solution works, how accurate it is, and how it can be used in a system context. At the end of the day there was a panel session on...power. Technically it was called "Closing the Power Gap for Wearable Devices Through Early System-level Design". I moderated it, and the panel was Leah Clark of Broadcom, Fred Jen of Qualcomm, Anthony Hill of Texas Instruments, and Jay Roy of Cadence. We did talk somewhat about wearables, but we also touched on whether IoT devices are going to be largely analog rather than digital, which process nodes they might be in, and whether anything interesting is happening in battery technology. There was quite a bit on what processing should be done locally in the device, versus up in the cloud or in your phone. Security seems like a challenge, since the encryption technologies we have available are anything but low power.
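Going back to that eight-core cautionary tale for a moment: the power/thermal feedback loop can be sketched as a toy fixed-point iteration. Every number below is made up purely for illustration (real leakage grows roughly exponentially with temperature, not linearly), but even this crude linear model shows both settling and runaway:

```python
def settle_temperature(p_dynamic, leak_at_25c, leak_growth_per_c,
                       theta_ja, t_ambient=25.0, steps=100):
    """Iterate the power/thermal feedback loop to a fixed point.

    Toy model: leakage power grows linearly with die temperature,
    and die temperature is ambient plus thermal resistance (theta_ja,
    in C/W) times total power. Returns (temperature, total power),
    or None if the loop runs away instead of settling.
    """
    t = t_ambient
    for _ in range(steps):
        p_leak = leak_at_25c * (1 + leak_growth_per_c * (t - 25.0))
        p_total = p_dynamic + p_leak
        t_next = t_ambient + theta_ja * p_total
        if t_next > 150.0:           # past any sane junction temperature
            return None              # thermal runaway in this toy model
        if abs(t_next - t) < 1e-6:   # converged
            return t_next, p_total
        t = t_next
    return t, p_total
```

With weak coupling (say 2 W dynamic, 0.5 W leakage at 25C, 2%/C growth, 10 C/W) the loop settles quietly near 53C; quadruple the thermal resistance and triple the leakage sensitivity and the same chip never settles at all, which is the eight-core story in miniature.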
If there was any conclusion, it was that big-A, small-d mixed-signal design is still really hard, and stuff falls through the cracks between SPICE and static timing analysis, for example.

If there was a second theme of the day, it was manufacturing test. Test time is a huge challenge. Test has been transformed over the last decade as scan test and BIST have become ubiquitous, and test compression technologies help to get the test time and test pin count down. Mun Sing Loh of Lattice Semiconductor talked about low-pin-count compression solutions (earlier in the day he talked about their first experiences using Genus Synthesis Solution at Lattice). Mike Vachon of Cadence gave an update on the latest test solution in Encounter Test.

If you attended the summit (I think perhaps even if you registered and didn't manage to make it), the presentations will all be posted soon and you will get an email with a link. Next week, on December 10th, is the Implementation Summit, focused on the Innovus Implementation System and the suite of US signoff tools. Information and registration are here. I'll see you there.
Digital Designers Discuss Ways to Close the Power Gap for Wearable Devices
How often do you have to charge your electronic devices? What is often an annoying problem for consumers is an even more vexing challenge for the engineers who design wearable devices. A panel of engineers explored ways to close the power gap for wearable devices during a discussion Wednesday, Dec. 2, at the Front-End Design Summit held at Cadence’s San Jose headquarters. On the panel were:

Anthony Hill, a distinguished member of the technical staff at Texas Instruments
Fred Jen, an engineering director at Qualcomm
Leah Clark, a technical director at Broadcom
Jay Roy, an R&D engineering director at Cadence Design Systems

One of the themes that emerged from the discussion is that designers need to take a big-picture approach when they are examining power management techniques. Wearables are primarily analog devices, developed on older process nodes. As Hill noted, from a system design perspective, engineers have to think not only about the software but also about packaging solutions and the older technology that they will integrate into the overall design. Jen pointed out that if engineers focus only on digital-oriented techniques, like clock gating, they won’t be able to solve all of their power problems.

Looking at the Big Picture

The cost of these techniques is another important consideration, noted Clark. As a digital designer, she said that she often looks at ways to save power on a block, such as by lowering voltage, using near-threshold computing, tapping into voltage scaling, or looking at power gating to shut off a block if it is not needed. But each of these techniques comes with a cost. Often, you need to bring in a regulator and/or a resistor—these take up real estate on the chip. Near-threshold computing increases system-level complexity. “Look outside your block and work from the system level down to save power,” said Clark.
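Clark's emphasis on voltage falls straight out of the classic switching-power equation, P = aCV²f: dynamic power scales with the square of the supply voltage, which is why lowering Vdd is usually the first knob a digital designer reaches for. A quick sketch, with purely illustrative numbers (no real chip intended):

```python
def dynamic_power(activity, cap_farads, vdd, freq_hz):
    """Classic CMOS switching-power estimate: P = a * C * V^2 * f."""
    return activity * cap_farads * vdd ** 2 * freq_hz

# Hypothetical block: 20% activity, 1 nF of switched capacitance,
# running at 500 MHz from a 1.0 V supply...
p_nominal = dynamic_power(0.2, 1e-9, 1.0, 500e6)   # 0.1 W

# ...and the same block with the supply scaled down by 20%.
# Power drops to 0.8^2 = 64% of nominal, a 36% saving from
# voltage alone (ignoring the slower timing that comes with it).
p_scaled = dynamic_power(0.2, 1e-9, 0.8, 500e6)    # 0.064 W
```

The quadratic term is also why the savings come with the costs Clark listed: running at lower voltage usually means adding a regulator and accepting slower logic, so the decision has to be made at the system level, not inside one block.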
“We can’t just be digital designers anymore, we have to think about all these things.” Hill pointed out that research is being done on new power-saving technologies, such as alternative-chemistry batteries, where batteries are laid atop chips. Clark noted that engineers need to look at analog power-management tools and see how they can fit into the digital world, rather than the other way around, which results in inaccurate UPF or CPF files. Roy noted that, from an electronic design automation (EDA) perspective, the projected growth of Internet of Things (IoT) devices (50 billion things connected to the Internet over the next 5 years) is good news for the industry. “From an EDA perspective, it’s very exciting that this is one area that is growing. EDA will certainly have (a role) to play,” he said. For an overall report of the Front-End Design Summit, read the post here from Paul McLellan, editor of the Breakfast Bytes blog and MC at the summit.

Christine Young
Xtensa Design Contest 2015 in India
The Cadence® Xtensa® Design Contest is an initiative of the Cadence India University Program, and the first such initiative of its kind. In this contest, students are provided with a project problem statement along with a detailed outline and parameters, as well as the success criteria on which solutions will be qualified and judged. The student team is expected to submit an abstract based on the given problem and, once approved, to submit a project report with a working model that successfully achieves the goals set in the problem statement. The second edition of the contest was held successfully this year, with 16 approved submissions. The objective of the project was to implement an adaptive beamforming algorithm for a microphone array on an Xtensa processor. All the confirmed submissions were provided with the Xtensa Software Developer's Toolkit to work with. The winning and runner-up team members, including their guides, each received a 1TB external hard disk as a prize, along with certificates.

Anton Klotz
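For readers wondering what the project entails: a beamformer combines the microphone signals so that the array "listens" in one direction and suppresses sound from others. The adaptive part (continually updating the weights as the acoustic scene changes) is the real challenge of the contest; the fixed-weight baseline, delay-and-sum, fits in a few lines. The sketch below is illustrative Python (the contest entries, of course, ran on the Xtensa processor itself) and assumes a uniform linear array with integer-sample delays:

```python
import math

def delay_and_sum(signals, spacing_m, fs_hz, angle_rad, c=343.0):
    """Steer a uniform linear microphone array toward angle_rad.

    Delays each microphone's signal by the (integer-rounded) number
    of samples corresponding to its extra acoustic path length, then
    averages across microphones. angle_rad is measured from broadside;
    c is the speed of sound in m/s.
    """
    n_mics = len(signals)
    n = len(signals[0])
    out = [0.0] * n
    for i, sig in enumerate(signals):
        # extra path length for microphone i, converted to samples
        delay = round(i * spacing_m * math.sin(angle_rad) / c * fs_hz)
        for t in range(n):
            src = t - delay
            if 0 <= src < n:
                out[t] += sig[src] / n_mics
    return out
```

At broadside (angle zero) all delays are zero and the output is simply the average of the microphones, which reinforces the steered source while uncorrelated noise partially cancels; an adaptive scheme such as LMS replaces the fixed delays with weights that are updated sample by sample.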