Customer feedback is the key to product improvement. As you work through the features and benefits of the Allegro 17.2-2016 release, I can guess what you are thinking: "Oh no, what have they done to my working environment this time?" If you have used Allegro for many years, you know what I mean. The releases we call dot-zero releases are used to establish the database and change the architecture, which causes a certain amount of disruption when you upgrade. 17.2 effectively combines a dot-zero release with the content of the 17.0 EAP (Early Access Program) release, and that is how 17.2 came about.
Years ago, when we were considering what to develop next, the first place we looked for answers was the database of customer change requests (CCRs) that you submit. As we worked through many CCRs, you often disliked getting an immediate "Inactive" response. Please do not read that as "ignored": your CCRs are an important starting point for our product planning, and the "Padstack Overhaul" project is a case in point. Padstack-related CCRs were the most numerous in our database, so we decided to launch the "Padstack Overhaul" project.
The project started with data mining. We evaluated all 100+ CCRs on the subject while our applications team began developing the new user interface. I am sorry to tell you that the old padstack-related code will no longer run; that is a sacrifice we had to make to move the product forward. If you decide not to use any of the new features, we still guarantee that your 16.6 padstack libraries are compatible with 17.2, but I think you will like what I am about to share.
The key themes of this upgrade are improving PCB design efficiency and ease of use.
You asked for: easier editing of new pad shapes
The new editor provides many new pad shapes, making complex padstacks much simpler to design. With this release, users can create new pad shapes such as donuts, rounded rectangles, and chamfered rectangles. (Click to view larger image) Design engineers can also adjust the corner type of a rectangle through parameters, so you can create a tombstone-shaped pad or a notch on a single corner. The benefit for library development can be realized immediately, because pads no longer need to be drawn as shapes. The new padstack editor creates pads through a modern, easy-to-use graphical interface, greatly improving your design efficiency. A wizard-like way of working makes it easy to define a pad and its required attributes.
You asked for: built-in keepouts
Padstacks now support built-in route keepouts as well as adjacent-layer keepouts. The standard keepout shapes can be used for non-plated holes, as well as for the HDI vias available on inner layers. Adjacent-layer keepouts can also be used to void the metal plane under a surface-mount pad to control impedance, or for mechanical buried/blind holes to prevent a short when the drill overshoots. For adjacent-layer keepouts, the librarian generates the geometry, and the PCB engineer can apply a property to control the number of adjacent layers (up to 8).
You asked for: drilling
Most CAD systems support a drill area, where what we know is the finished hole size, that is, the size after plating. Beyond that, we needed to support the drill-bit size and the backdrill size. The drill area can be used to specify the drill bit you want your fabricator to use; the most likely use case is specifying the drill size for press-fit connectors. (Click to view larger image) The backdrill area drove the major software upgrades for the backdrill application, which is the subject of another post. This area can be used to specify the drill-bit size, and the information is output to the NC Legend chart. The less common counterbore and countersink structures are also supported.
You asked for: complex mask schemes
Mask layers can be defined with multiple shapes. A multi-shape mask scheme must be created as an .fsm file and assigned to the mask layer definition of the padstack. Windowpane mask schemes, for example, benefit from this enhancement. (Click to view larger image)
You asked for: enhanced dynamic shape capabilities
The customer change request I run into most often concerns this enhancement. Because dynamic shapes are associated with each layer, you can now manage the thermal-relief and clearance parameters of your pins and vias. Similar to the use model we developed for creating constraint regions, you can apply the performance options hierarchically: outer layers, internal planes, internal signal layers, and individual layers. A common request is to control the number of thermal-relief connections on component pins.
We welcome your comments and feedback! Do you have experience to share? You can reach us at PCB_marketing_China@cadence.com. Thank you very much for your attention and your valuable input. Related video: Padstack Enhancements https://youtu.be/jjjSXTBfvHI * Original content; please cite the source when reposting: https://community.cadence.com Related reading: Top 10 Reasons to Upgrade to Allegro 17.2-2016; The Latest Allegro Technology. Subscribe to the "PCB and IC Packaging: Design and Simulation Analysis" blog column, or scan the QR code to follow the "Cadence PCB and Package Design" WeChat official account for more great content! Contact us: spb_china@cadence.com
Top 10 Reasons to Upgrade to Allegro 17.2-2016, No. 3: The New Padstack Editor - Not Just a New GUI
Virtuoso IC6.1.7 ISR22 and ICADV12.3 ISR22 Now Available
The IC6.1.7 ISR22 and ICADV12.3 ISR22 production releases are now available for download at Cadence Downloads . IC6.1.7 ISR22 ICADV12.3 ISR22 For information on supported platforms, compatibility with other Cadence tools, and details of issues resolved in each release, see: IC6.1.7 ISR22 README ICADV12.3 ISR22 README The links above are functional at the time of publishing. If you encounter any links that are now obsolete, visit https://downloads.cadence.com and select the release name you are interested in to access the related files. Here is a listing of some of the important updates made to IC6.1.7 ISR and ICADV12.3 ISR over the last few releases: Faster Netlist Generation for Analog Components in a Mixed-Signal Design (from ISR22) Use the new Create spectre subckt for extracted view check box to generate an optimized netlist for analog components in a mixed-signal design. Not only does the new option improve the netlisting performance but it also skips the mixed-signal elaboration step for these cellviews, saving the overall simulator processing time. Default Application to Open a Saved ADE State (from ISR21) Specify the default application where you want to open a saved ADE state. Use these while migrating to ADE Explorer or ADE Assembler . Wildcard Syntax for Saving PCells in Netlist (from ISR21) Use wildcards to specify PCell operating point parameters on the Save Options form. (ICADV12.3 Only) Display of Packet Name for WSP Tracks in Virtuoso Width Spacing Patterns (from ISR19) Use custom display packets for WSP tracks. New Single Schematic Driven Simulation and Layout Flow in Virtuoso System Design Platform (from ISR19) Use a master schematic to drive both pre- and post-layout simulations and to create the package layout. For more details on these and all the other new and enhanced features introduced in this release, see: IC6.1.7 What's New ICADV12.3 What's New Contact Us Please send questions and feedback to virtuoso_rm@cadence.com . To receive Virtuoso release announcements like this one, and other Virtuoso-related information, directly in your mailbox, type your email ID in the Subscriptions field at the top of the page and click SUBSCRIBE NOW. Virtuoso Release Team
Virtuosity: Opening Old ADE States and Views with ADE Explorer and ADE Assembler
Have you found it a pain that when you open a Virtuoso® ADE L state or a Virtuoso® ADE XL view, the default application is still the old ADE L or XL? If you've moved on to Virtuoso® ADE Assembler or Virtuoso® ADE Explorer, then you need to change the application in the Open File dialog to ADE Explorer or ADE Assembler. To help you set the application by default, we have added a couple of environment variables that you can set in your .cdsinit or .cdsenv files. To set ADE L states to open using ADE Explorer by default, simply set this cdsenv:
envSetVal("adexl.gui" "adestateDefaultApp" 'cyclic "ADE Explorer")
and to set ADE XL views to open using ADE Assembler by default, set this:
envSetVal("adexl.gui" "adexlDefaultApp" 'cyclic "ADE Assembler")
Of course, 'Open with' is not the only way to migrate your old ADE L states or ADE XL views to ADE Explorer or ADE Assembler. You can:
Create a new blank maestro view in ADE Assembler, choose File->Import, and select one or more ADE XL or ADE Assembler cell views to import.
Create a placeholder test in the Data View pane of ADE Assembler, then right-click and choose Load ADE L State.
Choose Session->Load State in ADE Explorer and select the ADE L state to convert it to a maestro view.
Use the maeMigrateADEXLToMaestro and maeMigrateADELStateToMaestro SKILL functions.
To migrate an ADE XL view to maestro, you can use the maeMigrateADEXLToMaestro SKILL function to create a maestro view from the ADE XL view, and this can be opened in ADE Assembler:
maeMigrateADEXLToMaestro("<adexlLib>" "<adexlCell>" "<adexlView>" ?maestroLib "<maestroLib>" ?maestroCell "<maestroCell>" ?maestroView "<maestroView>")
The following example code migrates an ADE L state from a state file, AC_state1, saved in the ./libs/Two_Stage_Opamp/OpAmp/adexl/test_states/ directory, using the maeMigrateADELStateToMaestro SKILL function:
maeMigrateADELStateToMaestro("Two_Stage_Opamp" "OpAmp_AC_top" "AC_state1" ?maestroView "maestro1" ?migrateFrom 'directory ?statePath "./libs/Two_Stage_Opamp/OpAmp/adexl/test_states/")
This creates a new maestro view called maestro1 under the library/cell Two_Stage_Opamp/OpAmp_AC_top.
Related Resources
maeMigrate SKILL Functions
Migrating ADE L/XL Setup to ADE Assembler
Migrating an ADE L Setup to ADE Explorer
For more information on Cadence circuit design products and services, visit www.cadence.com.
About Virtuosity
Virtuosity has long been our most viewed and admired blog series, bringing to the fore some lesser-known yet very useful software and documentation improvements, and shedding light on some exciting new offerings in Virtuoso. We are now expanding the scope of this series by broadcasting the voices of different bloggers and experts, who will continue to preserve the legacy of Virtuosity and give it new dimensions by covering topics across the length and breadth of Virtuoso, and a lot more… Click Subscribe to visit the Subscription box at the top of the page, in which you can submit your email address to receive notifications about our latest Virtuosity posts. Happy Reading! Arja
Labor Day Off-Topic: Almost Everyone Has More Than the Average Number of Legs
It's Labor Day. Cadence is closed in the US. Unfortunately, I'm in India and it's not a holiday here. It's CDNLive India later in the week. As is traditional, I will post about...whatever I feel like. Everyone seems to like these off-topic posts. About the only thing they have in common is that they are not about anything to do with semiconductors: visual illusions, the Kansas City walkway collapse, how to tell fake Englishmen, surprising results about medical tests, and more. Today, let's look at some mathematical paradoxes and oddities.
The Will Rogers Phenomenon
I mentioned somewhere in a recent post about jobs that the median income in the US is $30K/year (and, by the way, the cutoff for being in the 1% is $250K, which I find surprisingly low). The median income in Mexico is $10K/year. But here's something that seems a little paradoxical. If a relatively high-earning Mexican making $15K/year (50% above the median income) moves to the US, then the median income in both countries falls. If our hypothetical Mexican makes $25K now that he or she is in the US, then the median income in both countries still falls. The headlines seem bad: median income falls. But the only change is that the individual is making a lot more money (but is now counted in the US statistics and not the Mexican ones). Moral: be careful what you wish for. Why is this called the Will Rogers Phenomenon? Because of a remark attributed to him. He was a comedian from Oklahoma, who died in 1935, so this remark must have been pretty late in his life since the dustbowl is usually dated from 1930, the "dirty thirties": When the Okies left Oklahoma and moved to California, they raised the average intelligence level in both states. Interesting that in 1930 Californians were considered to be stupid.
Legs
I used the median rather than the mean above, since those are values that are easy to find. Medians are also less affected by outliers (if Bill Gates walks into a party, the mean net worth might go up to millions of dollars, but the median just ticks up a notch). Another fun paradox about means: almost everyone has more than the average number of legs. I told this recently to a woman who has a degree in applied math and even she didn't get it. So I had to explain it. So just in case, here's the quick explanation. Nobody has 3 legs. Not many, but some, have 1 leg. So the average number of legs is something like 1.999999, and so almost everyone, at 2 legs, has more than the average. Moral: the mean does not have to be "somewhere near the middle of the population."
Simpson's Paradox
Here is another paradox, known either as Simpson's Paradox or as the Yule-Simpson Effect. It is easiest to just show an example. Here are the baseball batting averages of Derek Jeter and David Justice during the 1995 and 1996 seasons.
           1995            1996            Combined
Jeter      12/48   .250    183/582  .314   195/630  .310
Justice    104/411 .253    45/140   .321   149/551  .270
Here is the paradox: if you look at the numbers, Justice has a better average in both 1995 and 1996. But add the two years together, and Jeter has the better two-year batting average.
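If you want to check the arithmetic yourself, a few lines of Python are enough. The numbers are just the hits and at-bats from the table above; this is only an illustration of why combining the seasons flips the comparison, not anything from the original post.
# Quick check of the Simpson's paradox numbers: (hits, at-bats) per season.
jeter = {"1995": (12, 48), "1996": (183, 582)}
justice = {"1995": (104, 411), "1996": (45, 140)}

def avg(hits, at_bats):
    return hits / at_bats

def combined(seasons):
    # Combine seasons by summing hits and at-bats, not by averaging the averages.
    hits = sum(h for h, _ in seasons.values())
    at_bats = sum(ab for _, ab in seasons.values())
    return avg(hits, at_bats)

for season in ("1995", "1996"):
    print(season,
          f"Jeter {avg(*jeter[season]):.3f}",
          f"Justice {avg(*justice[season]):.3f}")

print("Combined",
      f"Jeter {combined(jeter):.3f}",
      f"Justice {combined(justice):.3f}")
Justice comes out ahead in each individual season (.253 vs .250 and .321 vs .314), but Jeter comes out ahead overall (.310 vs .270), because the combined figure weights each season by at-bats, and Jeter's strong 1996 came with far more at-bats than his weak 1995.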
A similar effect was noticed with a study of gender bias at UC Berkeley in 1973. The numbers showed that men were more likely to be admitted than women. But for all the big departments in the university, women were more likely to be admitted than men. The reason for the anomaly was that most of the women applied to very competitive big departments, such as English and Psychology, where they lost out (to other women, mostly), whereas men applied to less competitive departments (like engineering, what today we call STEM but didn't in 1973) where they mostly got admitted. In fact, women who applied to do engineering were even more likely to get admitted than men who did (even in 1973, engineering departments were desperate to find the few women who were interested in engineering, and any woman with good enough math skills would be accepted).
Here's another one (all these facts are true): smoking during pregnancy can cause low birth weight babies, and low birth weight babies have higher mortality than normal birth weight babies. But low birth weight babies born to smokers have lower mortality than low birth weight babies overall. The reason is that smoking is a relatively benign cause of a low birth weight baby, compared to really serious diseases. The main reason that low birth weight babies have a higher mortality rate is that some really serious problems cause both low birth weight and premature death. A smoking mother may cause a low birth weight, but it won't kill the baby. Moral: the moral is not that you should smoke during pregnancy!
Non-transitive Dice
It is possible to design 3 dice such that the orange one will, on average, roll a higher number than the yellow one. The yellow one will, on average, roll a higher number than the green one. And here's the weirdness. The green one will, on average, roll a higher number than the orange one. This means you can play against someone, let them choose their die first, and no matter which one they pick, you can pick one that will (on average) beat it. We are just not used to non-transitive things like that. It seems that if the Giants normally beat the Dodgers, and the Dodgers normally beat the Rockies, then you wouldn't expect the Rockies to beat the Giants. But it's actually not that hard to set up 3 attributes of each team in such a way that each team beats the next on two attributes, and not unreasonable that if a team beats another on two attributes and loses on just one, it would win the game. At least in mathematics land. The dice in the image to the right have the property that I described above. Demonstrating this in a blog post doesn't really work, so here is a video with the marvelously named David Spiegelhalter (the name means "mirror holder"), who has the even more marvelous title of Professor of the Public Understanding of Risk at Cambridge University (the real one, not the upstart Cambridge where there are a couple of other minor universities). By the way, Dr Spiegelhalter is a regular guest on the BBC program/podcast More or Less, which I highly recommend. https://youtu.be/zWUrwhaqq_c
I hope you had a great Labor Day, and we'll be back to semiconductors tomorrow. In fact, tomorrow's post is about something that happened 20 years ago today, but since today is a holiday, I'm covering it a day late. It is both a personal story and a Cadence story. Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
Ambit Design Systems
Twenty years ago today, Cadence announced it was acquiring Ambit Design Systems. Actually, the anniversary of the announcement is really September 3rd, but since that is a public holiday this year, I slipped it a day. The actual acquisition didn't close for over a month, for all the usual due diligence reasons, plus one unusual one. As is standard in a VC-funded company, the conditions for acquisition require each round of finance to approve the acquisition voting as a round (to protect, say, the A-round selling out at less than the B-round paid). Ambit got up to round I, I think, so 9 rounds of investment over its drawn-out life. One of those rounds was by LSI Logic when they decided to use our synthesis tool, BuildGates, extensively. Since they were the only investor in that round, LSI Logic had to approve the acquisition. They were in a position to block the acquisition and were, of course, also a customer of Cadence for design tools. I wasn't involved, but it would have been interesting to have been a fly-on-the-wall in that negotiation. I was the VP Engineering at Ambit. But I wasn't there from the beginning, I had joined Ambit just a year earlier after the acquisition of Compass Design Automation by Avant! (which closed at midnight on Thursday, and I quit on Friday, and started at Ambit on Monday—I worked for Avant! for 8 hours). Twenty years ago was still the era when acquisitions of EDA startups had good exit valuations. Cadence paid $260M for Ambit. That wasn't even a record at the time, they had paid over $400M for CCT (shape-based routing) and would pay something closer to a rumored $1B for Silicon Perspective (mainly because it contained an uncapped earnout clause). Cadence had tried to develop their own synthesis tool called Synergy but it never got traction in the marketplace. I don't have any insight into how good or bad Synergy's technology was, but I can guess from the behavior of the Cadence sales force after the Ambit acquisition that the sales force would have ignored it. They were used to deal where they tried to get everything except synthesis, and they were not going to jeopardize the big deal with an immature synthesis tool. Ambit salespeople had nothing else to sell than a synthesis tool, so they were more focused. This is the difference between two types of salespeople, what I call hunters and farmers. When you are a startup, winning any business is good. We started to be really successful, going from less than $1M in revenue the year before I arrived, to over $10M the following year. We never got to the end of the year after that since by then we were part of Cadence. But in addition to revenue, some business is truly strategic, such as LSI Logic, which had come with both an investment and the credibility that a public endorsement from a top ASIC vendor brought. During the year that I was at Ambit, I knew that Philips Semiconductors (now NXP) was Cadence's biggest customer at the time. I also knew the management of the CAD organization well from my time at VLSI Technology and Compass, since we had a major agreement with Philips (and I'd been based in Europe for over 5 years). Philips had a very centralized CAD organization: the decisions made in Eindhoven about what tools to purchase were pretty much mandated through the whole company. Winning Philips was therefore very important. If we could win the evaluation, there was the possibility of being the "standard" synthesis tool inside a big account, thus, getting a series of big orders. 
As VP Engineering, there wasn't a whole lot I could do about the Philips account and an evaluation happening in Europe. But I prioritized bugs at Philips to do what I could to make the evaluation successful. But becoming the standard synthesis tool inside Cadence's biggest account had another upside: Cadence would pretty much have to acquire us. BuildGates won, and Cadence did. Timing is everything. I firmly believe that if Ambit had got its synthesis tool working solidly a couple of years earlier, then we wouldn't have been successful. The market was not yet looking for an alternative synthesis tool. One thing that changed was that, eventually, the ASIC companies (like the aforementioned LSI Logic) were not happy that they had to deal with a monopoly synthesis tool. But the monopoly was not on synthesis technology, Ambit's was clearly good, and maybe even Synergy's had been. The monopoly was on ASIC vendor library support. You can't sell a synthesis tool without library support. But monopoly ASIC vendor library support was something the ASIC companies could do something about, they owned those libraries. So they did. More and more ASIC vendors announced support for BuildGates, without an obvious business justification other than being able to get cheaper synthesis tools even if they didn't win any new ASIC designs. After all, our penetration of their customer base was growing, but still pretty small. It turned out that the BuildGates synthesis tool was pretty good, but pretty good isn't enough to get the average customer to switch. However, there was one area where we were more than pretty good. We could handle huge designs without requiring them to be split up into many smaller blocks. If you had a huge design, and couldn't get it synthesized, you would buy a license or two just for that design, and not care whether Ambit stayed in business. In fact, even if you had split the design up into small blocks already, breaking up the time budgets for the whole design was often poor, so sucking the entire design in and synthesizing it in one go would produce much better results. Our AEs soon learned not to fall into the trap of doing a benchmark on one of the small blocks, but to take on the challenge of the entire design. Overnight, the benchmark might be over and we'd won. So a few weeks later, 20 years ago, I joined Cadence for the first time. Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
Whiteboard Wednesdays - What You Need to Know About ISO26262-2018 2nd Edition
In this week's Whiteboard Wednesdays video, the first in a multi-part series, Scott Jacobson explores the changes in the upcoming ISO26262-2018 standard update and how they affect semiconductor design. https://youtu.be/QFV3KIUwJxs
APAC IC Design Contests
China Graduate IC Design Contest Contest Duration: April – August 2018 This is Cadence’s second time supporting the China Graduate IC Design Contest. This contest provided graduate students with the opportunity to develop creative electronics design to be applied to a variety of propositions in the areas of: Integrated Circuits and Smart Terminal Automatic Control and Mechatronics Communication and Network Technology Technical Exploration and Engineering Application Software Design and Simulation In total, 254 teams enrolled in the contest and 148 teams, 600+ graduate students, were invited to attend the final on-site contest from August 11-12, 2018. In order to help the finals run smoothly, Cadence provided EDA tools as well as an application engineer to install the software and provide technical support. The graduates participated in the three-phased competition, which included computer test, defense, and a presentation from the top 13 teams. Taiwan IC Design Contest Contest Duration: February – May, 2018 Cadence supported the Taiwan IC Design Contest for the first time this year! 573 teams, 1,093 students, from 38 universities enrolled to participate in this contest. They had to create a full design for one of the five categories: Graduate-level Full-Custom Design Category Graduate-level Cell-Based Digital Circuit Category Analog Circuit Category Undergraduate-level Full-Custom Design Category Undergraduate-level Cell-Based Digital Circuit Category The finals were held from May 2-4, 2018 and 186 teams, 348 students, were invited to attend. Cadence provided EDA tools and online training for the students to learn the Cadence technology. The winners were honored at the Award Ceremony held on July 17, 2018. Jess Yang, DSG Director, represented Cadence by giving a talk titled “Do We Still Need Hardware IC Design Engineer in The Coming Decades?” in which they encouraged students to join the chip design industry and address the importance of EDA in their design career. This was followed by a Q&A panel discussion. Cadence Academic Network was honored to be a part of this year’s contests and impressed by the innovative designs of the competitors. We look forward to supporting more APAC IC Design Contests in the years to come.
And Then There Were Three: GLOBALFOUNDRIES Drops 7nm to Focus on Other Geometries
If you watch the Tour de France cycling, then everyone goes along on the flat in a pack called the peloton, since the effort involved surrounded by other cyclists can be less than 20% of the effort of riding alone. But in the mountains, the pace is slower, the road is steeper, and there are no aerodynamic effects. The top cyclists still often stay together, but not everyone can keep up and gradually the weaker cyclists get "dropped" and fall off the back. Well, GLOBALFOUNDRIES (GF from now on) just got dropped off the back of the leading edge process peloton. GF put out a press release last week with the innocuous title GF Reshapes Technology Portfolio to Intensify Focus on Growing Demand for Differentiated Offerings . On the same day, AMD put out an equally innocuously titled communication Expanding our High-Performance Leadership with Focused 7nm Development . What this actually means is that GF is abandoning developing its 7nm process, and AMD is switching to TSMC 7nm for its future microprocessors. That leaves TSMC and Samsung as the only leading-edge foundries, and Intel (and Samsung with another hat on) as the only leading edge IDMs. Of course, there are specialist memory suppliers too, such as Samsung (with yet another hat on), SK Hynix, and Micron. Ironically, at SEMICON West earlier this summer, I wrote two posts, one based on an interview with Gary Patton GLOBALFOUNDRIES' CTO State-of-the-Roadmaps and 5nm: 7nm Is Just a Dress-Rehearsal based mainly on a presentation by GF's Eric Hoster. In the first of those posts, Gary told me that 7nm designs would be taping out in the second half of this year with production in 2019. The second post was mostly about the challenges facing EUV for 7nm and 5nm. In the Gary Patton interview post referenced above, I included a history of AMD and GF. The three line version is that AMD sold its fabs to ATIC, the investment arm of the government of Abu Dhabi, who created GF as a standalone foundry business. They then acquired Chartered Semiconductor and IBM's Semiconductor Division. As part of the deals, both AMD and IBM were committed to GF as a foundry for some of their product lines. There was a change of CEO in March when Tom Cauldfield took over as CEO of GF. He is a manufacturing operations guy by background, so reading between those lines is that there is an increased focus on building a profitable manufacturing business and working with a more limited budget than would be required for 7nm and 5nm. Since EUV is not required for older nodes, I am assuming that the work on EUV (or much of it) will also be put on hold. Building a modern fab is a $10+B proposition, and developing a leading-edge process is measured in billions of dollars too. GF has been building out fab 8 in Malta in upstate New York (above), which currently runs 14/12nm and, I believe, was planned to run 7nm (but could not run 5nm). I don't have any insight into whether the change was driven by losing AMD. Without AMD, GF doesn't really have volume customers to fill the fab. On the other hand, if GF was late with their internally developed 7nm process, then AMD would be forced to use other foundries to be competitive in their microprocessor business (their GPU business, the old ATI, was already using TSMC). One of the remaining mysteries is what IBM will do for leading-edge node manufacturing for its servers going forward. Even though its older servers are in a partially depleted IBM-only process, I always assumed they would be a major customer for GF's 7nm. 
IBM also has kept a significantly sized team of semiconductor technology researchers, who now seemed to be all-revved-up with no place to go. Although the leading edge processes like 7nm get the spotlight, a huge percentage of designs are done in 14/16nm, 28nm, and older processes. The design cost is less, the mask cost is less, the fabs are already fully depreciated. GF also has FD-SOI which can do some things that FinFET processes cannot, in particular putting RF on the same die as digital (my understanding is that you can't do RF with FinFET due to the high gate capacitance, but I make no claim to being an RF expert, I just know enough to be dangerous). It is also cheaper to manufacture, with a shorter cycle time. So from a business point of view, the non-leading edge processes are important, even if a little boring. But boring is often where a lot of the money is. It reminds me of a guy I sat next to on a plane once who sold concrete in the midwest. It is a very boring business, but it returns over 30% of its invested capital every year. A small town can support one concrete plant but not two. So the competition for that one plant is in the next town over, which may be 50 miles away. Unlike silicon chips, where competition can be anywhere in the world, concrete is heavy. Competition 50 miles away is no competition at all. GF wants to be the concrete manufacturer of semiconductor, not so sexy but nicely profitable. Moore's Law has stopped, not in the sense that 7nm and 5nm will not happen, but in the sense that the economics have changed. The old rule of thumb was that a new process node would be twice the density of the old node, at a 15% cost increase, leaving a reduction in the cost of a given design (or the cost-per-transistor, which comes to the same thing) of 35%. But the cost reduction has pretty much stopped. If you need lower power, or a lot more transistors, these nodes are attractive. If you don't, then there is no economic driver like there was at, say, 180nm where you couldn't compete if your competition moved and you didn't. The GF press release summarizes their strategy going forward: GF is intensifying investment in areas where it has clear differentiation and adds true value for clients, with an emphasis on delivering feature-rich offerings across its portfolio. This includes continued focus on its FDX platform, leading RF offerings (including RF SOI and high-performance SiGe), analog/mixed signal, and other technologies designed for a growing number of applications that require low power, real-time connectivity, and on-board intelligence. GF is uniquely positioned to serve this burgeoning market for “connected intelligence,” with strong demand in new areas such as autonomous driving, IoT and the global transition to 5G. So it's like the old Genesis rock group who gradually lost members until "Then There Were Three". Now there are just three leading-edge semiconductor manufacturers. Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
Numbers Everyone Should Know
At the recent HOT CHIPS, Paul Turner of Google Project Zero talked about numbers everyone should know. These numbers, actually latencies, seem originally to come from Peter Norvig but have been updated by a number of people since his original table, because processors have got faster (but most other things have not). One reason that the network delays have not changed much is one number that every software engineer (and semiconductor designer) should know: light travels a foot in a nanosecond. As Grace Hopper said (in the video I put at the end of this post), "there are a lot of nanoseconds between earth and a geostationary satellite." When I was an undergraduate, the head of the Cambridge Computer Laboratory was Maurice Wilkes, who had worked on digital computers since EDSAC, one of the first programmable digital computers, turned on in 1949. In a seminar, I remember someone challenging him that computers could not get any faster due to these speed-of-light considerations. In those days, a mainframe CPU might be in one big cabinet and the main memory in another big cabinet on the other side of the computer room (I remember when they added the...gasp...second megabyte of memory to the university time-sharing-service mainframe). Anyway, Wilkes thought for a moment before saying, "I think computers are going to get a lot smaller." Which, of course, they did with the invention of the microprocessor. With a 3GHz clock, light travels less than 4" per clock cycle.
The Numbers
CPU cycle: 0.3 ns
L1 cache reference: 0.5 ns
Branch mispredict: 5 ns
L2 cache reference: 3 ns
L3 cache reference: 28 ns
Main memory reference (DRAM): 100 ns
Send 2K bytes over 1 Gbps network: 20,000 ns
Read 1 MB sequentially from memory: 250,000 ns
Round trip within same datacenter: 500,000 ns
Disk seek: 10,000,000 ns
Read 1 MB sequentially from network: 10,000,000 ns
Read 1 MB sequentially from disk: 30,000,000 ns
Send packet CA->Europe->CA: 150,000,000 ns
If you only remember two of these, pick the fact that a memory access to cache is 0.5ns, but to DRAM is 100ns. That's 200 times as long. A huge fraction of any modern microprocessor is doing its best to hide that inconvenient fact.
What If a Clock-Cycle Was a Second
The problem with numbers like that is that they don't mean anything, even to people who deal with them every day. Billions. Picoseconds. Nanometers. Ångstrom units. Gigabytes. Zettabytes. I deal with these units all the time but don't have any intuitive feel for them. When I taught a course on computer networking at Edinburgh University, one thing I liked to do was to get people to work out how long various things took if a computer clock-cycle was one second. This wasn't an original idea; I think I had been given the exercise when I was an undergraduate. In the days when I was teaching, computers like a VAX 11/780 were roughly 1 MIPS, so this was actually a slow-down of a million times. Today, the slow-down is much greater. Computer networks in that era ran at 56kbps or 64kbps, so we could use our imaginary one-second-per-clock computer to see just how slow that was even to the computers of that era. Another interesting exercise was to work out the bandwidth of a truck full of magnetic tapes (these days you can use SD cards) driving at 60mph on the freeway. Now you know why Amazon has Snowmobile for petabyte-sized data transfers to AWS. It's a container of SSD drives that is moved on an 18-wheel truck (see the pic to the right). Some of the ratios really bring home just how big the mismatches are.
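To make the scaling exercise concrete, here is a small Python sketch (not part of the original exercise) that rescales a few of the latencies above so that one 0.3 ns CPU cycle lasts one second, and then does the back-of-the-envelope truck calculation; the truck payload and trip time are made-up illustrative numbers. The figures it prints land in the same ballpark as the table below, give or take some rounding.
# Rescale the real latencies so that one 0.3 ns CPU cycle lasts one second.
latencies_ns = {
    "CPU cycle": 0.3,
    "L1 cache reference": 0.5,
    "L3 cache reference": 28,
    "Main memory reference (DRAM)": 100,
    "Disk seek": 10_000_000,
    "Packet CA->Europe->CA": 150_000_000,
}

scale = 1.0 / 0.3   # scaled seconds per real nanosecond

def pretty(seconds):
    # Express a scaled duration in the largest sensible unit.
    for unit, size in (("years", 365 * 86400), ("days", 86400),
                       ("hours", 3600), ("minutes", 60)):
        if seconds >= size:
            return f"{seconds / size:.1f} {unit}"
    return f"{seconds:.1f} seconds"

for name, ns in latencies_ns.items():
    print(f"{name}: {pretty(ns * scale)}")

# Back-of-the-envelope truck bandwidth: the payload (10 PB) and the trip time
# (24 hours) are hypothetical numbers chosen just to show the calculation.
payload_bits = 10e15 * 8
trip_seconds = 24 * 3600
print(f"Truck bandwidth: {payload_bits / trip_seconds / 1e9:.0f} Gbps")
Even with those modest assumptions, the truck comes out at several hundred gigabits per second, which is why physically shipping storage still beats the network for really large transfers.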
The CPU runs at one cycle per second, and that's how long a register-to-register operation takes (and modern processors can do several of them at the same time). An operation involving memory in the L1 cache (such as loading a value from memory into a register) takes just two seconds, twice as long. But going out to DRAM takes 7 minutes. That is a huge difference that computer architects have largely hidden with multi-level caches, out-of-order execution, and multiple execution units.
System Event: Actual Latency -> Scaled Latency
One CPU cycle: 0.3 ns -> 1 second
Level 1 cache access: 0.5 ns -> 2 seconds
Level 2 cache access: 2.8 ns -> 10 seconds
Level 3 cache access: 28 ns -> 2 minutes
Main memory access (DDR DIMM): 100 ns -> 7 minutes
SSD I/O: 50–150 μs -> 1.5–4 days
Rotational disk I/O: 1–10 ms -> 1–9 months
Internet packet, San Francisco to Europe and back: 150 ms -> ~10 years
Time to type a word: 1 second -> ~1 century
Time to open PowerPoint on my Mac: ~10 seconds -> ~1 millennium
To put the amount of computer power that is wasted into perspective, NASA had a total of 1 MIPS to go to the moon. It takes 10 days worth of all NASA's computers to open PowerPoint.
Three Numbers in Computer Science
There is a well-known aphorism that there are only 3 numbers in computer science. I have tried in the past to track down who first came up with this, but it seems to be lost in the fog of time. The three numbers are 0, 1, and ∞ (infinity). The reasoning is that there should be no impossible things; if there can truly only be one of something, then there should only be one; and if there can be any other number, then you should assume it might be arbitrarily large. Actually, in my experience, often when there is only 1 of something, you should opt for the infinite case anyway. Some examples:
When designing a chip, there is only one chip, and thus only one process technology. But now we have 3D packaging and chiplets and that is no longer true.
There is only one processor in a microprocessor...but now we have multicore, and offload processors, and supercomputers.
In the early days of chip design, there was only one power supply, and it didn't even appear in the netlists. It took a lot of messy specification to handle multiple power supplies in CPF and UPF, including level shifters and retention gates that didn't appear in the netlist either. It might have been a lot cleaner to assume there was more than one power supply in the first place.
Oh, and here's a fun fact. There are only three numbers in computer science...but three is not one of them.
Grace Hopper Explains Nanoseconds
https://youtu.be/JEpsKnWZrJ8
Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
GLOBALFOUNDRIES Drops 7nm to Focus on Other Geometries
GF put out a press release last week with the title GF Reshapes Technology Portfolio to Intensify Focus on Growing Demand for Differentiated Offerings . What this actually means is that GF is putting its 7nm process development on hold indefinitely. So why would GF do this? Although the leading edge processes like 7nm get the spotlight, a huge percentage of designs are done in 22nm, 28nm, and older processes. The design cost is less, the mask cost is less, the fabs are already fully depreciated. GF's FD-SOI can do some things that FinFET processes cannot, in particular putting RF on the same die as digital (my understanding is that you can't do RF with FinFET due to the high gate capacitance, but I make no claim to being an RF expert, I just know enough to be dangerous). It is also cheaper to manufacture, with a shorter cycle time. So from a business point of view, the non-leading edge processes are important. GF have historically had a dual roadmap process strategy (for details, see my post GlobalFoundries' Dual Roadmaps) and now they are focused on one. There are older process nodes too, of course, but the FD-SOI roadmap starts at 22nm with 22FDX, and also adds in eMRAM. The next node is 12FDX which also adds in eNVM. I would predict that the next node after that on that roadmap will be something like 8nm, or as far as they can push FD-SOI without needing to use EUV. By going for differentiated processes that are not insanely expensive to design and manufacture, GF is looking to create a good business, even if it seems a little boring. But boring is often where a lot of the money is. It reminds me of a guy I sat next to on a plane once who owned several concrete plants in the Midwest. It is a boring business, but it returns over 30% of its invested capital every year. A small town can support one concrete plant but not two. So the competition for that one plant is in the next town over, which may be 50 miles away. Unlike silicon chips, where competition can be anywhere in the world, concrete is heavy. Competition 50 miles away is no competition at all. GF wants to be the concrete manufacturer of semiconductor, not so sexy but nicely profitable. Another analogy I like to use sometimes is with cars. The most advanced nodes are like Formula-1 racecars, the fastest thing you can build if cost is not much of a consideration. To go fast if you care about cost, a Porsche might be a better choice. And for many applications, a Toyota is just fine. The real money is in the Toyotas, not even the Porsches, and there is a reason Formula-1 teams need a lot of sponsors. Moore's Law has stopped, not in the sense that 7nm and 5nm will not happen, but in the sense that the economics have changed. The old rule of thumb was that a new process node would be twice the density of the old node, at a 15% cost increase, leaving a reduction in the cost of a given design (or the cost-per-transistor, which comes to the same thing) of 35%. But the cost reduction has pretty much stopped. If you need lower power, or a lot more transistors, these nodes are attractive. If you don't, then there is no economic driver like there was at, say, 180nm where you couldn't compete if your competition moved and you didn't. For many designs, GF's FDX processes are a sweet spot, with low manufacturing costs (a lot fewer masks than FinFET), straightforward to integrate RF and analog, and suitable for all designs except the very highest performance, which really will require 7nm FinFET. 
The GF press release summarizes their strategy going forward: GF is intensifying investment in areas where it has clear differentiation and adds true value for clients, with an emphasis on delivering feature-rich offerings across its portfolio. This includes continued focus on its FDX platform, leading RF offerings (including RF SOI and high-performance SiGe), analog/mixed signal, and other technologies designed for a growing number of applications that require low power, real-time connectivity, and on-board intelligence. GF is uniquely positioned to serve this burgeoning market for “connected intelligence,” with strong demand in new areas such as autonomous driving, IoT and the global transition to 5G. There was a change of CEO in March when Tom Cauldfield took over as CEO of GF. He is a manufacturing operations guy by background, so reading between those lines is that there is an increased focus on building a profitable manufacturing business and working with a more limited budget than would be required for 7nm and 5nm. Since EUV is not required for older nodes, I am assuming that the work on EUV (or much of it) will also be put on hold. So now there are just three manufacturers of 7nm FinFET. Technically, in FD-SOI there are also three. ST developed FD-SOI at 28nm and licensed it to both Samsung and GF. But only one company seems to really be taking FD-SOI seriously with a roadmap to the future: GlobalFoundries. Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
What's For Breakfast? Video Preview September 10th to 14th 2018
https://youtu.be/iv9wdAVB6vg Coming from CDNLive India (camera Seena Shankar) Monday: CDNLive India Tuesday: Numbers Everyone Should Know Wednesday: Spectre/Meltdown and What It Means for Future Design 1 Thursday: Spectre/Meltdown and What It Means for Future Design 2 Friday: Spectre/Meltdown and What It Means for Future Design 3 www.breakfastbytes.com Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
PCAST: The President's Council of Advisors on Science and Technology
In January 2017, a report Ensuring Long-Term U.S.Leadership in Semiconductors was delivered to the President. It was not yet the 20th of the month, so that was still President Obama. I read the report at the time but made the mistake of not downloading it. Then it vanished. I think it was simply that whitehouse.gov had to be moved to obamawhitehouse.archives.gov since it soon re-appeared once the search engines found the new location. PCAST is the President's Council of Advisors on Science and Technology. There currently is none, although President Trump has said that he will staff it. The Obama-era PCAST commissioned the report, and It was delivered literally days before the end of the Obama administration, presumably so that it was officially published. The Working Group PCAST themselves didn't produce the report. They created a working group who actually knew something about semiconductors, named prosaically the PCAST Ensuring Long-Term U.S. Leadership in Semiconductors Working Group. I think it is worth listing the members of the group, since it gives the report some credibility, not just in Washington but with people like me, and probably you. Let's face it, if they assembled a bunch of politicians to write the report, the sort of people who think the internet is a series of tubes, then we'd all give it the attention it deserved. This is the group. I've added more color than is in the report, and also updated people's position where I know what it is. Co-Chair: John Holdren, Assistant to the President for Science and Technology Co-Chair: Paul Otellini, former CEO of Intel. He died in October last year. There is a short section on his passing in my post Xcelium Simulation on Arm Servers . Rich Beyer, who was CEO of Freescale. He was also COO of VLSI Technology for a few years, and I used to meet with him once per week to explain to him how chips were designed (he was a marketing guy by background). Wes Bush, CEO of Northrop Grumman. Diana Farrell, CEO of JP Morgan Chase Institute. John Hennessy, who was the founder of MIPS, was President of Stanford for many years, is now Chairman of Alphabet (parent of Google), and just was the co-recipient of the 2018 Turing Award with Dave Patterson. Paul Jacobs, for many years CEO of Qualcomm and then its Chairman (but recently retired). Ajit Manocha, former CEO of GlobalFoundries, and currently the CEO of SEMI, see my post Ajit and the History of SEMI for more background. Jami Miscik, co-CEO of Kissinger Associates, with a background in the CIA. Craig Mundie, President of Mundie Associates, but was Chief Research and Strategy Officer at Microsoft. He is also the only person in the working group to also be a member of PCAST itself. Mike Splinter, former CEO of Applied Materials. Laura Tyson, Professor of the graduate school at UC Berkeley. Various other staff appointments. I think we can all agree that this is a group that knows something about the semiconductor industry and global competitiveness. Letter to President Obama The report opens with a letter to President Obama. I will just extract what I consider the two key paragraphs, which are actually a rephrasing of a couple of paragraphs from the executive summary of the report itself. Today, U.S. semiconductor innovation, competitiveness, and integrity face major challenges. Semiconductor innovation is already slowing as industry faces fundamental technological limits and rapidly evolving markets. 
Now a concerted push by China to reshape the market in its favor, using industrial policies backed by over one hundred billion dollars in government-directed funds, threatens the competitiveness of U.S. industry and the national and global benefits it brings. The report looks at these challenges in greater detail. The core finding of the report is this: only by continuing to innovate at the cutting edge will the United States be able to mitigate the threat posed by Chinese industrial policy and strengthen the U.S. economy. Thus, the report recommends and elaborates on a three pillar strategy to (i) push back against innovation-inhibiting Chinese industrial policy, (ii) improve the business environment for U.S.-based semiconductor producers, and (iii) help catalyze transformative semiconductor innovation over the next decade. Delivering on this strategy will require cooperation among government, industry, and academia to be maximally effective. In the executive summary, a third paragraph appears between these two, that I also think is worth quoting in full, since it points out something critical, that policy changes are required, not just sitting back and letting the major semiconductor companies slug it out for market share. That is critical, too, since even the military today depends heavily on what they call COTS parts, which stands for Commercial-Off-The-Shelf, meaning that even the military needs a strong domestic industry. The global semiconductor market has never been a completely free market: it is founded on science that historically has been driven, in substantial part, by government and academia; segments of it are restricted in various ways as a result of national-security and defense imperatives; and it is frequently the focus of national industrial policies. Market forces play a central and critical role. But any presumption by U.S. policymakers that existing market forces alone will yield optimal outcomes – particularly when faced with substantial industrial policies from other countries – is unwarranted. In order to realize the opportunities that semiconductors present and to effectively mitigate major risks, U.S. policy must respond to the challenges now at hand. The Six Strategic Responses The middle of the report lists six strategic responses. Of course there is detail added to each one, but you'll have to read the report itself for that: Win the race by running faster. Focus principally on leading-edge semiconductor technology. Focus on making the most of US strengths rather than trying to mirror China. Anticipate Chinese responses to US actions. Do not reflexively oppose Chinese advances. Enforce trade and investment rules. The recommendations resulting from these (again there is more detail in the report): Recommendation 1.1: Create new mechanisms to bring industry expertise to bear on semiconductor policy challenges. Recommendation 2.1: Boost the transparency of global advanced technology policy. Recommendation 2.2: Reshape the application of national security tools, as appropriate, to deter and respond forcefully to Chinese industrial policies. Recommendation 2.3: Work with allies to strengthen global export controls and inward investment security. Recommendation 3.1: Secure the talent pipeline. Recommendation 3.2: Invest in pre-competitive research. Recommendation 3.3: Enact corporate tax reform. Recommendation 3.4: Responsibly speed facility permitting. Recommendation 4.1: Execute moonshot challenges. 
Conclusion
My main conclusion is that if you are involved in the semiconductor industry in some way, and you probably are if you are reading this, then it is worth a little of your time to read the report. The heart of the report is just 20 pages long. If you can't manage that, then at least read the executive summary, which is a page and a half. One thing that I think is very positive is that the report was done during the Obama administration, but the actions starting to take place now are in the Trump administration. There is some continuity of purpose. Let me conclude with the final paragraph of the conclusion of the report: We strongly recommend a coordinated Federal effort to influence and respond to Chinese industrial policy, strengthen the U.S. business environment for semiconductor investment, and lead partnerships with industry and academia to advance the boundaries of semiconductor innovation. Doing so is essential to sustaining U.S. leadership, advancing the U.S. and global economies, and keeping the Nation secure. Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
Virtuoso: The Next Overture – Introducing Design Planner
The new release of the Virtuoso platform (ICADVM18.1) offers groundbreaking analysis capabilities and an innovative new simulation-driven layout for more robust and efficient design implementation, as well as extending our support for the most advanced process technologies. With this solution, we are able to significantly improve productivity through advanced methodologies and provide the most comprehensive set of solutions in the industry, with an interoperable flow across chip, package, module, and board. If we told you we are developing a product that would allow you to plan more efficiently at the top level, block level, and cell level without having to spend all the time that you might be spending today, does that sound surreal? Maybe it does sound surreal. But the fact is that we are soon bringing you an innovative enhancement in the form of Design Planner that offers pathbreaking design planning capabilities. Design Planner is currently undergoing its final rounds of refinement at the Cadence development center, and it is being well received by our early-access customer partners. The enhancement offers an advanced methodology for both mature- and advanced-node layout designs, allowing a seamless layout-place-route capability. And when we say we offer a seamless layout-place-route, we mean it. With the integrated Congestion Analysis capability, you can now get real-time congestion analysis data to support your layout design planning, allowing you to make more informed planning decisions early in the design life cycle.
What is Innovative about Design Planner
• Hierarchical schematic-driven layouts that combine the best of top-down and bottom-up design methodologies while avoiding the shortcomings of both
• Hierarchical visualization that allows you to easily view or hide details on any level, anywhere in your design, to only view what you need, when you need it
• Hierarchical and congestion-aware floorplanning and placement that provides automated and assisted productivity
• Hierarchical routing and congestion analysis that makes real routing and congestion analysis information available upfront
Looks exciting? Watch for our upcoming Virtuoso platform ICADVM18.1 release.
Related Resources
Hierarchical Schematic-Driven Layout Methodology
The Design Planner Flow
For more information on hierarchical routing and congestion analysis, stay tuned for our upcoming blog post, Virtuoso: The Next Overture - Congestion Analysis with a New Perspective. For more information on Cadence circuit design products and services, visit www.cadence.com.
Contact Us
For more information on the new Virtuoso design platform, or if you have any questions or feedback on the enhancements covered in this blog, please contact team_virtuoso@cadence.com. To receive similar updates about new and exciting capabilities being built into Virtuoso for our upcoming Advanced Nodes and Advanced Methodologies releases, type your email ID in the Subscriptions field at the top of the page and click SUBSCRIBE NOW. Rishu Misri Jaggi and Colin Thomson (Team Virtuoso)
Measurement of Phase Noise in Oscillators
The other day, I happened to sneak out some time for myself after having sent the kids to play in the neighborhood park. I made myself a hot cup of coffee and settled on the couch hoping to enjoy the silence in the house. But was it really silent?? Well... the constant tick-tock of the oscillating pendulum of the wall clock in my living room kept disturbing the silence in the air! Oscillators, they are used in almost all electronic gadgets we have around us. Oscillators are circuits that produce continuous and repeated waveform without any input. They create an alternating current waveform with the desired frequency. However, just like the clock in my living room, oscillators are known to inherently produce phase noise. This can disturb the performance and output of the circuit, making it essential to measure the phase noise of oscillators. Spectre RF allows you to measure the phase noise of oscillators in the Virtuoso Analog Design Environment products. It lets you characterize noise performance of the oscillators. Depending upon your circuit, you can set up the " pss and pnoise " analyses or the " hb and hbnoise " analyses in the Choosing Analyses form to measure cycle jitter (Jc), cycle-to-cycle jitter (Jcc), and amplitude modulation (AM) and phase modulation (PM) components of phase noise. This is an averaged noise measurement over one cycle of the oscillator. To learn more, watch the Measuring Phase Noise Oscillators video on Cadence Online Support. Click the video link now or visit Cadence Online Support and search for the video under Video Library . Related Resources Spectre Circuit Simulator and Accelerated Parallel Simulator RF Analysis in ADE Explorer User Guide Note : For more information on Cadence products and services for RF design, visit www.cadence.com . Click Subscribe to visit the Subscription box at the top of the page in which you can submit your email address to receive notifications about our latest RF Design posts. Jommy Thomas
Top 10 Reasons to Upgrade to Allegro 17.2-2016, No. 4: Industry-Leading Backdrilling Capabilities
The Evolution of Backdrilling
Over the past 15 years, routing high-speed interfaces running at 5Gbps or faster has become increasingly common in electronic designs. When a stub is left on a signal via, a high-speed signal changing layers has a huge impact on signal integrity. In short, these stubs cause impedance discontinuities and signal reflections that severely limit increases in the effective data rate. (Click to view larger image)
How do you eliminate the electrical stub?
Use a board-fabrication process called backdrilling, sometimes also referred to as controlled-depth drilling.
Plan and control the routing so that high-speed signals stay on specific routing layers, minimizing the impact of the stub.
Route the high-speed signals with blind/buried vias and microvia technology. This approach addresses some of the limitations and concerns, but it adds manufacturing cost, and the pins of press-fit connectors still need backdrilling to remove their stubs. (Click to view larger image)
In earlier years, the fabricator would use a list of critical nets to identify where backdrilling should be applied and make the appropriate adjustments. Introducing backdrilling into a design could be a management nightmare, requiring much closer cooperation with the fabricator. The fabricator would remove as much of the stub as possible on the specified high-speed signals, adjust the characteristics of each backdrill location for the larger backdrill diameter, and verify copper clearances to maintain design integrity.
To make the handoff of design data smoother, Allegro 15.7 had already laid a solid foundation for simplifying data processing on the manufacturing side. As a customer at the time, I was a member of the Allegro® PCB Designer 15.7 beta test team at the end of 2005, and I was delighted to witness and test Allegro's new backdrill solution. It took Allegro to the next level by letting designers identify the nets to be backdrilled and by analyzing and identifying backdrill locations based on component and pin properties. The locations to be backdrilled were included in a backdrill report, marked with special backdrill symbols, and NC Drill files were generated for production. Even with these improvements, manual steps were still needed to ensure consistency (supporting multiple padstacks at backdrill locations, manually creating backdrill keepouts, and allowing the fabricator to adjust backdrill sizes).
Over time, it became clear that future enhancements would improve this flow by providing the ability to analyze the design and adjust the characteristics of backdrill locations, while generating a complete manufacturing data package to streamline fabrication. Cadence worked with fabricators and customers to adapt the existing solution; in addition to removing most of the fabricator's post-processing steps, the tools were enhanced in several areas to support the backdrill flow. As a product engineer, I was able to shape these capabilities based on my own earlier experience as a customer and on feedback gathered from customers.
Padstacks in the package library support backdrill definitions:
Backdrill diameter with its own drill symbol
Enhanced backdrill entry pad and solder mask
Additional layer keepouts/clearances
Typical backdrill locations (Click to view larger image)
Manufacturing stub length - the stub length retained after backdrilling (advanced configuration). The remaining manufacturing stub length is measured down from the must-not-cut layer and represents the dielectric target backdrill depth. (Click to view larger image)
Backdrill analysis drives parameter-based pad updates on the design layers. (Click to view larger image)
An improved, design-analysis-based model lets you quickly define and check backdrill layer-pair rules:
Initialize: the deepest backdrill layer from the top and from the bottom
Analyze: minimize the electrical stub length or minimize the number of layer pairs (Click to view larger image)
Backdrill diameters carry special drill symbols that indicate backdrill direction and depth. Route keepouts are generated automatically from the clearances defined in the padstack, so special padstacks or keepouts no longer need to be created at backdrill locations. (Click to view larger image)
The Show Element command reports the backdrill data for pins and vias at backdrill locations. Based on the backdrill data defined in the padstack, the true backdrill sizes are now reported in the drill legend and in the manufacturing NC Drill files, so the fabricator no longer needs to adjust sizes based on the plated through-hole. Backdrill legends now report the must-not-cut layer, depth, and manufacturing stub information. Drill span details are drawn, and backdrill spans are now reported. All test points are identified during the backdrill process, so that no test points end up at backdrill locations and no extra drilling is added where test points are placed.
The enhanced backdrill solution addresses all of these pain points and removes the worries that came with introducing backdrilling. There are no additional non-recurring engineering (NRE) charges from the fabricator, and no added cost from adopting different via and stackup technologies. You can hand the fabricator a more complete manufacturing data package, including backdrill data in IPC-D-356 and IPC-2581, along with complete documentation to convey the backdrill intent.
Related video: Backdrill Enhancements https://youtu.be/zJrghEEIGZQ * Original content; please cite the source when reposting: https://community.cadence.com Related reading: Top 10 Reasons to Upgrade to Allegro 17.2-2016; The Latest Allegro Technology. Subscribe to the "PCB and IC Packaging: Design and Simulation Analysis" blog column, or scan the QR code to follow the "Cadence PCB and Package Design" WeChat official account for more great content! Contact us: spb_china@cadence.com
New Sigrity 3D Workbench Used in Designing and Optimizing Next Generation High-Speed Connectors
2018 is going to be remembered as the year of 3D for Sigrity. As part of Cadence's Sigrity 2018 release, we introduced the new Sigrity 3D Workbench technology, included as part of the Sigrity PowerSI® 3D EM Extraction Option (3DEM). This technology enables our users to import mechanical structures, such as cables and connectors, and merge them with the PCB so critical 3D structures that cross from the board to the connector can be modeled and optimized as one structure. When integrated into the Cadence Allegro® environment, this allows PCB design teams to optimize the high-speed interconnect of PCBs, IC packages, and connectors in the Sigrity tool and automatically implement the optimized PCB interconnect in Allegro without the need to redraw, providing a much more efficient and less error-prone solution compared to alternatives using third-party tools. Our new 3D Workbench technology was highlighted in a customer presentation that won the Outstanding Presentation Award in the IC Packaging and PCB Design track at this year's CDNLive Taiwan user conference, which featured over 40 presentations from six different technical tracks and had over 1000 attendees. The presentation, Design and Optimization for the Next Generation High Speed Connector, from Foxconn Industrial Internet focuses on optimizing the performance of a main PCB and an SFF-8654 high-speed connector for PCI-e Gen4 or SAS Gen4 applications using the new 3D Workbench technology. The Foxconn Industrial Internet presentation opens with a simulation methodology to optimize the SFF-8654 high-speed connector together with the main PCB. They next discuss frequency-domain S-parameter analysis and compare simulation flows, the number of adaptive meshing elements and iterations, and S-parameter accuracy between Sigrity 3D Workbench and a third-party 3D tool, showing practically identical results. After confirming the performance and accuracy of 3D Workbench, S-parameter and FEXT and NEXT simulations are then run in 3D Workbench for their different cases and conditions. Finally, time-domain TDR analysis and simulations are performed to complete the optimization process, giving an overall connector TDR differential impedance meeting the 85Ω ± 10% requirement. To review and download Foxconn Industrial Internet's presentation, click the image below. Thanks to Foxconn Industrial Internet for their outstanding presentation at this year's CDNLive Taiwan and to all the other CDNLive contributors. Also, check out our What's New in Sigrity web page to see the latest features and updates in our new Sigrity 2018 release. Have a great 3D year with Sigrity! Team Sigrity
↧
CDNLive India
CDNLive India took place last week. As usual, I made the long trip from California, nearly 30 hours door to door. There is always something remarkable on these long flights and this time it was the WiFi pricing. It is about 36% more ($30.00 vs $21.99) for WiFi on the 5-hour San Francisco to Newark leg than on the nearly 15-hour Newark to Delhi leg. CDNLive India is organized differently from the other multi-day CDNLives in Silicon Valley and EMEA. Areas of interest are divided between the two days, so nobody comes both days. This is partly because over 2,000 people attend over the two days, and if they all stayed for both days there wouldn't be anywhere in Bangalore large enough to accommodate everyone. As it was, we were turning people away at the door since CDNLive was full to capacity. Jaswinder Ahuja, the President of Cadence India, welcomed everyone both days, pointing out that it was another record year, in both the number of papers submitted for possible presentation and in the number of people attending. Another interesting aspect of nobody being there both days is that the opening keynote can be the same both days. Lip-Bu Tan delivered it on Thursday, and Anirudh Devgan on the Friday. It was the same presentation, but given the different backgrounds, the keynotes had different looks and feels. I think Madhavi will cover the keynote on The India Circuit blog.
Vinod Kariat
Vinod gave the Cadence technical keynote, titled AI Enabled Systems and the Analog Renaissance. Electronics in general and semiconductors in particular are at the centre of the next industrial revolution. The highest visibility area is probably autonomous driving (ADAS), which is clearly coming even though the precise timescale remains a little murky. Vinod also pointed out that there are probably some tough problems in Indian traffic that you don't have on US freeways, such as all those rickshaws in the above picture. Reliability is one of the biggest challenges in automotive, especially in the power and analog area. The target defect rate for automotive is zero. It turns out that 80-95% of field failures are in the analog portion. The high temperature for automotive is 170°C, which is a problem since many failures are caused by thermal overstress. In addition, aging of transistors is accelerated at higher temperatures, which makes it a challenge to meet the 15-20 year lifetime for automotive electronics. Automotive is a huge challenge since we are building very reliable vehicles out of inherently unreliable semiconductor processes. A FIT is one failure per billion hours of operation. For a car we are looking for less than 10, but a semiconductor process is closer to 500 FITs. During his talk, Vinod covered a lot of products that I have covered already this year. So I won't repeat everything here, just give you some links:
Virtuoso 2018, see Virtuoso 2018, a Fine Vintage
Legato Reliability Solution, see Legato: Smooth Reliability for Automobiles
Deep-learning driven analog design. See Cadence is MAGESTIC
Virtuoso RF, which I am writing about but it hasn't appeared yet
Quantus Smartview
Low-power verification flow with Virtuoso Power Manager
Liberate Trio, part of Cadence Cloud. See Liberate Trio: Characterization Suite in the Cloud
Subash Chandrar of Texas Instruments
The customer keynote on the first day was by Subash Chandrar of Texas Instruments, titled SoC Challenges and Opportunities in Automotive, Industrial, and IoT. Subash started off looking at the megatrends driving the industry.
Semiconductors are penetrating our daily lives in more and more ways, making the world healthier, safer, and more fun. Over the last 30 years the driving forces were compute centric, then mobile centric, then, today, data centric. He is much more optimistic than I am about IoT, even though he admits that it is very fragmented due to the differing requirements. I think the volumes for most products will be too low for this to be an SoC market, rather than board level (or maybe chiplet-level) integration. Texas Instruments is very focused these days on markets with a high analog content. That is how they manage to be the most profitable semiconductor company at the moment. Three focus areas, which fit that description, are automotive, industrial, and IoT. One area Subash called out in particular is what he called "wire replacement" in vehicles. The wiring harness in a modern car can weigh as much as 50kg (about 110 lbs) and that obviously has negative effects on performance and gas mileage. If many of those wires can be replaced with automotive Ethernet (on twisted pair) or even wireless, that is a big weight saving. Funnily enough he talked about many sensors in cars being connected by wires—but the first one he mentioned explicitly is tire pressure monitoring, which obviously cannot be done by wire (it would be a very twisted pair!). One big driver in industrial is predictive maintenance. Nobody wants a machine to go down unexpectedly; it is much better to plan it. The big ROI is extending the operating life of equipment, since most industrial equipment operates in a harsh and unforgiving environment. Another challenge in all of these areas is that there will not be one standard for wireless networks. There are already dozens and that seems unlikely to change since they all have special characteristics that are ideal for some applications. In automotive, we have bathtub curves that measure early mortality, and later after many years, aging. We need to get reliability down to a minuscule number. Your chance of being hit by lightning (in your life, I assume) is about 1 in 700,000. Automotive failure needs to be less likely than that. Near the end, Subash turned to design and EDA. Increasingly, when I go to a keynote like this, it sounds a little as if the presenter has been indoctrinated by our system design enablement strategy. Increasingly, everything needs to be handled initially at the system level (for example, thermal analysis) and then pushed down into the details. Many constraints on the design, such as EM, power, reliability, or thermal, need to consider not just the whole chip, but the package, the board, and increasingly connectors, wiring and other stuff. The tools need to work seamlessly together.
Other Sessions
The rest of the day was taken up with parallel tracks. Ones that I thought were especially interesting were INVECAS, who develop a lot of GlobalFoundries' IP. This is perhaps more significant as a result of the announcement last week that GF would focus entirely on FD-SOI (and some other specialty processes like RF). Mathworks (somehow, without my noticing, they renamed themselves from "The Mathworks") and Cadence gave a joint presentation on how you can use Matlab's advanced analysis tools to directly dig into the details of simulation results. Watch for a post in a week or two on these two presentations. And later this week, I will write about day 2. Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
↧
↧
Spectre/Meltdown & What It Means for Future Design 1
At HOT CHIPS, one of the "keynotes" was actually a panel of what I'll call industry luminaries. They were discussing the implications of vulnerabilities such as Spectre, Meltdown, and the recently announced Foreshadow. This is the most important discovery in computer architecture in the last twenty years or so, and will affect how processors are designed forever. Later in the conference, for example, Intel presented their next generation processor, Cascade Lake, and discussed some of the changes they have made as a result. Later in the session, Jon Masters said that Red Hat alone has spent over 10,000 hours on these issues. I am going to cover the panel in detail. Obviously, it affects processor architects the most. But it affects anyone who uses processors, such as software engineers or SoC designers. Everyone needs to be aware of the implications of this. One takeaway, if this is going to be more than you want to know, is that we don't know how to completely protect against this type of attack without reducing processor performance to a few percent (under 5%) of what it is today. If you want more details on Spectre and Meltdown, then a good place to start would be my post Spectre and Meltdown: An Update. Or if you want to hear about it from one of the sources, then Paul Kocher: Differential Power Analysis and Spectre. Or if you want to hear about it from one of the panelists, try Spectre with a Red Hat and Spectre with a Red Hat 2.
An Introduction to Speculative Execution
You can do whole advanced Masters-level courses on computer architecture that cover this, so in a few paragraphs, this is going to be the most basic of introductions. Moore's Law might be limping now, but over the last couple of decades, processor performance improved an enormous amount through a mixture of scaling and architectural innovation. For a decade it was improving at 45% per year. However, off-chip DRAM access did not speed up nearly the same amount. This meant that the processor could execute about 200 clock cycles in the time it took to do a DRAM access. The first solution was to add on-chip cache memory that was much faster. By keeping the frequently used instructions and data in the cache, those 200 cycles could be reduced to a lot fewer. Over time, we went to multi-level caches, with a mixture of small, very fast memories, and larger but not so fast memories. But for this introduction, we don't need to get into those details. We'll assume a fast on-chip cache, and slow off-chip DRAM. In round numbers, a cache access takes 0.5ns whereas a main memory access takes 100ns (hence the 200 cycle number). Most instructions and most data would come out of the fast cache, and so those 200 cycle delays were mostly avoided. But not all of them. Processor architects realized that the processor could do stuff while it was waiting since often many of the following instructions didn't depend on the value coming from memory, so the processor could get on and execute them anyway. This worked fine for every instruction except conditional branches. When the processor ran into a conditional branch, it could stop and wait for the values coming from memory to arrive, and then discover if the branch would be taken or not. Alternatively, it could take a guess as to whether the branch would be taken, and carry on executing instructions that didn't depend on the values it was awaiting from DRAM. This is known as branch prediction.
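Before going on, it is worth making that cache-versus-DRAM gap tangible, because it is the measurable signal behind everything that follows. Here is a minimal, x86-only sketch; it assumes a compiler with the rdtscp and clflush intrinsics (GCC or Clang), the numbers vary from machine to machine, and it only illustrates the timing difference, it is not exploit code:
```c
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtscp, _mm_clflush (x86 only) */

/* Time one read of *p in timestamp-counter ticks. */
static uint64_t time_read(volatile uint8_t *p)
{
    unsigned aux;
    uint64_t start = __rdtscp(&aux);
    (void)*p;                        /* the volatile read we are timing */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void)
{
    static uint8_t buf[4096];

    buf[0] = 1;                          /* touch the byte so it is cached */
    uint64_t hot = time_read(&buf[0]);

    _mm_clflush((void *)&buf[0]);        /* evict it from every cache level */
    uint64_t cold = time_read(&buf[0]);

    printf("cached read: %llu ticks, flushed read: %llu ticks\n",
           (unsigned long long)hot, (unsigned long long)cold);
    return 0;
}
```
On typical hardware the flushed read takes many times longer than the cached one, and that measurable difference is the side channel the attacks described next rely on.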
It is beyond the scope of this little explanation as to how branch prediction is implemented, but you win a lot by just following the rule "assume every branch does what it did last time it was executed." However, there was one big complication. What if the processor guessed wrong? That is why it is called speculative execution since it is guessing whether the branch would be taken, but also doing it in a way that it could clean up after itself if it guessed wrong. After a conditional branch, the instructions are marked as dependent on the branch. If eventually the processor determines that the branch was really taken, then the instructions are retired and the processor moves on. If it turns out that the branch prediction was wrong, then all the instructions that were done speculatively are squashed, and the processor backs up to the conditional branch and starts to execute down the correct branch. To give you an idea of how complex this can get, the most advanced processors might get over 200 instructions ahead, guessing that the branch at the end of a loop would be taken and running through the loop many times (before finally discovering that the loop actually ended many iterations ago, and having to sort out the mess). This is how all high-performance (so-called out-of-order or OoO) processors have been designed for about the last 20 years. From the programmer's point of view, the processor is executing the program in the order written. The way the processor is built, whether branches are predicted correctly or not, the results are exactly as if the instructions had been executed in order like the programmer imagines. Just faster. For 20 years, nobody saw any problem with any of this. But last year Spectre and Meltdown were discovered. People in the need-to-know groups who had to try and fix these problems knew about them last year. The rest of us found out in the first week of this year. For processor architects, it was not a Happy New Year. Meltdown is far easier to explain (and fix) so I'll give you a simplified overview of how it works. Let's say you want to read a byte of memory from the operating system that you shouldn't. You train the branch predictor so that it guesses wrong and the code I'm about to describe gets executed speculatively. Also, you select an area of memory that has never been used so it is not in the cache. Then you do the following: read a byte from the operating system memory, and then use that byte to pick one of 256 locations in the selected area of memory and read the value from there (it doesn't matter what the value is). The processor will soon discover it got the branch wrong and squash all this. But there is a tiny thing that is different. One of the 256 locations in that selected area of memory is now in the cache (hot), because we read its value. Even though the read was squashed, the cache line is still hot. Since accessing a value in cache is 0.5ns and from DRAM is 100ns, it is not that hard to check the timing of all 256 locations, only one of which will not require 100ns. So we know what was in the byte even though the read itself got squashed, so in a sense "we never read it." As Paul Kocher said (in the post I linked to above): These should have been found 15 years ago, not by me, in my spare time, since I quit my job and was at a loose end. This is going to be a decade-long slog.
The Problem
Before going any further, let me emphasize the problem here.
This is not a hardware bug in a single processor from a single manufacturer (I'll count Arm as a manufacturer here, although technically they license their designs to the people who actually do the manufacturing). This is a fundamental problem of the way in which processors are designed. Embarrassingly for all the people who work in the area, this is a weakness that has been hiding in plain sight for 15 to 20 years without a single person noticing (well, maybe the NSA and equivalents, who knows?). Even if you didn't understand my explanation of speculative execution, just take this one fact away. A cache memory access is 0.5ns, and a DRAM access is 100ns. Processor architects use every trick they can come up with to avoid DRAM access, and to find useful things to do during the long delays when they can't avoid it. If we took away these tricks, speculation and caches, then we would have a processor with under 5% of the performance of current processors. No smartphones, no cloud datacenters, and Windows 98 era laptops. "Party like it's 1999" doesn't sound so good in the processor space. To make things worse, this has arrived as Moore's Law is running out of steam (and processors have hit the power wall too). So we don't even have a 2X factor that we could lose, and win it back with the next node. General purpose processors are simply not getting faster since we've run out of tricks on the architecture side, and of Dennard scaling on the semiconductor side. I'm jumping ahead to the panel, but one thing Mark Hill pointed out is that these vulnerabilities are not "bugs" in the sense that the processor does not meet the spec. These processors all met their spec. The problem is more fundamental still: the way we specify architectures is wrong, since a correct implementation is vulnerable to these side channel attacks. In the aftermath of the discovery of Spectre and Meltdown, the immediate focus was on how to mitigate the problems with all the processors that were already out in the field. But the next step is to incorporate the knowledge of this type of attack into next-generation architectures. That was the focus of this keynote panel.
The Panel
There were 4 panelists at Hot Chips, chaired by Partha Ranganathan of Google. Each panelist gave a brief introduction, and then they got together as a panel and took questions from the audience.
John Hennessy, currently Chairman of Alphabet (Google), but one of the inventors of RISC (for which he just shared this year's Turing Award) and co-author of the standard texts on computer architecture (along with his co-Turing-Award-honoree Dave Patterson).
Paul Turner of Google. Google's Project Zero is one of the groups that discovered these vulnerabilities, and Paul was part of the group tasked with mitigation.
Jon Masters of Red Hat, the person responsible for fixing up Red Hat's Linux as well as possible.
Mark Hill of the University of Wisconsin at Madison and also on sabbatical at Google.
Tomorrow
Having tempted you with those names, I'll tell you what they actually said tomorrow. Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
↧
Automation Is the End of the World (Or Not)
“Oh no! Automation is going to cause massive job losses, causing the downfall of society itself!” shriek the pundits who look at employment figures. Now, this may be true. There’s no question that robots and automation have already caused a loss of jobs. But this is not the first time that innovation and automation have happened in human history. And with each jump, I would argue that the world has become all the better for it. And now for a very brief history lesson.
Hunter-Gatherers to Farmers in the Stone Age
To quote an article in National Geographic: Taking root around 12,000 years ago, agriculture triggered such a change in society and the way in which people lived that its development has been dubbed the “Neolithic Revolution.” Traditional hunter-gatherer lifestyles, followed by humans since their evolution, were swept aside in favor of permanent settlements and a reliable food supply. Out of agriculture, cities and civilizations grew, and because crops and animals could now be farmed to meet demand, the global population rocketed—from some five million people 10,000 years ago, to more than seven billion today.
A huge transition
What did those people in the Fertile Crescent do with all the extra time and resources that they had at their disposal when they no longer had to hunt and gather every day just to have enough calories to survive? They reproduced. No longer did every single person in the tribe have to contribute to the food stock, and the “automation” of making the earth provide sustenance instead of searching for it changed the work of every human. Settlements, towns, and cities began to appear. The main tools for farming were made of stones, wood, pottery, and biodegradable materials, and they worked for a good long time.
The Bronze and Iron Ages
Fast forward about 7000 years: with all the successes in farming, there had to be a way to keep track of all the food that was stored, traded, and consumed. There were people who kept this information in their heads, until — oh my goodness, someone came up with the idea of writing things down. The Sumerian cuneiform writing system and Egyptian hieroglyphs were the first to show up (that we know of) in about 3200 BC, originally to keep track of who owed whom what.
Egyptian Hieroglyphs
Also around this time, people figured out how to make tools out of bronze, and later, iron. It’s a lot easier to harvest grain with a metal scythe than with blades made of stone or wood. This is another innovation that defined an age and led to more growth, migration, and an explosion in population. Are you sensing a theme? Innovation is not the death knell of civilization; it is a boon.
Fast-Forwarding to Now
The relationship between innovation and the fall of the feudal system in Europe is pretty well documented: with the extra time created by innovation, the serfs and peasants got rid of an entire socio-economic system. A new way to grind flour? Now all those millers who did it by hand were out of a job. New kinds of animal husbandry? More food and wool and dairy and fertilizer, reducing the need for as many workers. Governmental and socio-economic systems evolved with the ages, leading to the rise of urban centers and more scientific discovery and engineering innovation. The Renaissance saw a resurgence in art and science, leading to the economic and social rise of Europe. The Age of Discovery saw not only the innovation of the technology required for long-distance nautical travel, but also advances in physics, chemistry, and more.
With scientific discovery also came developments in philosophy, religion, and art
The classic example of innovation changing the world is the Industrial Revolution, which brought about factories that pumped out textiles, automobiles, clothing, products of all kinds — that were previously made by artisans and workers by hand. Now, instead of weaving cloth by hand or assembling cars piece by piece, those artisans and workers were working in factories that created fabric or built cars on a production line. These products became more affordable, giving rise to a middle class, thus changing the world economy yet again. What happened to those artisans of yesteryear? For better or for worse, these artisans and workers now worked in factories. The rise of the computer speaks for itself. This happened within the living memory of many of you reading this, I think; or at least of your parents’ generation. My favorite example of the ramifications of the development of the computer was highlighted in the recent non-fiction book and film, Hidden Figures. Before calculators and computers, the space program relied on people, called “computers” — people who did the computing — to check and re-check the required calculations to get to the moon. With the rise of the “IBM”, those jobs were in jeopardy. But, of course, the IBM computer needed people who knew how to work the machines, so it was a logical shift that the people-computers at NASA flowed into those new positions, programming and maintaining the machines. In the film, these were the African-American women who had previously been performing calculations by hand. (Side note: in the early days of computers, I don’t think there was nearly the gender divide in computer programmers that there is now. I’m not sure what happened in the 80s to make that change, but that’s a topic for another day.)
The Industrial Revolution, broken down; we’re now transitioning to the fourth wave
With every innovation, there comes work that is suddenly easier to do, leaving the workers with the time and resources to do something else — whether it be making art, philosophizing, nation-building, making more offspring, or focusing on the next innovation to come. Yes, workers’ work changes because of the innovation. But without that innovation, the world would become a stagnant place. Besides, I would also argue that it goes against human nature to accept things as they are, rather than as they could be. What makes us think that the innovation of automation today will be any different from the rest of human history? I can’t wait to see what the next wave will bring. Yes, there may be some growing pains. Ultimately, though, I think it will be for the best. —Meera
↧
Spectre/Meltdown & What It Means for Future Design 2
I gave an introduction to speculative execution and the vulnerabilities that have come to light this year in yesterday's post Spectre/Meltdown & What It Means for Future Design 1. There were 4 panelists at Hot Chips, chaired by Partha Ranganathan of Google. Each panelist gave a brief introduction, and then they got together as a panel and took questions from the audience.
John Hennessy, currently Chairman of Alphabet (Google), but one of the inventors of RISC (for which he just shared this year's Turing Award).
Paul Turner of Google. Google's Project Zero is one of the groups that discovered these vulnerabilities, and Paul was part of the group tasked with mitigation.
Jon Masters of Red Hat, the person responsible for fixing up Red Hat Linux as well as possible.
Mark Hill of the University of Wisconsin at Madison and also on sabbatical at Google.
John Hennessy: The Era of Security
John kicked off the session pointing out how much the world has changed. There is a lot more personal information online (so we all care more about security). Cloud servers mean that strangers, and even people we might consider adversaries, are sharing the same hardware. Meanwhile the bad guys are getting badder: state actors and cybercriminals are getting more organized and technically adept. Although most attacks are software-based, hardware is now entering the picture. He gave a brief tutorial on how Spectre and Meltdown work (like mine yesterday). He also talked about NetSpectre, which I hadn't heard of, which allows you to exploit the Spectre v1 hole without running any code, breaking in from a remote machine. It's not a very effective attack, only leaking about 1 bit per minute, but the attack is completely remote. The big challenge is that we can't allow hardware flaws, no matter how much performance could be gained. But it is hard to fix the current flaws and the fixes may cost more than is gained by the hardware optimization. Even next-generation Intel processors probably won't fix Spectre v1 (the hardest of the vulnerabilities to address). His mea culpa: Lots of us missed this problem for about 10-15 years.
Paul Turner: The Project Zero Journey
Project Zero is an internal security team founded in 2014 with the goal of reducing the harm caused by attacks on the Internet, with a particular focus on "zero days", which are vulnerabilities that are not known about until the day (day 0) that an adversary attacks using them. Last year, Jann Horn, one of the researchers on Project Zero, discovered this new class of speculative vulnerabilities and, in Google, they became known as SpeckHammer (I think that is a play on speculative execution, and RowHammer, another hardware vulnerability in DRAMs, which is not today's topic). Paul talked about the numbers that I covered in my post, the Numbers Everyone Should Know. The CPU tries to hide the big number, the 100ns access to main memory, using caches and speculation. It is very effective, with a low number of cycles per instruction (much less than 1). The flaw in all of this is the assumption that mis-predicted branches have no side-effects. By the definition of the ISA, that is true. But we now know that this is not true when we look at the bigger picture. Paul ran through the variants of Spectre, and some of the approaches to mitigation. That's too much detail for this post. I'll just point out that his slide for "What about Spectre Variant 1" was blank.
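For reference, the pattern behind that blank slide is tiny. Below is a simplified rendering of the variant 1 (bounds check bypass) victim gadget, adapted from the example published in the public Spectre paper; the array names follow the paper, only the victim side is shown, and none of the attacker's timing recovery is included:
```c
#include <stdint.h>
#include <stddef.h>

/* Simplified Spectre variant 1 (bounds check bypass) victim pattern,
 * after the example in the public Spectre paper. Illustration only. */

uint8_t array1[16];          /* the secret to steal lives somewhere past the end of this */
uint8_t array2[256 * 512];   /* probe array: one cache line per possible byte value      */
size_t  array1_size = 16;

volatile uint8_t sink;       /* keeps the compiler from optimizing the reads away */

void victim_function(size_t x)
{
    if (x < array1_size) {
        /* If the branch predictor has been trained with in-bounds values of x,
         * an out-of-bounds x can be read speculatively here, and the dependent
         * load below pulls one line of array2 into the cache. Architecturally
         * the work is squashed, but the cache footprint remains, and timing
         * array2 afterwards (as described in yesterday's post) recovers
         * array1[x]. */
        sink = array2[array1[x] * 512];
    }
}
```
Everything interesting happens in the speculative window after a mistrained branch: the architectural results are discarded, but the cache footprint of the array2 access survives and can be measured.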
There is one attack that nobody has a clue how to prevent without giving up all the gains that come from speculation.
Jon Masters: Exploiting Modern μArchitectures: Software Implications
Next up was Jon Masters of Red Hat. One of the big problems, he said, was that hardware and software people don't talk. In the very old days, pre-IBM/360, there was a much greater understanding (and hardware was simpler). But in the ISA era, there was no clear contract between hardware and software. Programmers assumed sequential execution, which involved various assumptions that were never explicitly clarified. Then we built more layers on top. It is even worse today, since programming has become much more abstract (Python, Go, Ruby, etc) and many programmers don't even know what a stack or a branch is. Speculation was treated as a magic black box, and the gains were so impressive nobody looked under the hood much. The average programmer has no idea about speculation and out-of-order execution, or branch prediction. Harold Macmillan got re-elected as Britain's Prime Minister in the late 1950s with the catchphrase "You've never had it so good." Jon said something similar: We are too used to how good we have had it. Jon's summary:
The "us" vs "them" became so ingrained we forgot how to collaborate
Most programmers negatively care about hardware, which is seen as a boring commodity
Software architects and hardware microarchitects don't talk ahead of implementing new features, but instead build their view of the world and (maybe) reconcile it afterward
Previous vertical system model gave way to separate hw/sw companies
Hardware folks design processors (and interconnects, and other platform pieces)
Platform-level capability was gradually eroded from outside processor vendors
The focus on security has actually been a positive from this perspective
Renaissance in computer architecture brings us a new hope
Increasing need to understand a vertical stack from hardware to software
Focus on security has proven the need to understand how hardware works
Tomorrow
Tomorrow, I'll wrap up this important and fascinating session with Mark's presentation, and then the discussion that followed. Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
↧