Cadence India organized the 10th edition of the Cadence University Program's flagship initiative, the Cadence Design Contest, in 2015. Launched in 2006, the contest gives engineering students an opportunity to showcase their talent in electronic design. It is held in two categories, Master's and Bachelor's, and is open to all colleges enrolled in the Cadence India University Program. A winner and a runner-up are selected from each category.

This year, the contest attracted 121 abstracts from the best electronic design talent in academia, with 41 colleges from across India participating: 76 abstracts in the Bachelor's category and 45 in the Master's. Projects are judged by an expert committee of Cadence and industry experts on criteria including inventiveness, complexity, feasibility, breadth of design, effective tool usage, and presentation. The committee was presided over by Mr. Padmanabhan, Founder, InnovationHub Technologies; Mr. Mrinal Das, V.P. Engineering, Sankalp Semiconductor; and, from Cadence, Dr. S. Rajesh, AE Director, and Mr. Chayan Majumder, Sr. Principal Product Engineer. The judges were impressed by the design talent on display and had a tough time choosing the best in each category.

The winner in the Master's category was Aditya Chowdary, a student of the Indian Institute of Science in Bangalore, under the mentorship of Prof. Gaurab Banerjee. His topic was "Design of X-Band FMCW Radar Transceiver in 130nm CMOS Technology", with an application to systems that can help stop speeding vehicles as they approach an obstacle. The winner in the Bachelor's category was the team of Deepak Kumar Tripathy and Gyana Ranjan Panigrahy from the National Institute of Science and Technology in Berhampur, mentored by Prof. (Dr.) Ajit Kumar Panda, with the topic "Automated Parking System Design using Cadence Solutions".
Not only does the contest give students an opportunity to showcase their innovation and design talent, it is also a great platform for interacting with industry experts and gaining recognition from the industry. The fact that the winners each went home with an iPad Mini, and the mentors with an iPad Air, was the icing on the cake.

Anton Klotz
10th Cadence Design Contest 2015 Successfully Organized in India
Applications Down to Transistors: System Design Enablement
Last year Dan Nenni and I wrote a book on the semiconductor industry through the ages called Fabless. Actually, I did most of the writing, and I think Dan thought he had it easy: he just had to get contributed sections from all the companies in the industry. Everyone was eager to do it but nobody was eager to make it a high priority, so getting those chapters was harder work than writing my own part. My sort-of-predecessor here at Cadence, Richard Goering, wrote the Cadence section. The subtitle of the book was The Transformation of the Semiconductor Industry. The link above will take you to Amazon but, unless you really really want to pay for a printed copy (go for it, I get two bucks or something), you can get a free PDF version on Semiwiki. You can also get a free PDF version of my earlier book, EDAgraffiti, at the same link (the book that Wally Rhines told me was the best book on EDA, except I pointed out that it is probably the only book on EDA). If there is one theme that runs through both books, it is that economics drives the semiconductor industry. Moore's Law is a law about economics as much as a law about technology. As the economics change, the structure of the industry changes to reflect that. We are in the midst of another change, I believe. The way system design is done changes roughly every ten years. Since the main driver of these transitions is changes in the economics of semiconductor manufacturing, that is five process nodes or so. The boundary between what is done in the system companies and what is done in the semiconductor companies moves. In the 1970s, all the knowledge about how to design semiconductors was in the semiconductor companies. This was the era of the "tall thin man", people who understood everything from process technology to layout and simulation. But process technology moved on.
The publication of Mead and Conway's seminal book Introduction to VLSI Design created a cohort of designers who didn't understand process technology in depth, and processes got so complex that they had their own specialists in what was known as TD, technology development, who knew nothing about design. Computer scientists suddenly learning how to design chips rapidly produced first primitive EDA tools and eventually an entire industry. The dominant methodology for system design was ASIC (as an aside, at VLSI Technology we tried to call it CSIC, since designs are actually specific to customers, not applications, but ASIC was a lot easier to pronounce). The part of the design close to the system, front-end design, was done in the system company. The part of the design closest to the process, physical design, was done at the semiconductor company. Netlists went one way and back-annotation went the other. The next step was that the system companies resented being dependent on the semiconductor companies and wanted more control of their destiny. They wanted to go all the way to layout themselves. This was partially a result of the EDA industry maturing enough that system companies could get the tools, and of the IP business starting to come into existence, supplying basic silicon structures such as standard cells and memories. But especially important were the beginnings of standalone foundries. This style of business was known as COT, or customer-owned tooling. The word tooling is semiconductor jargon for the masks and, in ASIC, the owner of those was the semiconductor company, not the customer. With the beginnings of foundries and the fabless ecosystem, the focus of the book I mentioned at the beginning, the customer had complete control. But semiconductor design became more and more difficult and gradually system companies, especially the cell-phone companies experiencing explosive growth, let design move back to the semiconductor companies.
For example, Nokia even sold a semiconductor design team to what is now STMicroelectronics. This worked well for a while but, in the last few years, system companies realized that they couldn't have differentiated products if they just wrote software to run on the same semiconductor products also available to their competitors. The entire system needed to be designed concurrently, all the way from application software down to the transistors. Cadence calls this application-driven development process System Design Enablement (SDE). This is a big change. No longer do system companies just do software, and semiconductor companies just do semiconductors. Changes are taking place in both directions, with semiconductor companies creating entire software stacks. One reason is that it is too difficult to verify the silicon without the software. GPU companies cannot sell chips without the graphics software running right. Nobody in their right mind is going to tape out a chip for an Android phone without first booting the operating system and probably more. But booting Android is measured in billions of simulation vectors. This is one of the reasons that emulation is a fast-growing part of the EDA market. Emulators are expensive. No marketing slide from Cadence (or its competitors) ever says that directly. But they are also the cheapest verification vectors you can buy, a sort of quantity discount, provided you need trillions of vectors (or what is next, quadrillions?), and increasingly you do. I like to say that the Palladium Z1 enterprise emulation platform is an enterprise server farm in a rack, only faster. It's not totally accurate (tiny simulations that run for only a minute are not worth the hassle), but it's close. But even so, system companies do not want to depend on getting everything from a semiconductor company that is also available to their competitors.
As a result, system companies have been moving deep into semiconductor design, hiring semiconductor teams so that the entire system can be optimized together to produce a software/silicon stack not available to their competitors. It is notable, I think, that all the market leaders in the smartphone industry design their own application processors in-house. Me-too is no longer good enough. So system companies are becoming more like semiconductor companies, and vice versa. Everyone needs to design from application software down to the silicon. Irrelevant P.S.: Rereading this, I just noticed that the spellchecker in our CMS doesn't recognize the word "enablement" and proposes to make it "disablement", which is not quite the same thing at all! At least it didn't change it automatically. Another suggestion is "ennoblement". Arise, Sir Chip.
Face Detection, Virtual Surround Sound, and USB Type-C Demos at CES 2016
Going to the Consumer Electronics Show (CES) January 6-9? Carve some time out of your schedule to meet with Cadence's technical experts, who will be on hand to share a variety of demos with you:

- Dolby Audio Premium on the Tensilica HiFi DSP
- Audio Weaver from DSP Concepts for Tensilica HiFi DSPs
- Realtek sensor fusion for the Tensilica Fusion DSP
- Face detection using the Tensilica Vision P5 DSP
- People detection using the Tensilica Vision P5 DSP
- IP subsystem for USB Type-C and USB Power Delivery

The Cadence team will be at the Las Vegas Convention Center, South Hall 2, Meeting Place, Suite MP25677. Book an appointment by sending an email to events@cadence.com. To learn more about these technologies, take a look at the following resources:

- Tensilica Fusion DSP white paper
- Tensilica Vision P5 white paper
- USB Type-C IP subsystem white paper

To whet your appetite a little, here's a block diagram depicting the IP subsystem for USB Type-C:

Christine Young
Cadence Tech Days at ITMO and MIET
Cadence Academic Network organizes Tech Days in Russia to promote leading-edge technologies and methodologies at universities and to foster contacts with aligned local companies. We know that students either already work at these companies, or will be recruited by them once they have completed their studies. By presenting the latest state-of-the-art tools and flows, we enable them to master these new design capabilities at their future workplace. This time the Tech Days were held at two Russian universities: ITMO, one of the leading higher education and research institutions, specializing in information technology, optical design, and engineering; and MIET, an advanced university training professionals in microelectronics, nanotechnology in electronics, information and telecommunication technologies, fundamental sciences, and more. The Tech Days were split into two parts. The first part was dedicated to Cadence solutions presented by Cadence employees. During the second part, several Cadence customers presented their experience using Cadence tools. The Cadence team consisted of Jens Stellmacher, Fedor Merkelov, Denis Orlov, and Anton Borovich. The following agenda was presented:

- "Cadence: Virtual Model Prototyping" and "Cadence: High-Level Synthesis" (Jens Stellmacher)
- "Cadence new software: Genus - latest solutions in the field of digital synthesis" (Fedor Merkelov)
- "Cadence new software: Innovus - modern level of digital device design" (Denis Orlov)
- "Experience of using Cadence software" (Yuri Zavalin, Deputy Director at MRI Progress)
- "Experience of using Cadence Incisive to create and debug SoC system-level tests" (Fedor Putrya, Group Leader, Verification team, at ELVEES)

Anton Klotz
Why Agile Software Methodologies Can Improve the Chip Design Process
UC Berkeley Professor Borivoje Nikolic sees agile software methodologies as an answer to infusing the chip design process with greater efficiency. "Twenty years ago, technology people had fun making fun of ITRS predictions," said Nikolic during a keynote talk at Cadence's Mixed-Signal Technology Summit in October in San Jose. "Nowadays, they have a hard time making the cell as small as predicted. As a result, we have lost one technology generation of scaling over the past couple of years."

UC Berkeley Professor Bora Nikolic addresses an audience at Cadence's Mixed-Signal Summit in October.

Scaling is slowing down for a few reasons. For one thing, it's hard to scale SRAM, which occupies half of a typical die, Nikolic noted. While some might see the end of scaling as a problem, Nikolic sees an opportunity to streamline the development process for complex SoCs, and to make these chips more energy efficient to continue generating performance gains.

Performance Limitations Emerge

Transistor scaling led to performance increases over time until the early 2000s. That's when engineers hit power limits and frequency flattened. They could no longer generate more performance in traditional ways. So engineers began adding more cores in parallel, but today, says Nikolic, we are seeing saturation. Parallelism, he said, was a one-time gain. What can be done to address the performance limitations? There are energy-efficiency gains from using simpler cores or running more cores at lower Vdd/frequency. "But we can't have simpler general-purpose microarchitectures, and there are limits to supply scaling, to how low we can go with a supply, and to how efficient a core can be," said Nikolic. ASICs were considered an answer, but with NREs up to $100M, long development cycles requiring multiple revisions, and low reuse, the number of ASIC starts has fallen, noted Nikolic.
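Nikolic's point that parallelism was a one-time gain is essentially Amdahl's law: any serial fraction of a workload caps the achievable speedup no matter how many cores are added. A quick sketch in Python (the 5% serial fraction is an illustrative assumption, not a number from the talk):

```python
def amdahl_speedup(cores, serial_fraction):
    """Speedup on `cores` cores when `serial_fraction` of the work
    cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# With only 5% serial work, speedup saturates below 20x no matter
# how many cores you add -- parallelism is a one-time gain.
for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(n, 0.05), 1))
```

Even a thousand cores deliver less than a 20x speedup under this assumption, which is why the focus has shifted to energy efficiency rather than core count.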
Enhancing Energy Efficiency of Designs

At the UC Berkeley engineering department, researchers have tried to find commonalities among many application domains. Explained Nikolic, "There's a certain set of motifs, relatively limited, that need to be implemented efficiently for these application domains to be efficient." The department has identified 13 motifs, including finite state machines, circuits, graphical models, and matrix manipulations. Specialized processors would implement these motifs, and mapping software would map applications to a particular engine. Nikolic noted that this is an area under development.

Bringing Agile Design Techniques to Hardware Development

At Berkeley, Nikolic and his colleagues are promoting the idea of agile design, taking elements of an approach typically used in software development and applying them to the hardware world. Nikolic believes that chip design can become agile by borrowing these characteristics from software development:

- Design a series of prototypes with small teams
- Use higher-level descriptions with modern languages
- Design generators, not instances, which allows agile validation and enables reuse
- Treat open source as a friend, particularly when it comes to "standard" chip components
- Use rapid design flows

Consider a typical SoC as an example. Nikolic explained of the agile process, "Let's abstract the chip into a small set of blocks that are as simple as can be to validate the interfaces first and then build the smallest chip that we can. We can use software validation, but we can also spin that kind of chip and fabricate it if we like. Every block would be built with a generator. The idea here is to validate all the interfaces of a tiny chip, validate the generators, then scale up the design using the generators." During his keynote talk, Nikolic also shared some examples of tools developed by the Berkeley engineering department to facilitate agile chip design.
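Before looking at those tools, the "design generators, not instances" idea is worth making concrete. Below is a deliberately simplified Python stand-in (illustrative only, not Berkeley's actual tooling): a generator is a parameterized program that emits a whole family of design instances, so you validate the generator once on a tiny instance and then scale up.

```python
def adder_generator(width):
    """Emit Verilog for an adder of the given bit width.
    One parameterized generator yields a whole family of
    concrete design instances."""
    return (
        f"module adder_{width}(\n"
        f"  input  [{width - 1}:0] a, b,\n"
        f"  output [{width}:0] sum);\n"
        f"  assign sum = a + b;\n"
        f"endmodule\n"
    )

# Validate interfaces on a tiny instance, then scale up:
print(adder_generator(4))
print(adder_generator(64))
```

Berkeley's own tools embody this pattern at much larger scale.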
One is Chisel, an open-source hardware construction language embedded in Scala that features functional and object-oriented programming and enables construction of generators rather than instances. Another is the Berkeley Analog Generator (BAG), which codifies design intent to simplify the creation of analog/mixed-signal (AMS) circuit generators. "Efficiency is achieved through specialization, and this specialization is the value that customers perceive," said Nikolic. "That's why they buy the chips."

Christine Young
Rob Aitken of ARM Research on System Design
I wrote yesterday of how there is a transition going on as system companies discover that they need to do their own semiconductor design if they are to have products that are differentiated from their competition. To control their destiny, they need to deal with everything from application software to transistors. At a presentation today, Rob Aitken of ARM Research made an almost identical point (although he went all the way down to process, which is probably a step too far for most system companies, who have to live with what they get from their foundry). Integration is the key, but not all integration approaches are equal. Architecture becomes a question of integrating everything in the right way. One of the major drivers of this need is that everything has a power envelope, and there are different tradeoffs depending on the rest of the system architecture. Rob had a very interesting slide showing what you could use 100 picojoules for. He made the very real point that everything costs energy, and the whole system has to be optimized depending on what you want to achieve. Computation takes energy. Access to memory takes energy. Transmitting data takes energy. And obviously driving an electric car takes energy, although it is interesting just how much is needed: it takes a lot of femtojoules to go a hundred miles. As silicon has scaled, the type of optimization possible has changed. In the days of single-core processors, the main constraint was fitting everything on the chip, and competition was about frequency. Next we entered the era of multi-core chips, where the key constraint was power and the question was how much throughput was possible. Today, we have both multi-core processors and lots of specialized offload processors such as GPUs and audio processors. The measure now is really throughput per joule; otherwise thermal effects mean that we face the problem of "dark silicon", where we cannot fire up the whole chip at the same time.
The big challenge is that there is an insatiable demand for more of everything. This is especially the case in the smartphone market, where bandwidth and performance go up a lot every year. Since the battery is not very large, this is one of the most overconstrained design areas. But even in datacenters, which have huge amounts of power, or automotive, where batteries are more forgiving, power still drives almost everything. Rob didn't use the phrase System Design Enablement, but clearly he is talking about the same thing: optimizing the entire system concurrently rather than trying to build systems out of individually optimized building blocks, which leads to a suboptimal global solution.
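A back-of-the-envelope calculation shows why the per-operation energy budget dominates smartphone design. Assuming a typical battery capacity of about 10 Wh (my illustrative figure, not one from Rob's talk), a 100-picojoule operation budget allows:

```python
battery_wh = 10.0                      # assumed typical smartphone battery
battery_joules = battery_wh * 3600.0   # 1 Wh = 3600 J
op_energy_j = 100e-12                  # 100 pJ per operation
ops_per_charge = battery_joules / op_energy_j
print(f"{ops_per_charge:.1e} operations per charge")
```

That is about 3.6e14 such operations per charge; spread over a day of use, it is only a few billion 100pJ operations per second, which is why every picojoule spent on computation, memory access, and data transmission matters.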
Using Constraints Generation When Designing Power-Constrained SoCs
If you’re designing SoCs, power is no doubt one of your top concerns: power scheduling, meeting power integrity targets, managing voltage drop, and other related challenges. While the solutions aren’t simple, there are emerging techniques that offer some promise. One such technique, according to University of Toronto Professor Farid N. Najm, is constraints generation, a rich area of study involving new ways of thinking about the power grid and power delivery. Najm, who chairs the university’s Department of Electrical and Computer Engineering, presented an overview of constraints generation and other power-management techniques during his talk on “Managing Design Challenges for Power-Constrained SoCs” at Cadence’s San Jose headquarters in August. “Power is a first-order concern, like timing,” said Najm, whose current research is focused on power grid verification and optimization and on managing the impact of process and environmental variations.

Why Are We Overdesigning the Power Grid?

Sharing an overview of his current research, Najm provided some background on the power landscape today. Nowadays, the power grid topology covers all levels of the metal stack, with some 500 million logic cells and their current sources, he noted. In a given chip, many blocks will have their own separate, gated power supply, occupying a certain number of metal layers and connected at the top by the global grid. In every layer, the grid is mostly a regular mesh. Many areas of study focus on power grid verification for voltage integrity, along with grid reliability and power scheduling. Power scheduling accounts for the workload that you can run on an SoC without violating power integrity targets.
Looking ahead, noted Najm, “We may be able to develop a query engine so that the power controller can ask: Can I bring this up before I shut this down, or will there be a signal integrity problem?” In this landscape, the design challenges for power integrity revolve around keeping the power supply regulated, said Najm. Engineers need to watch out for things like voltage drop at the bottom layers and electromigration. “We need grid verification—early, incrementally, and at signoff. The catch is, you don’t know what the circuit is doing,” noted Najm. How are engineers addressing these challenges? Simulation is being used for specific scenarios, and there are tools for vectorless verification. However, Najm noted, both options offer limited coverage and optimistic results because they are simulation based. “The engineering solution from designers is to overdesign,” Najm said. “There’s a lot of pain now on the routing side. Designs take longer to implement because of limited silicon real estate due to overdesign on the grid.”

How Constraints-Based Verification Can Help

Said Najm: “There are no tools today to qualify the grid. Is it any good? Can it be improved? How much current does it tolerate? There’s no way to tell how much total current the early grid supports without causing power integrity problems, no way to tell what chip workload patterns are allowed by the candidate grid.” The dearth of viable solutions has led Najm to constraints-based verification. As Najm explained, in this approach you generate circuit current constraints that, if satisfied by the underlying circuitry, guarantee power grid safety. This approach would:

- Encapsulate useful information about the grid
- Provide power budgets to drive the design process
- Allow local checks for block compliance with grid safety constraints
- Provide constraints for power scheduling

Power scheduling presents a potential killer app for constraints generation.
The methodology could, for example, give a chip’s power controller a way to check whether a candidate combination of blocks is safe to turn on, or whether this would violate grid voltage targets. “Constraints generation is possible and practical, and is a rich area of study that was previously unexplored,” said Najm. “It provides quality metrics for the power grid and a rigorous approach for early grid design and planning.”

Christine Young
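As a toy sketch of the kind of check Najm describes for the power controller (hypothetical code, not from the talk): the constraints generated from the grid model are linear inequalities on block currents, and a candidate power-on combination is safe if it satisfies all of them.

```python
def safe_to_enable(block_currents, constraints):
    """Check a candidate set of block currents against generated
    grid constraints. Each constraint (coeffs, bound) encodes
    sum(coeffs[i] * I[i]) <= bound."""
    return all(
        sum(c * i for c, i in zip(coeffs, block_currents)) <= bound
        for coeffs, bound in constraints
    )

# Illustrative constraints: one global current budget plus one local one.
constraints = [
    ((1.0, 1.0), 2.5),   # I0 + I1 <= 2.5 A (global budget)
    ((1.0, 0.0), 1.5),   # I0      <= 1.5 A (local region budget)
]
print(safe_to_enable((1.2, 1.0), constraints))  # True
print(safe_to_enable((1.6, 0.5), constraints))  # False: local budget exceeded
```

The appeal of the approach is that such checks are cheap and local, so a power controller could evaluate them at runtime without re-simulating the grid.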
Cadence Academic Network Presents at Khalifa Semiconductor Research Center
On Nov. 18, Dr. Patrick Haspel presented at the Khalifa Semiconductor Research Center (KSRC) in the United Arab Emirates. The “Open House” event, organized by Dr. Mohammed Al-Mulla and Prof. Mohammed Ismail, the Center Director, was intended to introduce KSRC to other academic institutions in the region. Besides Cadence's Patrick Haspel, talks were given by Prof. Ibrahim Elfadel from the MASDAR Institute for Science and Technology and Farris Alhorr from National Instruments. In the afternoon session, KSRC opened its labs to the visitors and presented its achievements in the fields of biomedical and RF engineering. Khalifa University is part of the ACE4S consortium, which consists of the leading UAE universities in the microelectronics area, namely Khalifa University, MASDAR Institute, United Arab Emirates University, New York University Abu Dhabi, and American University of Sharjah. The Cadence Academic Network and KSRC have a long-term partnership, one of the strongest in the Gulf region. Besides the tools required for research, Cadence provides tapeout support and methodology services to KSRC.

Anton Klotz
Use the Integrated Flow with US
A couple of years ago, it was clear that the Cadence implementation flow required a from-the-ground-up re-creation. Nobody likes to say their old tools were not as good as they could be, but that is obviously the case when the new ones are better. And in this case they are much better. The reality is that for the chips of that era they were fine. But chips continued to get larger, the number of corners required for analysis exploded, and power had become a dominant requirement. Another big change was that design teams had access to large farms of servers and wanted to be able to leverage them to speed up working on these next-generation chips. A lot of things had changed, and so a completely new approach was required: massively parallel, common engines, and re-written core engines for the modern era. Probably the biggest issue was that historically different tools used "estimates" of how the following tools in the flow would behave. After all, a good placer during synthesis is one that correlates with the actual placement. Good timing in physical design was a good estimate of what timing closure needed. This approach worked acceptably on smaller designs at non-leading-edge nodes, but not for the most demanding designs. The reality is that a good placement during synthesis is the one that uses the identical engine, and good timing during physical design can only be achieved with the actual timing closure engine. So the engineering team decided to bite the bullet and create a family of unified engines that would be used throughout the flow:

- Unified placement
- Best-in-class power, performance, and area (PPA) optimization
- Unified timing, power, and extraction
- Unified clock tree synthesis (CTS) and global routing
- A common data model
- And a zillion more features; notably, the flow is being used for 10nm and fully supports multiple patterning and FinFETs

This approach ensures the closest possible correlation between the different stages as the design progresses.
So the big changes were to build unified engines, make everything massively parallel, and create new core engines for best-in-class PPA at the end, the only QoR that really counts. The first of the tools to be released was the Tempus Timing Signoff Solution for static timing signoff, then the Voltus IC Power Integrity Solution for EM/IR analysis, and the Quantus QRC Extraction Solution. Next was the main Innovus Implementation System, with new versions of the GigaPlace Engine and GigaOpt Optimizer. Just before DAC, we announced the Genus Synthesis Solution and finally the Joules RTL Power Solution (somehow an "le" slipped into the middle of the "us") for RTL-level power analysis. The result is:

- 10-20% better PPA
- Up to 10X turnaround and capacity gain
- Full flow correlation leading to design convergence
- Reduced iterations, and so earlier signoff

The main flow really divides into two parts: Genus and Innovus, the implementation flow, and the suite of signoff tools that work with them. All the engines are common across both groups. There is also Joules for RTL power analysis, which doesn't quite fit into this taxonomy. The benchmark above is a 5M-instance 1GHz GPU. It shows not just how good the tools are individually, but that the full benefits of Innovus come only when Genus is used for synthesis. The diagram above shows what is under the hood in each of the signoff tools. But don't forget that the entire flow is linked by the common engines and data. Of course, it goes without saying that everything is color aware (for multiple patterning), supports FinFETs, and is already being used for designs at 10nm. It isn't just working from a technology point of view; it is working for getting better designs into real products. For example, of the four benchmarks in the chart above, two teams immediately converted to Cadence to get the improvements in reality, not just in an evaluation. Many, if not most, designs these days have mixed-signal components.
The "US" digital flow is also linked to Cadence's Virtuoso custom layout environment through the OpenAccess database. Funnily enough, this is something I launched back in about 2001, as a project we called SuperChip. I thought it would take three years, but I think in the end it took more like a decade. At the time, we combined digital and custom/analog engineering into one organization and I took over marketing for both product lines. I have run engineering organizations and I know how difficult it is to pull together projects like this rewrite, where there are so many moving parts. So buy the team a beer. That would have to be that famous stout, Guinus. The annual Cadence Implementation Flow Summit is tomorrow, December 10. I'll be your host for the day. I hope to see you there. Registration is here, although it will close later in the day so we can print badges. You can still register on site, though.
Whiteboard Wednesdays - Implementation of Multi-Link, Multi-Protocol PHY
First Cadence Academic Network Workshop in Israel
On October 27, the Cadence Academic Network organized the 1st Cadence Academic Workshop in Israel. More than 20 professors, as well as PhD and Master's students, from five major Israeli universities attended the workshop at Bar-Ilan University. The first part was dedicated to the Cadence Academic Network and what it offers to academia; the second part covered the Innovus and Tempus digital implementation and timing signoff flow, which Cadence is introducing to universities, since we think students should have access to the latest industry-standard design software. Cadence engineers Shmuel Zagury and Maxim Krakov delivered the Innovus and Tempus part, which was very warmly received by the audience. At CDNLive Israel, two academic papers were presented, one from Bar-Ilan University and one from Tel Aviv University. Jenia Elkind from Tel Aviv University received the Best Paper Award in the Custom IC track, which underlines the high quality of academic research and education in Israel.

Anton Klotz
EUV Might Really Happen
I have been a skeptic about whether EUV was going to work. Just in case you have no idea what I'm talking about, EUV stands for extreme ultraviolet. For what seems like forever, we have been using 193nm-wavelength light for lithography, eventually adding immersion, so it is often called 193i. At a 13.5nm wavelength, EUV would at least not need to be double patterned at 7nm. At one of the short courses on 5nm design at IEDM, one of the speakers was Anthony Yen of TSMC. He gave a tutorial on lithography, covering what terms like "numerical aperture" really mean and how you can prove the resolution formula. But for me the most interesting part was at the end, when he summarized TSMC's current experience with EUV. I think it was a lightly updated version of his talk at SEMICON Taiwan a couple of months ago. He started with a slide from 1999 looking at future possibilities for lithography. The only one left alive is EUV; the entire industry is all-in, as they say in TV poker. EUV has four big outstanding problems, and Anthony went over them:

- Source power (tied up closely with throughput)
- Defect-free masks
- The pellicle, or rather the lack of one
- Availability (tied up closely with cost)

The light source for EUV is produced by what seems, when you first hear about it, to have been designed by Rube Goldberg. Droplets of tin, 50,000 of them per second, are generated and fall. A laser hits each one twice: once to shape the droplet of tin so that the next phase will be more effective, and then with a powerful blast that actually produces the light. Oh, and I forgot to mention, EUV is absorbed by almost everything, so the entire system has to be in a full vacuum. The problem with the light source has been getting the power up enough. It is generally thought that it needs to be 250W to be acceptable in volume production. Things have been going well in this area, as the chart below shows.
The red triangles are results TSMC have achieved with the EUV stepper they have in Taiwan. The blue are results that ASML in the Netherlands have achieved with their generator. I said EUV is absorbed by everything. That includes lenses, so EUV has to have reflective optics. But EUV would be absorbed by the kind of mirror in your bathroom. Instead, the mirrors have to be built up with alternating layers of silicon and molybdenum and rely on a phenomenon called Bragg reflection. Only about 60% of the light is reflected, so heat is an issue (the first mirror, for example, is absorbing about 40% of 250W). Also, since there are six mirrors (including the mask), only a few percent of the generated light actually hits the photoresist. The mask is a similar type of mirror, but patterned, obviously (see the diagram to the right). One issue with masks is that there is not yet a means to make them defect free. The number of defects has steadily fallen, and is now around 20. If the defects can all be identified, and the mask-making equipment can position accurately enough, then it is hoped that it will be possible to hide the defects under areas of the mask where reflection is not required. But the biggest problem with the mask system at present is the lack of a pellicle. On a normal 193i mask, there is a cover on the mask so that if any contamination lands on it, it is not in the focal plane. EUV masks need the same thing; otherwise a contaminant would sit on the mask, wrecking wafer after wafer until the mask was cleaned. Worse, it might not show up when the wafers were inspected, so a lot more useless processing would also be wasted. But finding a material for the pellicle is hard. Remember, EUV is absorbed by almost everything. The only material so far that has been considered a possibility is a polysilicon membrane. TSMC have made membranes large enough to cover the whole mask. But testing them has been disappointing. 
The problem is that with the membrane 55nm thick, only 85% of the light gets through, which is considered unacceptable. They will need to make the pellicle thinner, but at some point it must start to get too close to the focal plane to do its job. But when it all works, it works well. This is an especially difficult sort of pattern for lithography. On the left is the design double patterned using 193i; on the right is EUV. The tiny fingers are the stress point, where you can see the double patterning just doesn't have powerful enough OPC to do the job. So what about availability? That has always been the big worry: when it works, it works well, but the machine is very unreliable. TSMC has been running wafers and processed 15,000 wafers in four weeks. Tool availability was just over 70%. So is TSMC going to introduce EUV at 7nm? Anthony said he wasn't going to commit to that. "When it is ready we will introduce it."
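To put some numbers on the tutorial, the resolution formula Anthony derived (the Rayleigh criterion, CD = k1 * wavelength / NA) and the mirror losses above can be sketched in a few lines. The k1 and NA values here are representative assumptions for illustration, not TSMC's actual settings:

```python
def min_feature(k1, wavelength_nm, numerical_aperture):
    """Rayleigh criterion: smallest printable feature (half-pitch), in nm."""
    return k1 * wavelength_nm / numerical_aperture

# Representative (assumed) values: 193i immersion vs. first-generation EUV.
cd_193i = min_feature(k1=0.30, wavelength_nm=193.0, numerical_aperture=1.35)
cd_euv = min_feature(k1=0.40, wavelength_nm=13.5, numerical_aperture=0.33)

# Six Bragg mirrors (including the mask) at ~60% reflectivity each means
# only a few percent of the source light ever reaches the photoresist.
throughput = 0.60 ** 6
```

With these assumed values, 193i bottoms out around 40nm half-pitch (hence the multiple patterning below that), single-exposure EUV resolves around 16nm, and six 60%-reflective mirrors pass only about 4.7% of the source light, which is why source power matters so much.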
IEDM: the International Electron Devices Meeting
IEDM is a meeting held annually since 1955. Historically, it has alternated between Washington DC and San Francisco every other year. However, this year was the last time the meeting will be held in DC; for the foreseeable future (they have dates set until 2021), it will be in the San Francisco Hilton near Union Square (walking distance from my home, so very convenient). Back when it started, it was mostly about vacuum tubes, and only gradually did transistors and eventually integrated circuits creep in. Their own description of the meeting is pretty good: IEEE International Electron Devices Meeting (IEDM) is the world’s preeminent forum for reporting technological breakthroughs in the areas of semiconductor and electronic device technology, design, manufacturing, physics, and modeling. IEDM is the flagship conference for nanometer-scale CMOS transistor technology, advanced memory, displays, sensors, MEMS devices, novel quantum and nano-scale devices and phenomenology, optoelectronics, devices for power and energy harvesting, high-speed devices, as well as process technology and device modeling and simulation. The meeting is older than Moore's Law, which celebrated its 50th anniversary this year. In fact, it was at IEDM in 1975, ten years after Moore's Electronics Magazine article, that Gordon Moore showed how developments in technology had allowed his prediction to be realized, but he updated his prediction and said that the pace would slow to "a doubling every two years, rather than every year." On the Saturday, there are tutorials. I didn't arrive until that evening, but I gather from a couple of people I know that the tutorials were excellent. 
The topics were: Electronic Control Systems for Quantum Computation; Advanced CMOS Device Physics for 7nm and Beyond; Thin Film Transistors for Displays and More; Nanoscale III-V Compound Semiconductor MOSFETs for Logic; RF and Analog Device Technologies; and Implantable MEMS and Microsystem for Neural Interface. On the Sunday, there are two short courses that run all day: Emerging CMOS Technology at 5nm and Beyond (which I attended and will post about separately, especially the TSMC portion on lithography that included an update on the status of EUV there), and Memory Technologies for Future Systems (notable for having the system requirements covered by Rob Aitken, DRAM by SK Hynix, and Flash by Samsung). Monday morning, there are keynotes. This year they were: Moore's Law at 50: Are We Planning for Retirement?, by Greg Yeric of ARM Research. His answer was 'no', but I will put a post up next week with more detail. Quantum Computing in Silicon, by Michelle Simmons of UNSW Australia. I have had quantum computing explained to me a couple of times and I still don't truly understand it. But then, I think it was Richard Feynman who said that if you think you understand quantum mechanics then you don't. But some of the technology they have developed to insert single atoms of phosphorus into silicon in order to build single-electron gates is impressive. Silicon for Prevention, Cure, and Care: A Technology Toolbox of Wearables at the Dawn of a New Health System, by Chris Van Hoof of imec. I had actually already seen this presentation when imec flew me to Brussels for their annual technology forum earlier in the year. Then the meeting gets completely overwhelming. From Monday afternoon through Wednesday, there are eight parallel streams, each with about eight presentations per half day, for a total of over 300 presentations. 
If you are a specialist in some narrow area, it is pretty obvious which sessions to attend, but if you are a generalist like me then it is hard to tell from just looking at the titles. In fact, IEDM know it is overwhelming, and they have a press lunch on Monday where they highlight about a dozen papers that they consider to be the most significant. Hidden away in those streams are some invited special focus sessions. This year the five topics were: Beyond von Neumann Computing; Silicon-Based Nano-Devices for Detection of Biomolecules and Cell Functions; Advances in Wide Bandgap Devices; Flexible Hybrid Electronics; and Layered 2D Materials and Devices: From Growth to Application. These general sessions are interesting and comprehensible to someone who is not a researcher in the precise niche of the paper. The other papers vary in their level of accessibility. It seems that every other year the big semiconductor companies turn up and talk about their processes in detail, but in the years in between they are busy working in secret. For example, last year Intel, TSMC, and GF all presented process details including layer pitches, dielectrics, and more. To give you a flavor of the other papers, here are a few titles: NVM Neuromorphic Core with 64k-cell Phase Change Memory Synaptic Array with On-Chip Neuron Circuits for Continuous In-Situ Learning (this is the largest neuromorphic core ever built by IBM, so the closest to a silicon brain yet); Collapse-Free High Power InAlGaN/GaN-HEMT with 3 W/mm at 96GHz (so, a collapse-free power device); 1Kbit FinFET Dielectric (FIND) RRAM in Pure 16nm FinFET CMOS Logic Process (so, FinFET-compatible RRAM); High-Density Optrode-Electrode Neural Probe Using SixNy Photonics for In Vivo Optogenetics (so, a biomedical sensor); and another 300 like these. There is a Monday evening reception (this year with a laser light show, since it is apparently the Year of Light). Tuesday lunch this year featured Pat Tang, VP of Product Integrity at Amazon Labs. 
Tuesday evening, there is a panel session, this year on Emerging Devices: Will They Solve the Bottleneck of CMOS? Wednesday is the entrepreneur lunch. I hope that I've managed to give you a flavor of the meeting and the breadth of the topics that are covered. There are about 1400 attendees, from all over the world. If you want to understand the future direction of semiconductor technology, then this is undoubtedly the best place to do so.
EDAC "Crossing the Chasm" with John Lee
EDAC's Emerging Companies Committee has been organizing evening seminars a couple of times a year in which Jim Hogan chats with someone with successful startup experience about what they did and what lessons they learned. Jim Hogan is today the principal of his own investment company, but he was one of the early employees of Cadence and a vice president in various roles for many years. Last week, the interviewee was John Lee. John is currently the GM & VP of the Apache business unit of ANSYS. But let's go back to the beginning. John was born in the US in Baltimore. His parents had immigrated from Korea to the US in 1959, and in 1975 they decided to return to Korea. John was 9 years old and didn't speak any Korean; he had only known the US, so it was a shock, to say the least. Back then, Korea was a very poor country. He went to the local public school with 100 students per class. Since it took him some time to learn Korean, he focused on math—after all, the numbers are the same. He eventually transferred to a much smaller school where his mother taught, with just 28 in his graduating class. His math teacher told him that there was this place called Carnegie Mellon University in Pittsburgh and he should apply. Since his family had no money (Korean salaries at the time were only about 20% of US ones), he would go anywhere he got a scholarship and, luckily, CMU gave him one. He was there as an undergraduate and then a graduate student with the legendary Ron Rohrer (I say legendary because he graduated from Berkeley with a Ph.D. at the age of 21 and was then thesis advisor to a guy called Aart de Geus at the age of 25). The group had eight students, many of whom are well-known names in EDA, including Charlie Huang, who just recently departed Cadence. John hadn't finished his Ph.D. when Ron said, "Let's go start a company." So they created PSI with four grad students and four products doing complex simulations of ground planes, etc. They were 10 years too early, though. 
They didn't have any way to do circuit extraction, so it was just a technology company with no business plan. Meanwhile, another Ron company was ISS in North Carolina, which was working on DRC/LVS and starting to work on extraction. So he moved to NC and joined ISS (actually, I think ISS formally acquired PSI). So the first lesson is: don't create a technology without a plan for how to monetize it. All of a sudden, ISS was bought by Avant! and so John was working for Gerry Hsu. Soon after that, the Arques offices of Avant! (it might still have been ArcSys) were raided by the FBI, but that is not the story for today. John said that he learned a lot of good things from Gerry. Firstly, R&D is king in an EDA company, and you must have an extreme product focus. Gerry was very business-focused (in fact, he is generally credited with inventing the time-based license that is almost universally used in EDA today). He was also a great student of culture and management. John moved to California and Andrew Yang became his boss (Avant! had acquired Andrew's company, Anagram). But they were losing every benchmark to Simplex (which would end up being acquired by Cadence), and Andrew told John to take two or three of the best engineers. There were all sorts of requirements from customers, but they were told to start very simple and to focus. Ignore the analog market. Focus on full-chip. TSMC only. Say 'no' to everything. So I guess that is a second lesson. By 2001, Synopsys had acquired Avant!. After a time there, he decided to leave to start his own company. The company was Mojave, doing physical verification in a new way. They had a big focus on starting the company IP-clean, making sure that nothing came over from previous jobs: obviously not source code, but even customer lists and other fuzzier stuff. After all, as John said, "if you are a startup and you get sued, you are dead." He'd seen what happened at Avant!, and Nassda was having its problems. 
The company was one of the first to notice that Linux computers were really cheap and that there had to be a way to use a lot of them. DRC seemed like an embarrassingly parallel problem and should be easy to speed up. This was true, but naive. Andrew Yang and Andy Bechtolsheim didn't wait for a product plan; they liked the team and invested. They built the team up to nine people. Charlie Huang told them something that they didn't listen to: "if you guys are not Calibre-compatible you have no chance." Their technology worked really well, but there was a big problem. There were no foundry rule sets, because foundries only wrote rules for Calibre, so they had to create them all themselves. With no rule sets it was hard to acquire customers and, with no customers, the foundries were not going to create rule decks themselves. Magma acquired Mojave and they rolled out the product. Early results were good. But as soon as they posted their incredible speed numbers, the four-minute-mile effect kicked in. Once it was shown to be possible, the competition caught up rapidly. Design rules got more and more complex, and there were more and more of them. This meant that the incumbent DRC tools could fairly easily use lots of cheap Linux PCs simply by running each rule on its own server, rather than having to come up with new algorithms. So their speed advantage didn't last more than 18 months. Magma did not have enough resources for a drawn-out battle, even though it might have made it. So in 2009, he left Magma. They looked at the big EDA companies and wondered how you would build technology like that if you were starting today, with today's tools, hardware, and open-source infrastructure. Chips will have 10 billion transistors, which looks like a big data problem. He wondered: how come you can instantly look at any location in the world on your phone using Google Earth, but EDA needs huge, slow databases and provides nothing close to the consumer experience we expect on our phones? 
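The "embarrassingly parallel" approach the incumbents ended up using (run each design rule independently on its own machine) can be sketched in a few lines of Python. The geometry and the two rules here are hypothetical toys, not real foundry rules:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy layout: axis-aligned rectangles as (x1, y1, x2, y2), left to right.
shapes = [(0, 0, 10, 10), (12, 0, 22, 10), (23, 0, 24, 10)]

def min_width(rects, limit=2):
    """Hypothetical rule: flag rectangles narrower than the limit."""
    return [r for r in rects if r[2] - r[0] < limit]

def min_spacing(rects, limit=2):
    """Hypothetical rule: flag neighboring pairs closer than the limit."""
    return [(a, b) for a, b in zip(rects, rects[1:]) if b[0] - a[2] < limit]

# Each rule runs on its own worker, the way each rule in a big deck
# could run on its own cheap Linux server.
rules = (min_width, min_spacing)
with ThreadPoolExecutor() as pool:
    results = pool.map(lambda rule: rule(shapes), rules)
    violations = {rule.__name__: found for rule, found in zip(rules, results)}
```

The catch described above still applies, of course: without foundry rule decks there is nothing to hand to the workers, no matter how cheap they are.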
John was also wiser this time, and they focused only on problems where they could get paid within six months. They started with EMIR analysis, since that was an area where customers had pain and would pay. They started in Austin, but it was too hard to recruit engineers willing to go unpaid for months, so they restarted in Campbell. They built everything on Amazon AWS. The company was Gear. Again, they had a big focus on setting the company up right, keeping the IP clean and following good business practices. The challenge is always to keep ahead. In EDA there are only a few groups open to evaluating new tools, and so all the startups call on the same groups. There is a sort of 12-18 month rule, which is that it only takes the competition that long to come back with something "good enough." It is easy to create an initial lead, but it is really hard to sustain it. Gear was a big data company with the idea that you could get any information instantly, like on Google. ANSYS acquired Gear. Despite his spending nearly a decade in Pittsburgh, and ANSYS being based there, John had never heard of them. They are a 45-year-old company with 48% margins and fast growth. They have $1B in revenue, but a bigger market cap than Cadence or Synopsys. Some general finance rules: $30M is a good exit today, so you can't take more than about $7M in investment. Lucio Lanza: "Take as little cash as you can and spend it only when necessary." The living dead rule: shut the company down when it becomes obvious that you will not succeed. So John's path went from CMU to PSI to ISS to Avant! to Synopsys to Mojave to Magma to Gear to ANSYS. Wild ride.
Digital Implementation Summit: GLOBALFOUNDRIES Highlights Technologies to Accelerate Silicon Innovation
Without a doubt, there’s plenty of innovation happening in the electronics industry right now. The burning question is, how can engineers continue to accelerate this level of innovation while also achieving power, performance, and area (PPA) targets? That was the key topic addressed by Subramani Kengeri, VP of CMOS Platforms at GLOBALFOUNDRIES, on Thursday, Dec. 10, at Cadence’s Digital Implementation Summit. “We spend billions of dollars in R&D to get the very best technology from a PPA point of view, but if that is not translated into real product, then we have lost a big chunk of value,” Kengeri told attendees at Cadence San Jose headquarters. Indeed, scaling is continuing from a technical standpoint – we have visibility today into 5nm. But are we scaling at any cost, and are we doing so at the expense of energy efficiency? Subramani Kengeri from GLOBALFOUNDRIES addresses an audience at Cadence's Digital Implementation Summit on Dec. 10. 28nm a Sweet Spot for Now As usual, wafer costs continue to increase, particularly after 28nm, said Kengeri. In fact, he noted, increased costs from more complex processes are offsetting the benefits of die shrink, slowing down future scaling. “The ROI on any new product has to be revisited. 28nm will remain a sweet spot for a while,” he noted. How to reap benefits after 28nm? As engineering teams contemplate their next node, there are debates around FinFET and FD-SOI. Each solves a different market need. GLOBALFOUNDRIES has spent over 10 years researching both in parallel, ultimately siding with the industry and prioritizing the FinFET process. While FD-SOI has consistently delivered better energy efficiency, the performance of the first generation has simply lagged behind that of FinFETs. However, what about applications that are energy-conscious and don’t require the performance level of FinFETs? For this area, GLOBALFOUNDRIES has an answer in its 22FDX platform, the industry’s first 22nm FD-SOI technology. 
The 22FDX flow provides FinFET-like performance at ultra-low power consumption (0.4V operation), at costs similar to 28nm planar technologies. Some noteworthy data points on the flow: it provides 70% lower power compared to 28HKMG; it results in a 20% smaller die than 28nm bulk planar; and it offers a lower die cost vs. FinFETs. Flexibly Trade Off Between Performance and Power “This is going to enable innovation in interesting ways that haven’t been possible in the past,” Kengeri said of 22FDX. “Put this in the hands of creative designers, who will have additional knobs they can play with, and you can have wonders. That’s what we’re talking about [regarding] accelerating innovations.” The “knobs” that Kengeri referred to include these capabilities: software-controlled transistor body biasing, which provides a flexible tradeoff between performance and power (you can apply different back biases to different modules and control them through software at the level of granularity needed); integrated RF, so you can include RF functions in your design cost-effectively and with energy efficiency, rather than having to use a separate RF chip; and post-silicon tuning, which allows you to selectively increase performance or lower power. In November, Cadence announced that its digital and signoff tools are enabled for the 22FDX platform reference flow. Cadence also worked with GLOBALFOUNDRIES to develop a Process Design Kit (PDK) for the platform. “22FDX accelerates innovation across a very wide range of applications,” said Kengeri. “Typically, FinFET or previous [processes] are optimized for specific market segments. This technology allows you to extend the value across a very wide range of applications.” “22FDX is the right technology at the right time,” he told the summit audience. “Let’s lead the next wave of innovations together.” Check this page soon for online proceedings from our Digital Implementation Summit. 
You can already view presentations from our Front-End Design Summit, held on Dec. 2, from the same page. Christine Young
What's Good About ADW’s Component Browser for Project Manager? The Secret's in the 16.6 Release!
The 16.6-2015 Allegro Design Workbench (ADW) release contains a significant enhancement that allows traditional Project Manager-based designs to use the same database-enabled Component Browser used in ADW projects. Yes, the Component Browser can read the ADW database when editing projects in a Project Manager (non-ADW) flow. Some designers want to stay in a Project Manager flow while using parts from the ADW library database. Librarians use Allegro Library Workbench to build parts, but not all designers use Allegro Design Workbench. Until the 16.6-2015 release, engineers had to adopt the ADW Flow Manager (in Design Workbench) to access the ADW library; the Component Browser in a Project Manager flow could not see the ADW library. In 16.6-2015, Project Manager flows can be configured to give the Component Browser access to the ADW library (an ADW license will be used). This provides: faster part searches; Quick Search capability; lifecycle and PPL data; the shopping cart and lists; access to the richer ADW dataset on which to search and view; and access to the component datasheet, using RMB on the part in the search results. The cds.lib file can point to both the local design and the ADW database; however, local design cells can only be added in offline mode. Note that the ADW Library and Symbol Revision Manager (LRM/SRM) utility will not function in this mode, and that non-PTF properties (those only included in the ADW database) cannot be extracted into a BOM. Configuration is simple! In the project .cpm file, two entries are needed in a START_COMPBROWSER section: the ADW server URL and port, and a directive that enables or disables online mode. This section can be controlled through a SITE-level project template like any other .cpm file setting. It also works in Allegro FPGA System Planner. Component Browser is now HotFix/ISR independent (this is only for clients used by design engineers – any update to the ADW server still requires an update to the clients used by librarians). 
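For illustration only, the START_COMPBROWSER section of a project .cpm file would carry the server location and the online-mode switch along these lines. The entry names and values below are hypothetical placeholders, not the documented directives; consult the ADW documentation for the exact syntax:

```
START_COMPBROWSER
    ADW_SERVER 'http://adw-server.example.com:8080'   (hypothetical entry name)
    ONLINE_MODE 'TRUE'                                (hypothetical entry name)
END_COMPBROWSER
```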
Prior releases required the Cadence tools on the user’s machine and the ADW server to be at the same software revision (ISR). The ADW and SPB versions of the tools must still be compatible. I look forward to your feedback! Jerry “GenPart” Grzenia
Whiteboard Wednesdays - Understanding the Computational Activity Behind Neural Networks
In this week's Whiteboard Wednesdays video, Chris Rowen discusses the inner workings of neural networks, which are applied to a variety of data types for pattern recognition. Hear Chris explain what is happening computationally in the functioning of a neural network.
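As a rough sketch of the computation Chris describes, each layer of a neural network is just a weighted sum per neuron followed by a nonlinearity. The weights and sizes below are arbitrary toy values, not anything from the video:

```python
import math

def dense_layer(weights, biases, inputs):
    """One fully connected layer: per-neuron weighted sum, then sigmoid."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(1.0 / (1.0 + math.exp(-total)))  # squashes to (0, 1)
    return outputs

# Tiny two-layer network: 2 inputs -> 2 hidden neurons -> 1 output score.
hidden = dense_layer([[0.5, -0.4], [0.3, 0.8]], [0.1, -0.2], [1.0, 2.0])
score = dense_layer([[1.2, -0.7]], [0.0], hidden)[0]
```

Pattern recognition emerges from stacking many such layers and learning the weights from data, which is the part the video explores across different data types.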
Congratulations Chris Rowen, for He's a Jolly Good (IEEE) Fellow
So the lede is that Chris Rowen has been elected an IEEE Fellow. In a sense it is actually a group award, because he couldn't have achieved what he has without the teams that surrounded him, in most cases teams for which he assembled the key players himself. It is a big deal: there is a cap of 0.1% of the membership who can become fellows in any year. Chris started out as a historian. When he went to Harvard, he had to decide between history and physics, and decided to do physics since it seemed like it would have more effect on the future. When he graduated, he wondered what to do, so he decided to follow in his father's footsteps, going down the road from Harvard to MIT. He had also been admitted to Keble College, Oxford (UK) to study the famous PPE course (philosophy, politics, and economics) that so many senior government ministers studied, including David Cameron, the present prime minister, although he didn't go (this blog might have been very different if he had: Ten Downing Street instead of Ten Silica). After one semester, he decided to look for a summer job in engineering. In one of these strange alignments of the planets, his mom sent him a newspaper clipping of a company looking for people, a company that in 1977 nobody had ever heard of. It was called Intel. He worked on 4K-bit DRAM (the state of the art, and don't forget that back then Intel was a memory company). When he graduated, he looked for semiconductor openings and worked for Intel in Oregon on SRAM. Since he'd never had a class in engineering (remember, he studied physics), he went to Stanford to do a master's degree. That became a Ph.D. He fell in with John Hennessy (now the president of Stanford, as it has turned out), who was doing interesting work on RISC architectures. He joined the MIPS project, wrote the first optimizing compiler, and did his dissertation research on synthesis and place & route. He continued to work one day a week at Intel (and, until they noticed, even continued to vest his stock!). 
He met his wife there. In 1984, Hennessy and two colleagues created MIPS (the company) along with Chris, who was technically a minor founder. He worked on both software and hardware (he designed the ALU and the register file) and also did verification of the IEEE floating-point unit. He then moved to the system side, building small workstations and servers. In 1992, Silicon Graphics acquired MIPS. In those days, workstation vendors thought they needed to own their own architecture (after all, HP, IBM, and Sun couldn't be wrong). But, unfortunately, SGI only wanted MIPS for their workstations, so they shut down its use in embedded, which eventually meant they were so far behind ARM that they could never catch up. Unfortunately, SGI ran into its own problems, and so MIPS followed SGI down even in the big-iron category. MIPS was spun out as an independent company again (it would eventually be acquired by Imagination). Chris and his wife decided it was time for a change, and so they moved to Neuchatel in Switzerland, where he was a sort of European CTO for MIPS. When he came back, he thought about a startup but was recruited to Synopsys to run their design re-use business. He concluded that the most interesting IP was processors. In that era, ARM was still private and small. He tried to persuade Synopsys to enter the processor business, but their top 10 customers already had their own microprocessors. So he left Synopsys with a clean sheet of paper, no history, no technology, no income, and a computer in his living room. To the left is a group photo of the founding team in the global headquarters! They had the idea for reconfigurable microprocessors from day one. Harvey Jones (Daisy, Synopsys) and Chris himself, from his MIPS exit, put in the money. They recruited the core team. Of course, this was Tensilica. It took only 9 months from the point that they had 10 people to shipping a product: language, compiler, ISA, optimized implementation. 
The first customer was Silicon Spice (eventually bought by Broadcom for $2B), followed by Zilog and Cisco. He thinks the company was successful for a couple of reasons. One, they solved a hard problem, so they had a differentiated product that was hard to compete with. And two, it is really, really hard to establish a new architecture. Look how much money Motorola and IBM spent trying to establish PowerPC (and they even had Apple as a customer). They were (and are) in a different market from ARM and Intel, since configurable processors could go places other processors couldn't and, luckily, it turned out to be a fast-growing market. Application know-how was also important, especially in audio, where they became the de facto standard due to a mixture of low power and fast time to market when audio standards were all in flux. Eventually, Cadence acquired Tensilica, as you probably know. The IEEE Fellowship reflects being part of two important movements: RISC and configurable processors.
Nibbles—Breakfast Bytes Predictions for 2016
Niels Bohr, the physicist (or should that be the quantum mechanic), famously said, "Prediction is very difficult, especially about the future." The semiconductor industry, despite its incredible complexity, is easier to predict than many other industries, because so much money is invested in making it predictable, and the ecosystem is so complicated that it all has to move together. This time of year is traditional for predicting the year to come, so here are a few of mine. System Design Enablement There will continue to be a move towards system companies taking control of their destiny and designing their own proprietary silicon, and semiconductor companies building entire software stacks. A key driver of this will be software-driven verification, especially using emulators such as the recently introduced Palladium Z1 along with fast processor models. See my post earlier this month (and my post on Palladium Z1). 3D Packaging Finally Goes Mainstream I think that this year will be the one in which 3D packaging technologies finally go mainstream and enter volume production. At the high end, this has partially happened using silicon interposer and TSV technologies. But new consumer approaches are coming, and my expectation is that hundreds of millions of units will ship next year, creating a volume ramp that will take the technology mainstream. Check out my post about this technology. Designs Spread Across Nodes It is clear that, for a number of reasons, many designs are not going to move to the most advanced FinFET processes any time soon. This means that the menu of processes used for design will become much richer. The most advanced design groups in the highest volume industries are already doing 10nm designs, and those will start to ship. But old processes are being re-architected to have variants for lower power, for lower cost, and for non-volatile memory, based on all the experience gained since they were originally introduced. 
And various flavors of FD-SOI will become important due to body bias. See my post from November. Automotive It will be big. The electronic content in cars has risen explosively. The structure of the existing automotive industry (with the exception of Tesla) reflects the hierarchy of the vehicle. But that hierarchy is going to change, and the skills necessary, such as image recognition and video processing, barely exist in the current OEMs and Tier 1s (that's what you and I would call car companies and their major suppliers). ISO 26262 and the requirement that automotive electronics run self-test regularly while the vehicle is in use will drive changes in how test is done, which will probably bleed out into other industries. See my recent post on Automotive Ethernet, for example. Democratization of Formal Verification Formal verification used to be done by PhD-level engineers using the engines directly. Now many use cases have been encapsulated as Apps, introducing formal as a complement to other verification approaches, in particular simulation. See my post on the Jasper User Group meeting. CES Will Be Full of USB Type-C Equipped Products Everything you read says that the USB Type-C connector is the fastest growing connectivity standard ever. I even wrote a post about it. There are even rumors that future iPhones might abandon the current Lightning connector for it, and even remove the headphone jack, as happened to all the legacy connectors on the current MacBook. Test Test has always been the orphan stepchild of EDA. But I think we will see an increased focus on it. One big driver for this is automotive, as I mentioned above, but increasingly it is costing more to test a die than to manufacture it. I expect to see a big focus on getting the cost down and coverage up.
Word from the Source—USB-IF on What USB-IF Is and What’s New in USB (Jeff Ravencraft Interview - Part 1)
In case you don’t know, the USB Implementers Forum (USB-IF, for short) is the organization behind all things USB. Its president and COO, Jeff Ravencraft, sat down in the Cadence video studio and answered our questions about the role of USB-IF in the industry and what’s cooking for USB implementers and users. With so many new possibilities opening up for USB developers, Jeff did his best to keep it short, but we still needed to split the interview into three parts. Today we present the first one. Enjoy the video!