System Design Enablement (SDE) is about designing optimal systems, focusing effort on the areas that make a difference to the end customer. If you are reading this, then you already know that Cadence sells EDA software for digital design, for analog design, and for the design of PCBs and packages, and has a portfolio of IP. Any SoC today has a software load, typically a large one. It is completely normal for this to range from a million lines of code (LOC) up to a hundred million. That doesn't even count the operating system (for example, Linux is 15M LOC). The S in SDE stands for system, but every SoC contains a large software component to go with the chip(s) and board(s). This leads to the obvious question: why doesn't Cadence supply tools for software developers? Don't they deserve the best possible tools, too?

Open Source Software Development Tools

Actually, software developers mostly do have the best possible tools. The best tools are all open source. Also, compared to IC design, software development is much more fragmented. We all use SystemVerilog (with some other stuff left around to support legacy blocks). Programmers use C, C++, Java, Python, JavaScript, Erlang, and more. To support that variety, there is a wide range of compilers and other tools, such as network analyzers and memory-leak detectors. There is also a whole plethora of libraries for things like network stacks, flash memory file systems, encryption, deep learning, and graphics.

An IC designer is proud to be capitalized with $100K of design tools per engineer. A software engineer takes pride in using only open source, at a cost of $0. Open-source software is not always free; when it gets complex enough, companies will pay something to make a problem go away, such as Linux builds. But the fact that the source is available caps the amount anyone will pay; beyond that, it becomes cheaper to put a team on the project. Casual users of Linux buy it from Red Hat. Server companies like IBM and HP have teams that work on Linux. In fact, that is one of the main ways that Linux development gets done. When someone says Linux is "staffed by volunteers," that doesn't mean undergraduates doing it in their spare time. It means companies that need Linux but make their money selling hardware or applications, not Linux itself.

To some extent, open source has become a religion, and some developers will not use any software that is not open source. This remains true even when the developer is never going to look at the source. If Linux is 15M LOC, then only someone with extensive experience is going to know which file to look in, let alone have the competence to understand the code.

This has also ensured that commercial software is available in only very limited domains. Why would anyone develop it if there is no money to be made? Most software does not have a lot of secrets that only a few people know. Pure Software built a fast-growing business on tools for software developers (interesting aside: it was founded by Reed Hastings, who took his money after it was acquired and founded Netflix), but these days everyone seems to use Valgrind and other open-source tools.
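As a concrete example of this class of tool, here is a minimal sketch: a deliberately leaky C program and the standard Valgrind invocation that catches the leak. The file name is arbitrary; --leak-check=full is a real Valgrind option.

```c
/* leak.c -- a deliberately leaky program, to demonstrate
 * open-source memory-leak detection with Valgrind. */
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(64);      /* allocated ...              */
    strcpy(buf, "never freed");  /* ... used ...               */
    return 0;                    /* ... and leaked: no free()  */
}

/* Build and check:
 *   gcc -g leak.c -o leak
 *   valgrind --leak-check=full ./leak
 * Valgrind reports the 64 bytes as "definitely lost" and points
 * at the malloc() call above -- the same job Pure Software's
 * commercial Purify tool once did. */
```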
In the first internet boom in the late 1990s, it was hard to create a company that made money, since another VC would immediately fund a company to do the same thing and give the product away for free. In that era, eyeballs counted for a lot and dollars for a little. Of course, it turned out that many of those companies had no way to make the dollars once that became important. Software development is a bit like that. If a company produces a good tool for software developers, then other developers will clone it and create an open-source version.

The exceptions to this general rule are interesting:

- Games. I think it was Eric Raymond, author of The Cathedral and the Bazaar, who pointed out that there are no successful open-source games. By the time a game is successful, it is too late to clone it; the gaming world will have moved on by the time the clone is ready. However, there are lots of open-source libraries for games (for example, the physics engine in Angry Birds famously cost Rovio nothing).
- Software that requires certification. There is a community of people who develop open-source software for wireless routers, but the FCC has started hinting about clamping down on this, since there is too much potential for interference. I think we can all agree we don't want the neighborhood software engineer, no matter how talented, experimenting with the Autopilot software on his (yeah, it would be a guy) Tesla.
- Software that requires deep knowledge that is not widely available, such as encryption and security. There really are some secrets that very few software engineers understand.

Note that the cloning dynamic really only applies to software that software developers use themselves, where they are their own customers.

There are a couple of reasons that there is little open-source EDA software. One is the secrecy aspect: there really are very few people in the world who know how to create a state-of-the-art static timing verifier or router. The other, which is a problem even for a company like Cadence, is that the developers are not their own customers. The software development teams have a working knowledge of chip design but are not expert chip designers. The chip design teams may be able to write C just fine, they learnt it in school, but they are not going to create their own tools. The knowledge required, and the opportunity cost of doing that instead of chip design, is too high. As a result, specifying the requirements is a really important part of the whole development process, as is having "teaching customers" who are prepared to work with the EDA companies during development. I have heard open-source advocates state in all seriousness that a project is already off the rails if it has a written specification, since the spec will always be out of date. That may work fine for software for software developers, but it doesn't even work for software for dental receptionists, let alone signal integrity experts.

So software developers have access to a wide range of tools and libraries and can assemble a development environment that is state-of-the-art.

Testing SoC Software

However, for software that has to run on a chip, there is a missing piece: how does the software get tested? If the software is far removed from the hardware, then it is easy to test, since something can be mocked up or an older version of the hardware can be used. After all, it would be terrible programming practice to make application-level software depend unnecessarily on details of the hardware implementation; that almost guarantees it will break the following year on the subsequent chip. The low-level software, however, needs to work closely with the hardware.
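One common way to "mock something up" is to hide the hardware behind a thin access layer, so that only a small amount of code ever touches real registers. A minimal sketch of the idea; the function names, the TARGET_HW switch, and the UART register address here are all hypothetical, for illustration only:

```c
/* A thin hardware access layer: application code calls these
 * functions instead of touching registers directly, so the same
 * code runs against real hardware or a host-side mock. */
#include <stdint.h>
#include <stdio.h>

#ifdef TARGET_HW
/* Real implementation: write to a memory-mapped UART transmit
 * register (address is hypothetical). */
#define UART_TX_REG ((volatile uint32_t *)0x4000A000u)

void hal_putc(char c)
{
    *UART_TX_REG = (uint32_t)c;
}
#else
/* Mock implementation for host-side testing: no hardware needed,
 * output goes to stdout where a test harness can capture it. */
void hal_putc(char c)
{
    putchar(c);
}
#endif

/* Application-level code depends only on the access layer, not on
 * the hardware implementation, so it survives next year's chip. */
void app_banner(void)
{
    const char *msg = "hello, world\n";
    while (*msg)
        hal_putc(*msg++);
}

int main(void)
{
    app_banner();
    return 0;
}
```

The mock covers the application-level code; it is the low-level layer underneath (the real hal_putc and its peers) that still has to be tested against the actual hardware, or something that behaves like it.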
In the most general sense of the word "model," there needs to be a model of the hardware that the software people can use. I spent a few years working for VaST and Virtutech selling virtual platform software to solve this problem. The model was a software model, but that led to two problems. First, it took time and money to develop the model, since part of the value was having a model before hardware was available. The shorter the gap between the model being available and the hardware being available, the less valuable it was. The second problem was that the hardware development was not finished while software development was underway, so the model was always somewhat out of date.

It's a pity that there is no automatic way of taking an RTL model and turning it into a really fast model for software development. Except there is: emulation. It turns out that emulation, like the Palladium Z1 platform, is the missing link between chip design and developing the software. It takes input at RTL (perhaps with the processors modeled differently, using virtual platform approaches) and can run fast. Not as fast as the hardware, and not as fast as the software engineers would like (who ever said their simulator was fast enough, anyway?), but fast enough for them to get their jobs done. Later in the development cycle, when the hardware design is largely complete and only occasional small changes are being made, even more performance can be had by switching to an FPGA-based prototyping system like the Protium platform (or even designing a custom FPGA board). Since both the Palladium and Protium platforms take in the real RTL of the design, it is straightforward, if not painless, to keep everything synchronized.

Emulators are big hardware boxes that are not cheap. But they provide a lot of cycles very cheaply, provided you want a lot. And you do want a lot, since even booting an operating system like Linux requires billions of cycles. Remember, if a system is running at, say, 1.5GHz, then running it for 10 seconds takes 15B cycles. How long does it take your smartphone to boot? That's a lot of cycles.
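To put some rough numbers on that, the sketch below works the arithmetic: 15B cycles is just 1.5GHz times 10 seconds, and dividing by the speed of each execution engine gives the wall-clock time. The simulation and emulation speeds are assumed orders of magnitude for illustration, not benchmarks of any particular product.

```c
/* Back-of-the-envelope: how long does it take to run 10 seconds
 * of a 1.5GHz system at various execution speeds? The RTL
 * simulation and emulation speeds are assumed, not measured. */
#include <stdio.h>

int main(void)
{
    const double cycles = 1.5e9 * 10.0;  /* 1.5GHz x 10s = 15B cycles */

    const struct { const char *engine; double hz; } speed[] = {
        { "real silicon (1.5GHz)",            1.5e9 },
        { "emulation (~1MHz, assumed)",       1.0e6 },
        { "RTL simulation (~100Hz, assumed)", 1.0e2 },
    };

    for (size_t i = 0; i < sizeof speed / sizeof speed[0]; i++)
        printf("%-35s %14.0f seconds (%10.1f days)\n",
               speed[i].engine,
               cycles / speed[i].hz,
               cycles / speed[i].hz / 86400.0);
    return 0;
}
/* Under these assumptions, the same 15B cycles take 10 seconds on
 * silicon, a few hours on an emulator, and several years in RTL
 * simulation -- which is why emulation is fast enough to get the
 * job done and simulation is not. */
```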
The State-of-the-Art SoC Software Development Environment

So for software developers, Cadence's SDE strategy really does deliver a state-of-the-art solution for software development teams: open-source software (development environments, compilers, debuggers, RTOSs, etc.) to develop the code, and the Palladium and Protium platforms to test and characterize it. Plus, obviously, all the tools you need to design the chips and the boards that make up the hardware aspect of the system.

Addendum

After I wrote this post, I listened to the latest a16z podcast, which happens to be on Monetizing Open Source (or All Enterprise Software), looking at ways of making a business from open source other than commoditizing an existing business at much lower cost (like Linux, MySQL, Android, etc.). In particular, successful enterprise software needs to catch a disruptive wave (like Oracle did with relational databases). Also see Why There Will Never Be Another Red Hat, which was written by a partner at a16z.