Between about 2000 and 2010, there were a number of startup companies that attempted to commercialize the idea of virtual platforms: Axys Design Automation, Virtio, VaST Systems Technology, Virtutech, CoWare, Imperas, and Carbon Design Systems. In some strange alignment of the planets, Cadence seems to have rolled up the guilty, those of us who ran marketing in those companies and tried to take the technology mainstream. Frank Schirrmeister was at Axys and Imperas, Larry Melling was at Virtio, I was at VaST and Virtutech, Pete Hardee was at CoWare. Now we are all at Cadence. The companies themselves have mostly gone. Axys was acquired by ARM. Virtio, VaST and CoWare were acquired by Synopsys. Virtutech was acquired by the Wind River group of Intel. Just last week ARM finally announced that they had acquired Carbon Design Systems (who, in the meantime, had acquired the Axys technology back from ARM, too). I heard about the acquisition some time ago, but it was only officially announced just before the recent earnings call. I think that leaves Imperas as the one company out there that is still independent.

What is a virtual platform? It is a model of an electronic system, not necessarily a chip. It could be something as large as a cellular basestation, a router, or a cell phone. The precise level of the model varied a little between the different companies, but the key technology was the capability to run software on the model and thus to be able to use it for software development.

When I say "run software," I mean take the actual binary image of the software and run it on a PC. A PC is, of course, an Intel x86 architecture, but the binary image was typically for an ARM processor, a PowerPC, a MIPS, or some esoteric microprocessor beloved by the automotive industry that you have probably never heard of. Simply interpreting the code would be too slow, so usually some JIT (just-in-time) compilation approach was used, similar to what happens to Java bytecodes in good implementations. When a block of code is first encountered, the ARM (or whatever) binary instructions are compiled into a block of x86 instructions and executed. Each time that block is re-encountered, the cached x86 instructions can be run again; the compilation step only needs to be done once. Since so much operating system boot code runs only once, straight interpretation might be used the first couple of times a block is seen, with JIT compilation deferred until it becomes clearer that the code is going to be reused and the cost of compilation is justified (a toy sketch of this scheme follows below).

The performance could be very impressive. In fact, since the target processors were often in embedded devices with modest clock frequencies, the PC would often run the code faster than it would run on the actual system. The modeling was accurate, too: operating systems like Linux, Microsoft's Windows CE, Android, Wind River's VxWorks, or Cisco's IOS router software would just boot and run normally. It was so fast and so accurate it was close to magic.

I remember watching a demo we ran at Cisco where we booted a virtual router. At the command prompt, a skeptical Cisco engineer asked us to type some weird diagnostic command that none of us had ever heard of before. The system instantly replied with all the register contents that the engineer would have seen on a real router. He found it hard to believe. Cisco's IOS, at the time, was 25 million lines of code. None of us had even seen the source code; the virtual platform had just the binary which, as you might imagine, was large. That engineer suddenly realized that he could develop code for the next-generation router, and test and debug it, before the router design was even complete, let alone in manufacture.
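To make that JIT scheme concrete, here is a minimal sketch of threshold-based binary translation. Everything in it is invented for illustration: a three-instruction guest ISA stands in for ARM, and a captured C++ lambda stands in for a block of generated x86 code. It is not how any of the products mentioned here actually did it, just the shape of the idea.

```cpp
// Toy threshold-based binary translation (hypothetical guest ISA).
// A real JIT emits native x86; here a C++ lambda stands in for the
// generated code block.
#include <cstdio>
#include <functional>
#include <unordered_map>
#include <vector>

enum class Op { ADD, JNZ, HALT };      // three-instruction guest ISA
struct Insn { Op op; int a; int b; };  // operand meanings vary by op

struct Cpu { int pc = 0; int reg[4] = {0}; bool halted = false; };

// Slow path: decode and execute one guest instruction.
void interpret(Cpu& c, const std::vector<Insn>& prog) {
    const Insn& i = prog[c.pc];
    switch (i.op) {
        case Op::ADD:  c.reg[i.a] += i.b; c.pc++; break;
        case Op::JNZ:  c.pc = (c.reg[i.a] != 0) ? i.b : c.pc + 1; break;
        case Op::HALT: c.halted = true; break;
    }
}

// "Compile" the basic block starting at pc: decode once, capture the decoded
// work in a callable. A real translator would emit host machine code instead.
std::function<void(Cpu&)> compile_block(const std::vector<Insn>& prog, int pc) {
    std::vector<Insn> block;
    for (int p = pc; ; ++p) {
        block.push_back(prog[p]);
        if (prog[p].op != Op::ADD) break;  // block ends at a branch or halt
    }
    return [block](Cpu& c) {
        for (const Insn& i : block) {      // same semantics as interpret()
            switch (i.op) {
                case Op::ADD:  c.reg[i.a] += i.b; c.pc++; break;
                case Op::JNZ:  c.pc = (c.reg[i.a] != 0) ? i.b : c.pc + 1; break;
                case Op::HALT: c.halted = true; break;
            }
        }
    };
}

int main() {
    // Guest program: reg1 = 3; loop decrementing reg1 to zero; halt.
    std::vector<Insn> prog = {
        {Op::ADD, 1, 3},    // 0: reg1 += 3   (run once, like boot code)
        {Op::ADD, 1, -1},   // 1: reg1 -= 1   (loop body, re-executed)
        {Op::JNZ, 1, 1},    // 2: if reg1 != 0 goto 1
        {Op::HALT, 0, 0},   // 3: done
    };

    std::unordered_map<int, std::function<void(Cpu&)>> cache;  // pc -> block
    std::unordered_map<int, int> hits;                         // pc -> count
    const int kThreshold = 2;  // interpret first; compile once reuse is likely

    Cpu cpu;
    while (!cpu.halted) {
        int pc = cpu.pc;
        if (auto it = cache.find(pc); it != cache.end()) {
            it->second(cpu);                      // hot path: translated block
        } else if (++hits[pc] >= kThreshold) {
            cache[pc] = compile_block(prog, pc);  // hot enough: translate once
            cache[pc](cpu);
        } else {
            interpret(cpu, prog);                 // cold: plain interpretation
        }
    }
    std::printf("reg1 = %d\n", cpu.reg[1]);       // prints: reg1 = 0
}
```

The threshold is exactly the boot-code trade-off described above: cold blocks are interpreted cheaply, and only blocks that prove hot enough to repay the cost of translation get compiled and cached.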
So why weren't virtual platforms more successful back then? In a word, modeling. The CPUs, where the clever technology was required, turned out to be the simple part of the system to model, since there were only a handful of important instruction set architectures in use; it was not hard to cover them all. But the other peripherals needed to be modeled, too. On one side, those models consisted of the device registers on the peripheral bus, read and written by the processor. On the other side was either a behavioral model of the actual circuit (often a block of RTL or a standard component) or a debug interface used for running tests. For example, a keypad would typically be modeled with a file containing keypresses (although a graphic keypad you could click on with a mouse was always great for demos). A toy register-level model in this style is sketched at the end of this post.

The challenge was that those models needed to be created. The RTL model itself, when it existed, was orders of magnitude too slow. So all the virtual platform companies had ways to create models efficiently: Virtio had a graphical approach called Magic-C, Virtutech had a device modeling language called DML, and so on. Since one of the big attractions of the virtual platform approach was being able to start software development before the hardware design was complete, the time taken to do the modeling reduced that attraction and, of course, cost money. Also, since the hardware design wasn't complete, it was unstable, meaning that the behavioral models of RTL blocks often needed regular updating to stay in sync. Creating the models was a barrier too far.

Virtual platforms are finally starting to get much stronger traction in the marketplace. The reason turns out to be emulation. One way of getting a very fast model from RTL is to write a special behavioral-level model, but another is to use emulation to run the RTL really fast. This has turned out to be the "killer app" for virtual platforms: run the processor and its software load using the JIT modeling technology, and run the RTL models on an emulator. This is attractive both for software debug and for debugging the RTL itself under real-world conditions, with the real software that will eventually run on the system. There is also the flexibility to run some peripherals as virtual platform models and some on the emulator (also sketched below). And emulation makes the problem of keeping the model synchronized with the RTL go away: just use the real RTL.

So back when we were all VPs of marketing in those virtual platform startups, we were almost doomed to failure, since we didn't have, and there was no way we were ever going to have, emulation. Doing the modeling by hand worked fine for people who were really committed to the technology, but we never managed to "cross the chasm" to the mainstream. Emulation is the bridge over the chasm.
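As promised above, here is a toy register-level device model, roughly the kind of thing Magic-C or DML would let you describe far more conveniently. The UART-like register layout is entirely hypothetical; the point is that the software-visible side is just registers at offsets, with only enough behavior behind them to keep a driver happy, rather than an accurate circuit.

```cpp
// Toy memory-mapped device model (hypothetical UART-like peripheral; the
// register layout is invented, not any real product's modeling language).
#include <cstdint>
#include <cstdio>
#include <queue>

class UartModel {
public:
    // Register offsets as a device driver would see them (invented layout).
    static constexpr uint32_t DATA   = 0x0;  // write: tx byte; read: rx byte
    static constexpr uint32_t STATUS = 0x4;  // bit0: rx ready, bit1: tx ready

    uint32_t read(uint32_t offset) {
        switch (offset) {
            case DATA: {
                if (rx_.empty()) return 0;
                uint8_t b = rx_.front(); rx_.pop();
                return b;
            }
            case STATUS:
                return (rx_.empty() ? 0u : 1u) | 2u;  // tx always ready here
            default:
                return 0;  // unmapped offsets read as zero in this sketch
        }
    }

    void write(uint32_t offset, uint32_t value) {
        if (offset == DATA)
            std::fputc(static_cast<char>(value), stdout);  // "transmit" to host
    }

    void inject_rx(uint8_t b) { rx_.push(b); }  // test hook for fake input

private:
    std::queue<uint8_t> rx_;  // bytes waiting for the software to read
};

int main() {
    UartModel uart;
    // What a guest driver's polled transmit loop boils down to:
    for (char c : {'o', 'k', '\n'}) uart.write(UartModel::DATA, c);
    uart.inject_rx('y');
    if (uart.read(UartModel::STATUS) & 1u)
        std::printf("received: %c\n", static_cast<char>(uart.read(UartModel::DATA)));
}
```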
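And here is a sketch of the hybrid arrangement, under the assumption (mine, for illustration) that the bus model is just an address decoder routing each transaction to a target. Whether a given peripheral is a fast functional model or real RTL behind an emulator transactor then becomes a routing decision; the EmulatorBridge below is only a logging stub standing where a real transactor would marshal the transaction across to the emulated design.

```cpp
// Toy sketch of a hybrid virtual platform: the JIT-based CPU model issues bus
// transactions, and the decoder routes each one either to a fast functional
// model or to a bridge toward RTL on an emulator. All names are invented.
#include <cstdint>
#include <cstdio>
#include <map>
#include <memory>

struct BusTarget {
    virtual uint32_t read(uint32_t offset) = 0;
    virtual void write(uint32_t offset, uint32_t value) = 0;
    virtual ~BusTarget() = default;
};

// Purely functional model: runs at virtual-platform speed.
struct TimerModel : BusTarget {
    uint32_t ticks = 0;
    uint32_t read(uint32_t) override { return ++ticks; }
    void write(uint32_t, uint32_t) override {}
};

// Stand-in for an emulator transactor; a real one would forward the
// transaction to the emulated RTL and wait for the response.
struct EmulatorBridge : BusTarget {
    uint32_t read(uint32_t off) override {
        std::printf("[emulator] read  0x%x\n", off); return 0;
    }
    void write(uint32_t off, uint32_t v) override {
        std::printf("[emulator] write 0x%x = 0x%x\n", off, v);
    }
};

int main() {
    // The virtual/emulated split is just this routing table.
    std::map<uint32_t, std::unique_ptr<BusTarget>> bus;
    bus[0x1000] = std::make_unique<TimerModel>();      // modeled virtually
    bus[0x2000] = std::make_unique<EmulatorBridge>();  // real RTL, emulated

    bus[0x1000]->read(0x0);           // fast functional access
    bus[0x2000]->write(0x8, 0xabcd);  // routed to the emulator side
}
```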