A session on pre-silicon software development platforms at the Electronic Design Process Symposium (EDPS 2014) in Monterey, California on April 17 revealed the incredible versatility of these platforms. Speakers noted that RTL emulation has various use models, virtual platforms can be useful throughout the development flow, and the integration of development engines—such as in a "hybrid" mode that combines virtual platform models with emulation—may be where the real value lies.
The session included the following presentations:
- Shantanu Ganguly, senior vice-president of engineering at Synopsys, on "Validation Strategies with Pre-Silicon Platforms"
- Frank Schirrmeister, product marketing group director at Cadence, on "Combining TLM and RTL Techniques: A Silver Bullet for Pre-Silicon HW/SW Integration"
- Victoria Mitchell, director of SoC software engineering at Altera, on "An Approach to Verification of Many-Core Systems Using the Virtual Platform"
(A fourth presentation by Kumaraswamy Namburu of NetApp focused on data storage and management, and is not reported here.)
Frank Schirrmeister of Cadence speaks about pre-silicon platforms at EDPS 2014
The session followed a keynote speech by Chris Lawless, director of pre-silicon platform acceleration at Intel, who discussed how and why his company uses pre-silicon prototyping platforms and what Intel expects from providers. He called for a more integrated development suite in which different engines can use the same models, test benches, and user interfaces. A recent blog post reviewed his keynote.
Hardware-Assisted Validation Strategies
The Lawless keynote discussed "shift left," the idea that tasks typically done post-silicon—such as hardware/software co-validation—should be brought into the design cycle much earlier. In his presentation, Shantanu Ganguly looked at ways to achieve "shift left." Requirements include faster emulation, faster simulation, pre-silicon debug, earlier hardware/software bring-up, and static and formal verification.
Ganguly considered the pros and cons of emulators based on custom FPGAs, custom processors, and commercial FPGAs. Commercial FPGAs have slower compile times than the other two options, and some probe changes require recompilation, which is not the case with custom FPGAs or custom processors. But the advantage of a commercial FPGA, he said, is that "you get the R&D budget of some other company for free."
Ganguly also summarized and compared five common hardware-assisted verification modes:
- Simulation (never fast enough)
- In-circuit emulation (uses rate adapters and boards, very interesting but "also very hard to do")
- Embedded testbench (where testbench runs in the emulator hardware)
- Co-simulation at signal level ("the poor man's solution to getting some level of acceleration")
- Transaction-based verification (runs virtual devices on the host instead of rate adapters and boards)
Finally, Ganguly talked about the need for "simulator-like" debug in conjunction with emulation.
Looking for the Silver Bullet
Frank Schirrmeister talked about a range of hardware/software co-development platforms, including software development kits (SDKs), virtual platforms, simulators, emulators/accelerators, and FPGA-based prototypes. "My key takeaway is that the silver bullet is really about the combinations of the engines," he said. "No one engine fits all. We need to work towards a consistent environment across the different engines, and all the engines underneath would ideally be invisible."
A chip by itself is meaningless without software, Schirrmeister said. There are many different types of software—applications, middleware, OS, drivers, firmware—and companies that design chips are increasingly expected to supply software as well as hardware. "You need to find a way to execute the software as early as possible in the development flow," Schirrmeister said.
The EDA industry has long been interested in expanding into embedded software, and with semiconductor providers expected to provide software—and some 2 million apps under development—it sounds like a good target. But embedded software is in fact "somewhat dangerous," Schirrmeister said, "because it has very different dynamics [from EDA]. There is lots of open source, don't dare ask for money, and you have to get your money sideways."
The semiconductor companies that now have to write software probably aren't getting paid for it, he said. Even so, they are moving up the "value stack" shown in the diagram below, and they expect their EDA suppliers to follow along and support that shift.
Who develops what? (Frank Schirrmeister presentation, EDPS 2014)
Development platforms, Schirrmeister said, range from SDKs that are completely abstracted from the hardware to prototyping boards with the actual chip. He went into more detail with RTL simulation, virtual platforms, acceleration/emulation, and FPGA-based prototyping, showing where these fit into the development cycle and where their strengths lie.
Schirrmeister discussed how Broadcom accelerated system bring-up by using an "embedded testbench" with the Cadence Palladium emulator, mapping peripherals directly into the emulator box instead of using external boards and cables. (A blog post about that experience is located here, and presentation slides from the September 2013 System-to-Silicon Verification Summit are available here.) Schirrmeister also discussed a "hybrid" use model in which fast processor models run on a host workstation alongside RTL in the emulator, combining the advantages of virtual platforms and emulators.
Getting Full Value for Virtual Platforms
Virtual platforms can greatly facilitate pre-silicon software development, but don't throw them away after this early software development phase is complete! Victoria Mitchell showed how Altera is extending the life cycle of virtual platforms used for software development with many-core systems. This practice goes beyond "hardware/software co-design" to enable what she calls "complete system co-design."
Aside from leveraging what may be significant investment, a virtual platform can be reused as part of a testbench for software and system validation. "There is a unique capability of a virtual platform that doesn't exist if you're running real hardware," Mitchell said. "That's the ability to add instrumentation and to intercept simulation with hosted functions." This is a non-intrusive way to verify the full system without changing the design or the software.
To attain these capabilities, Altera uses instruction-set simulation provided by the M*SDK Advanced Multicore Software Development Kit from Imperas. Altera engineers make use of binary intercepts, which are delivered through dynamically loadable modules or libraries. Callbacks are registered for events such as address execution or model enumeration, and these intercept routines can inspect memory, drop into simulation debug, change processor state, alter translation, or add and remove other callbacks.
Mitchell compared two types of virtual platform setups. The "direct verification" approach takes Altera Nios code and changes it so engineers can dump instructions out to a console using printf statements. This, she said, easily becomes a "configuration nightmare." The other setup uses intercept libraries, and replaces all the extra software that engineers previously had to write with two function calls.
While Altera's use of intercept libraries is new, feedback is already positive, Mitchell said. "Since software engineers don't have to write test code, we actually start writing what will be production-level code from day one," she said. "That means that by the time we tape out we will have not only booted the OS, but also tested and validated the production level code and the embedded firmware."
And that's a lot of added value for what Mitchell called a "negligible extra effort."
Presentations are available at the EDPS web site.
Richard Goering
Related Blog Posts
EDPS 2014 Keynote: What Intel Needs from Pre-Silicon Prototyping
Designer View: Embedded Palladium Testbench Speeds System Bring-Up
Palladium XP II—Two New Use Models for Hardware/Software Verification