The opening morning of DARPA's Electronics Resurgence Initiative summit in San Francisco consisted of several workshops, called "What's Next Workshops". I picked the hardware security workshop since the DoD cares more about security than pretty much anyone, and so it seemed like there might be a lot the commercial world could learn from the military.

The discussion was kicked off by three DARPA program managers. As a reminder, these are people working for DARPA, but seconded from industry or academia for periods of 3-5 years. They are very technical; their challenge is getting things done in the public sector, not understanding the technology. Just to give an example, here is part of the bio of the first speaker:

Dr. Plaks was the top academic graduate from the USAF Academy in 1989 with bachelor's degrees in physics and mathematics. He was a Hertz Fellow at the Massachusetts Institute of Technology and received a master's in physics in 1991. After a break for military service, he finished his doctorate in physics as a Hertz Fellow at the University of Nevada, Las Vegas in 2003.

Ken Plaks

Ken started off by pointing out something that came up repeatedly during the 3 days of the summit: the DoD used to be 40% of semiconductor purchases, and now it is 1/250th, with a corresponding level of influence. Gordon Moore's original paper (where Moore's Law was conjectured from just 4 data points) also talked about what would be required beyond scaling (which obviously was expected to arrive a lot sooner than it has turned out to). Those things were architecture, design, and materials. Well, Moore's Law scaling is not entirely dead, but it's certainly running a fever, so we are at that stage.

Beyond all the threats that the commercial sector is concerned about, the military is also concerned about Trojan insertion in the hardware, and so has to consider the whole life cycle including manufacturing, distribution, and use.
A big concern of commercial entities is overproduction (grey market chips), but the military is more concerned about ITAR, the bad guys getting the chips, or the chips being compromised. Ken's big question: how can we get trust through technology? How can the DoD leverage and enhance commercial practice to restore state-of-the-art electronics to its warfighters? In particular, what novel technology can we use to make threat insertion and activation more difficult and more easily detected? That falls under design-time countermeasures, runtime countermeasures, in-foundry countermeasures, and encrypted logic.

Linton Salmon

For more background on Linton, see my post about his keynote at the RISC-V workshop last year on Open-Source IP in Government Electronics. He is also the program manager for SSITH, System Security Integration Through Hardware and Firmware. Linton pointed out that phishing actually accounts for over 90% of breaches, but some do happen through hardware. He pointed out that we can't solve the problem completely with hardware, but hardware needs to be part of the solution. As he put it, "find a way that the hardware is not gullible." The goal of SSITH is not just to meet military needs, since the military uses "a ton of commercial parts." Being the military, they have an acronym for that, COTS, which stands for Commercial Off-The-Shelf.

Linton characterized today's approach as "patch and pray." One aspect of SSITH is to limit the hardware to allowable states. In chip design we need to think beyond PPA to PPAS (where the S stands for security). But one challenge is that security today has no metrics, so it is hard to quantify how much PPA you would give up for how much S, in the same way as we trade off, say, lower performance for power saving.

Walter Weiss

Airplanes and aircraft carriers may be around for 30-40 years, so an adversary may have a decade to inspect the system, perhaps discovering zero-days.
Then they may compromise it, and we might not even find out about it. Walter pointed to Netflix's "Chaos Monkey" as something that the DoD doesn't do, since it considers its systems too fragile (or too important) to turn off, and there are just too many single points of failure.

In case you have never heard of Chaos Monkey, here is a quick explanation. Netflix runs entirely on Amazon AWS. Obviously, there are lots of interacting systems, from the video pumps that push out movies to people watching, to recommendation engines, billing engines, search, administering all the movie files, and so on. Chaos Monkey intentionally disables systems to make sure that the overall system is resilient to failure. This is not done on a development platform: Chaos Monkey kills off real processes running live in production. It was developed in 2011, and now there are other tools that make up the wonderfully-named Simian Army.

Patching systems, the standard response to software security holes, is a challenge: installing hypervisor patches across weapons systems is really hard, and the DoD is really bad at it. The research objective is to develop tools to manage the risk of untrusted chips with untrusted software, to enable computation on sensitive data and missions.

Blake Roberts of Lockheed Martin

Blake emphasized that security is now the #1 concern for many people, but there is still pressure to reduce the amount spent on it. The only solution is: design for security up-front, rather than trying to add it after the design is done.

Gary Binder of Intel

Intel has several platform-based security technologies. One challenge with cloud datacenters is that often people don't even know which country their workload is running in. Intel has SGX enclave technology (SGX stands for Software Guard Extensions), which protects against software attacks even if the attacker has full control of the platform.
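Going back to Chaos Monkey for a moment: the core idea is simple enough to sketch in a few lines of Python. This is only a toy illustration of the resilience-testing idea, not Netflix's actual tool (which terminates real AWS instances); the fleet, serve_request, and chaos_monkey names here are mine, invented for the sketch.

```python
import random

# A toy "fleet" of redundant service instances. In the real Chaos Monkey,
# these would be live production VMs, not dictionary entries.
fleet = {f"instance-{i}": "healthy" for i in range(5)}

def serve_request(fleet):
    """The service keeps working as long as at least one instance is healthy."""
    return any(state == "healthy" for state in fleet.values())

def chaos_monkey(fleet, rng):
    """Randomly kill one healthy instance, mimicking Chaos Monkey's behavior."""
    healthy = [name for name, state in fleet.items() if state == "healthy"]
    if not healthy:
        return None
    victim = rng.choice(healthy)
    fleet[victim] = "terminated"
    return victim

rng = random.Random(42)  # fixed seed so the run is reproducible
# Kill instances one at a time and verify the service survives each failure.
for _ in range(4):
    killed = chaos_monkey(fleet, rng)
    assert serve_request(fleet), f"service died after losing {killed}"
```

The point of running this continuously in production, rather than in a lab, is that it forces every team to build in redundancy: any single point of failure gets found by the monkey before an outage finds it.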
As it happens, between the ERI Summit where Gary was presenting and now, the Foreshadow vulnerability has been revealed publicly, which compromises SGX. Like Spectre and Meltdown, it uses flaws in speculative execution to extract data, such as keys, from SGX enclaves. As Yuval Yarom, one of the team that discovered Foreshadow, said:

We thought speculative execution could get some information from SGX, but we weren’t sure how much. The amount of information we actually got out—that took us by surprise.

Jason Moore of Xilinx

FPGAs are heavily used by the military, since its volumes are low. If they can live with the higher power and lower performance, FPGAs are ideal for low-volume designs. But, as Jason pointed out: "Our parts end up in your weapons systems for 20 years and we can't change the silicon."

Jason said that Xilinx assumes the adversary has reverse engineered its silicon, so, as in encryption, they assume everything is known except the key (this is known as Kerckhoffs's principle and dates back to...yes, really...1883). Xilinx is focused on ion-beam and other physical attacks like that. Commercial chips are all vulnerable, since it is not economical to embed optical sensors in the substrates. But these esoteric approaches are extreme compared to software problems. As he put it: "If we are going to allow malicious code to run in the part, there are a lot more things to worry about than Spectre and Meltdown."

Chris Casinghino of Draper

Chris is focused on the hardware-software boundary, instruction set architectures, microarchitectures, and so on. He said that there are 3 big problems for hardware security:

Side-channel attacks: New side-channel attacks like Spectre and Meltdown will continue to be discovered (Exhibit A: Foreshadow, which was announced in the weeks after the ERI Summit, although discovered earlier). We need new tools for thinking about ISAs, since there are effects not captured by the ISA specification.
Hardware cyber asymmetry: There are millions of lines of code in the specification of a chip; something that compromises it may be just a few hundred lines. You can't find it by visual inspection: the needles only get smaller and the haystacks get bigger.

Hardware needs to evolve for new threats: Hardware has long lifetimes, especially in the DoD, and security patching via microcode is not always viable. But vulnerabilities get discovered during those long lifetimes.

These 3 problems look intractable, with no clear solutions. But Chris says that there is hope:

Open platforms, in particular RISC-V. Unless you are one of the big CPU vendors, Chris pointed out, you don't have access to real-world processors. There are two DARPA programs building on open hardware, POSH and SSITH. SSITH is not specifically about RISC-V, but it gives a platform for experimenting. "And if RISC-V hasn't taken over the world by the time the program ends, we can move it to another platform."

Formal verification successes. Formal techniques for software security have outpaced those for hardware security. The first innovation was formalization of security properties, and then proofs that those properties are preserved in the (software) implementation. We need a similar approach for hardware.

Codesigned HW/SW security, perhaps using a capability model such as CHERI. Capabilities provide system-wide security guarantees and information flow control (SAFE). [A capability is a hardware key that gives access to a resource. Think of it as a hardware version of one of those links you can create on Dropbox or OneDrive that you can then email to someone else, giving them access.]

Groups

We then split into groups and discussed various aspects of the problem. Spoiler alert: we didn't solve the hardware security problem. A few points from the group presentations that I think are worth calling out:

Standards for security, and metrics to measure them.
The DoD can't tell a prime contractor what it wants and have the contractor go and do it, since these are lacking.

Supply chain security. We need a more formal approach to end-to-end security, and also a change in attitude from security as boolean (secure/insecure) to security as statistical and probabilistic.

Analogy: what is the best security system created by Mother Nature? The immune system. We do vulnerability protections (wear masks, wash hands), but our body also responds to attacks, so we don't get wiped out by new viruses. The body has a "machine learning" system to see if it needs more defenses ("I feel sick") and engage them (get medicine, go to the doctor). The immune system has anti-fragility and adapts through hypermutations of the T-cells. That is why immunization works for years. We could draw on these ideas.

I'll give Walter Weiss the last (depressing) word: attackers have been wildly successful at everything they've tried. And they stop when they are successful, so we don't know what they are capable of.

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.