
Linley Gwennap on the Microprocessor Market

Linley Gwennap always gives the opening keynote at the Linley Microprocessor Conference. This year, he titled it Processor Innovation Supersedes Moore's Law. The basic theme was something that I've talked about before (for example, in Are General Purpose Microprocessors Over? and Fifty Years of Computer Architecture: the Last 30 Years).

Moore's Law Is Over

The theme is that we no longer get better processors just by sitting around waiting for Moore's Law to do its thing, as in the past. One reason is the breakdown of Dennard scaling, which means that clock rates can't increase much. The other is that the cost per transistor is no longer going down much with each process node. It is interesting, for example, that the die size of Apple's A11 has gone down: Apple can't simply double the number of transistors, since that would double the cost, and there may not be enough things of value to add to the chip anyway. Linley showed a graph of the Linley Group's view of cost per transistor. There are lots of different versions of this graph (Intel, for example, shows the cost continuing down along the Moore's Law line), but generally people believe that the cost is not declining as it did in the past.

The consequence is that if you want better performance, the only way to get it is with specialized processors tied to the application. General-purpose processors have too much overhead, whereas specialized processors can do a lot more per "instruction" and so reduce that overhead (the sketch at the end of this section illustrates the idea). For example, if the instructions are already well ordered, there is no need for instruction-reordering hardware. Of course, the obvious problem with specialized processors is that they only work well on the problem they are targeted at. A GPU is not a great solution for digital signal processing, for example, and vice versa. The future is clearly that the general-purpose CPU handles legacy code and coordinates the work of accelerators, which take on the big meaty algorithms like search, training neural networks, or the signal processing for 5G modems. Another consequence is that CPU performance becomes less important, since the heavy lifting is done by the accelerators.
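To make "more work per instruction" concrete, here is a dot product written two ways in C. This is my own sketch, not anything Linley presented, and it uses x86 FMA intrinsics only as a familiar, mild example of specialization: the scalar loop does one multiply-add per instruction, while each _mm256_fmadd_ps instruction does eight.

```c
#include <immintrin.h>  // AVX/FMA intrinsics; compile with -mavx2 -mfma
#include <stddef.h>

// General-purpose version: one multiply-add per loop iteration.
float dot_scalar(const float *a, const float *b, size_t n) {
    float sum = 0.0f;
    for (size_t i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}

// "Specialized" version: each _mm256_fmadd_ps performs eight
// multiply-adds, so the instruction stream shrinks by roughly 8x.
// Assumes n is a multiple of 8 to keep the sketch short.
float dot_fma(const float *a, const float *b, size_t n) {
    __m256 acc = _mm256_setzero_ps();
    for (size_t i = 0; i < n; i += 8)
        acc = _mm256_fmadd_ps(_mm256_loadu_ps(a + i),
                              _mm256_loadu_ps(b + i), acc);
    float lanes[8];
    _mm256_storeu_ps(lanes, acc);
    return lanes[0] + lanes[1] + lanes[2] + lanes[3]
         + lanes[4] + lanes[5] + lanes[6] + lanes[7];
}
```

A GPU or a dedicated accelerator pushes the same idea much further: fewer instructions fetched, decoded, and reordered per unit of useful work, which is exactly where the general-purpose overhead goes.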
Deep Learning

Of course, one area where specialized processors do much better than general-purpose CPUs is deep learning, both for training (typically in the cloud) and inference (often on the edge device). For training, the three most attractive approaches are:

- GPUs, which provide a huge amount of 32-bit floating-point processing with high parallelism
- FPGAs, which are a sort of blank slate on which you can build a specialized learning engine
- Specialized deep-learning processors, such as Google's TPU and SoCs from a number of startups

For inference in the cloud, the same approaches can be used. But on the device, with a limited power and performance budget, specialized DSPs are the best approach. These are sometimes called neural engines, and the sketch at the end of this section shows the kind of inner loop they accelerate. A good example is the Tensilica C5 DSP, which Cadence presented later that morning (see my post Scaling Embedded Inference Performance for Deep Learning). On the second day, the keynote was by Chris Rowen, who covered this topic in more detail. Watch for that post tomorrow.
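The workhorse operation in embedded inference is a multiply-accumulate over low-precision (often 8-bit) data. The C below is a generic illustration of that inner loop, not the Tensilica C5's actual programming model (Cadence documents that separately); a neural engine executes many of these MACs per cycle at far lower energy than a CPU running the equivalent scalar code.

```c
#include <stdint.h>
#include <stddef.h>

// One output neuron of a fully connected layer with 8-bit weights and
// activations: multiply-accumulate into a wide (32-bit) accumulator so
// the sum doesn't overflow. The scale/zero-point handling of a real
// quantized model is omitted to keep the sketch minimal.
int32_t neuron_int8(const int8_t *weights, const int8_t *activations,
                    size_t n, int32_t bias) {
    int32_t acc = bias;
    for (size_t i = 0; i < n; i++)
        acc += (int32_t)weights[i] * (int32_t)activations[i];
    return acc;
}
```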
Server Processors

The huge growth in the size of datacenters, especially public cloud datacenters, drives the server processor market, of course. Intel has, in recent history, had a huge market share, above 95%, and hopes to maintain it with the Xeon Scalable family (now available in platinum, gold, and silver to replace E3, E5, etc.). This has more cores and more memory bandwidth, but only a small performance increase. Meanwhile, AMD has re-appeared in the server market with its new EPYC processor, which has performance roughly equivalent to Intel's but with even more memory bandwidth. It is built in the GF 14nm process (actually the Samsung process), which is behind Intel's in terms of dimensions (Linley describes it as "half a node" behind). The datacenter operators obviously want competition in the processor market, and AMD has wins at Tencent, Baidu, Microsoft, Dell, HPE, and Lenovo.

Meanwhile, there are developments in other (non-x86) architectures:

- IBM created OpenPower to promote the architecture in third-party servers
- Oracle stopped SPARC (the old Sun architecture) development and laid off the team
- Qualcomm is shipping its first server chip, Centriq, a 10nm chip with up to 48 Arm processors and 6 DRAM channels
- Cavium's ThunderX2 is nearing production, with up to 56 Arm processors, 6 DRAM channels, and accelerators
- AppliedMicro's X-Gene is "in limbo" after the Macom acquisition

High-end embedded is also moving to Armv8 (Cavium, NXP, Broadcom, Mellanox...), with the result that embedded processors are moving up into the datacenter (especially in network processors) and server processors are moving down into high-end embedded. Another driver of this is the move towards open-source solutions for network function virtualization, in particular OPNFV.

Internet of Things

System architecture, at a very high level, is increasingly a cloud datacenter plus standalone devices, aka edge devices. This is the Internet of Things, or IoT. Industrial IoT is taking off, driven by the economic return, but consumer IoT is slow to take off since the use-cases are not compelling. The prices are too high for the functionality, the devices are too hard to use, and the lack of standards means that you can't just buy, for example, a smart light bulb and have it "just work" with whatever you already have in your house.

Of course, the most interesting "edge devices" are vehicles. As the cost of adding autonomy or advanced ADAS falls, deployment will increase, and the price point of the vehicles where it makes economic sense will decline (it is easier to justify a $5,000 addition on a $60,000 car than on a $10,000 car). If insurance rates drop to acknowledge the decreased accident risk, that could also help drive adoption: if your car payments are $1,000/year more to cover autonomy, but your insurance is $1,000/year less, that is an easy decision. Of course, a lot of what is required for autonomy is deep learning and vision (see above and tomorrow).

Processor IP

Processor IP trends mirror the same trends, in particular the move towards more specialized processors (such as our Tensilica P5, C5, P6...err, this naming is as confusing as Arm's, but in a different way). Interest in RISC-V continues to grow, with new cores available from Cortus, Andes, and SiFive, and more software support. If you are interested in RISC-V, the next workshop is November 28-30 (information and registration). On the GPU front, Apple deployed its own GPU in the A11, which is significant because previously it used a version of Imagination's PowerVR GPU. Meanwhile, Imagination split its MIPS business from its GPU business, and then the whole company, or maybe only part of it, may or may not be bought by Chinese money fronted by Canyon Bridge.

Summary

The one-line summary is that processors are getting more specialized, especially for deep learning, which is a large, dynamic, fast-growing market. IoT is taking off more slowly than forecast. Intel is still king of the datacenter, but AMD is back to being competitive, and Arm's licensees are mounting a serious attack.

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
