Sometimes old ideas become new again. That may be the case with analog signal processing, which is far more efficient than digital signal processing in some applications, according to Boris Murmann, associate professor of electrical engineering at Stanford University. Murmann gave a keynote speech titled “Back to the Future: Analog Signal Processing” at the recent Mixed-Signal Summit held at the Cadence San Jose headquarters on Oct. 28, 2014. In his talk, Murmann described electronic systems where analog circuitry can play a significant role in the acquisition and processing of information. His examples included ultrasound imaging, power amplifier linearization, and machine learning.

While all computing was originally analog, and people built analog computers into the 1960s, the situation today is very different, Murmann said. “Most of the stuff we call mixed-signal has the same boring block diagram,” he said. “It has some kind of transducer followed by an amplifier, maybe a mixer or filter, and an ADC [analog to digital converter], and then comes this big digital brain that does all the fun stuff. You could think of the analog front end as just a waveform mapper.”

Murmann is convinced that we can do better. “I want to talk about some ideas for giving the analog circuitry a much more active role in terms of extracting or processing the information that we care about,” he said. This can be done with “analog to information interfaces” that extract real information, not just precisely mapped waveforms.

Ultrasound Imaging

One application that can benefit from such an interface is ultrasound imaging. It’s a huge market that ranges from big machines in hospitals to handheld devices that are just starting to emerge. Crack open an ultrasound machine, Murmann said, and you’ll see massive parallelism coming from the number of transducers it has. This number may range from 64 to 512, and each transducer sends sound waves into a human body and measures the reflections.
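To make the scale of that raw acquisition concrete, here is a back-of-envelope calculation in Python. This is our illustration, not a figure from the talk; the channel count, sample rate, and resolution are representative values Murmann mentions, not the spec of any particular machine.

```python
# Aggregate raw data rate of a parallel ultrasound front end.
# Representative figures: 256 channels (the talk cites 64 to 512),
# ADCs at 50 mega-samples/second, 12 bits per sample.
channels = 256
sample_rate = 50e6        # samples per second, per channel
bits_per_sample = 12

raw_rate = channels * sample_rate * bits_per_sample
print(f"{raw_rate / 1e9:.1f} Gb/s of raw samples")  # 153.6 Gb/s
```

Roughly 150 Gb/s of samples go in, and only about 20 frames per second of imagery come out, which is the mismatch Murmann calls out.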
What you might end up with is hundreds of channels, A/D converters that often run at 50 mega-samples/second, and perhaps a dozen bits. This all goes into a DSP, and what comes out is a “lousy” output of 20 frames per second. “And that is incredibly ridiculous,” Murmann said.

One problem, he said, is that designers are sampling at the Nyquist rate, which requires sampling at least twice as fast as the maximum signal frequency. But it turns out that the ultrasound signal is actually sparse in the time domain. Each reflection carries just two pieces of information: arrival time and amplitude. “We sample the hell out of this pulse again and again, and we acquire absolutely no information from most of these samples,” Murmann said. “This explains why we are digitizing so many bits to make a very lousy image at the end.”

So what should we do instead? Sample at the “rate of innovation” rather than the Nyquist rate, Murmann said. This methodology relies on the assumption that ultrasound signals can be expressed with a parametric model that is sparse in time. You acquire a number of samples proportional to the degrees of freedom in the signal. The parametric model may have to estimate 200 numbers, but it doesn’t need gigabytes of data—you can get by with a very reduced data set, Murmann said.

There are some tricks involved in making this happen, Murmann said. You can’t just sample at a low rate, as you won’t reliably pick up meaningful Discrete Fourier Transform (DFT) coefficients. What you can do is use a low-pass filter to help you remember the occurrence of pulses in time. Another tip is to maximize signal-to-noise ratio (SNR) by doing a quadrature down-conversion. “Grab the chunk of the spectrum that you feel is most useful for estimating limited degrees of freedom,” he said.

As shown below, the final architecture employs sub-array beamforming, a small number of low-rate channels, final beamforming in the frequency domain, and a shared reconstruction algorithm.
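The rate-of-innovation idea can be sketched numerically. The snippet below is our illustration, not code from the talk: it models a reflection stream as K weighted pulses and recovers each pulse’s arrival time and amplitude from just 2K+1 Fourier-series coefficients, using the classic annihilating-filter (Prony) step from finite-rate-of-innovation sampling. The K=3 setup and all variable names are assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T = 3, 1.0                       # 3 pulses on a period of 1 time unit
t_true = np.sort(rng.uniform(0, T, K))
a_true = rng.uniform(1, 2, K)       # amplitudes paired with t_true

# Fourier-series coefficients X[m] = sum_k a_k * exp(-2j*pi*m*t_k/T),
# m = 0..2K: only 2K+1 numbers, proportional to the degrees of freedom.
m = np.arange(2 * K + 1)
X = (a_true * np.exp(-2j * np.pi * np.outer(m, t_true) / T)).sum(axis=1)

# Annihilating filter h (length K+1): sum_l h[l] * X[m-l] = 0 for m >= K.
# Stack those equations into a matrix and take its null vector via SVD.
A = np.array([[X[i - l] for l in range(K + 1)] for i in range(K, 2 * K + 1)])
h = np.linalg.svd(A)[2].conj()[-1]

# Pulse times come from the phases of the filter's roots; amplitudes then
# follow from a small Vandermonde least-squares solve.
roots = np.roots(h)
t_est = np.sort(np.mod(-np.angle(roots) * T / (2 * np.pi), T))
V = np.exp(-2j * np.pi * np.outer(m, t_est) / T)
a_est = np.linalg.lstsq(V, X, rcond=None)[0].real
```

With noiseless data, the three (time, amplitude) pairs come back to numerical precision from only seven coefficients, versus the thousands of Nyquist-rate samples a 50-MS/s ADC would take over the same window. Noisy data typically calls for extra coefficients and a more robust filter estimate, which is where the low-pass filtering and quadrature down-conversion tricks above come in.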
Research shows a potential sample rate reduction of 20X to 50X compared to the Nyquist method. “So this turns out to be much more efficient than digital signal processing,” Murmann said. “What this may allow you to do is to build machines that are just not possible today.” On the other hand, he acknowledged, “we do lose a little bit of image quality.”

Power Amplifier Linearization

Murmann’s research group at Stanford is working to make power amplifiers more efficient. Today, he noted, basestation designs acquire a wideband spectrum and digitize giga-samples/second just to estimate 10 to 50 parameters. Prior work has used aliasing to sample at the Nyquist rate of the input signal. In this approach, the acquisition bandwidth of the analog-to-digital converter must cover the full signal bandwidth. But “it can be shown that you do not need to acquire this entire spectrum,” he said.

Murmann’s approach, called DC down-conversion, obtains spectrum samples according to degrees of freedom and remains independent of signal bandwidth and the Nyquist rate. “If we have 10-50 coefficients, we ought to be able to get these coefficients by taking the same number of samples in some domain,” he said. “If you are taking the right samples in the frequency domain, you can perfectly back-annotate power amplifier coefficients.”

After showing how this can be done, Murmann noted that “in our approach you converge after 100 samples. In the traditional Nyquist-centric approach, you take a thousand times more samples. If you know where the information is, you don’t want to acquire anything more than the information.”

Moving Towards Machine Intelligence

With the oncoming Internet of Things (IoT) era, “everything in the future will have to be much more intelligent,” Murmann said. IoT applications will include driverless cars, real-time language translation, natural speech processing, and robots. The problem is that you can’t have a trillion sensors sending data out continuously.
“It all has to be processed locally,” Murmann said. However, 16-bit accuracy may not be needed. You could customize digital logic and give it a narrow bit width, but another approach is to use analog computation. With analog, he noted, you’re not bound by thermal noise, and you can go almost arbitrarily low in power dissipation. “There is an opportunity to gain an order of magnitude or so by doing these computations in the analog domain.”

Murmann’s research group is currently taking on this challenge by doing multiply and add functions with switched capacitors. These switched-capacitor circuits have digital inputs and outputs, and can be plugged right into a digital chip, but they use significantly less energy than digital logic. A remaining challenge, however, is test.

Murmann acknowledged that his ideas are not new, but he thinks the time is right. “A digital steamroller came in and flattened all these ideas, simply because process technology was improving way too quickly. News flash—this is probably coming to a stop. I don’t see energy [per digital gate] scaling dramatically anymore.”

You can read more about Murmann’s work at his Stanford website.

Richard Goering

Related Blog Posts

- Mixed-Signal Summit Panel: Why IoT Design is Harder Than it Looks
- The Elephant in the Room: Mixed Signal Models
- Mixed-Signal Keynote: Design Challenges at the Analog/Digital Interface