At HOT CHIPS 2017, we had a special break so we could watch the eclipse. Of course, Cupertino wasn't on the path of totality. We had to share a very limited number of eclipse glasses, since the ones the conference had organized were stuck in customs. Eclipse glasses are not a lot of use after the event, so I expect you can get a really good deal on a pair and get ready for the next total eclipse in 2019. But you will need to go to South America to use them.

Carbon + Silicon = Artificial People

The afternoon keynote was by Dr Philip Alvelda, a former DARPA program director who started Cortical after leaving the agency. His talk was about combining AI and neuroscience, or, more catchily, "Carbon + Silicon = Artificial People."

DARPA was working in these areas before Philip arrived. It is very interested in controlling things directly with the brain, both for prosthetic devices for wounded soldiers and to add superhuman capabilities. There are lots of challenges, since we don't yet understand the brain well, and it is hard to build a high-bandwidth interface.

One early project, from 2008, is a monkey controlling a robot arm. This was the first time a creature controlled an electronic system with several degrees of freedom entirely under mental control. See the video (this may not be exactly the one he showed).

The trend has been to increase the number of degrees of freedom and to make the control operate faster. Jan Scheuermann, a quadriplegic with electrodes implanted in her brain, has been on 60 Minutes and other news programs over the years. The dream is to be fast enough to catch a ball, with enough degrees of freedom to play the piano.

One big limitation is that you can't implant any device in a person without the FDA having approved it, and approval has been limited to the "Utah array", which is really 1980s technology first available in 2005. It has 0.1mm resolution, just 100 wires, and a noisy signal. In 2017, this was the only implant in use. But what is needed is to be able to read a million neurons and write a hundred thousand. There are new technologies approaching this, which work in animals but haven't yet been pushed through the FDA.

Analysis of what is going on in the brain (to understand it) has advanced a lot, and we can now look at the cortex of a rodent and see individual neurons computing. However, we don't know what all those flashes of neurons mean, although machine learning is being applied to the problem. It has been known for some time that pulsed neuron coding is used, but we didn't know the code. The channel capacity is 2 bits per neuron, and with Shannon limitations that seems insufficient. But behavioral experiments with patterns that are very close together (like two similar audio tones) show a couple of orders of magnitude higher performance than those theoretical channel limits. There is a lot more going on than discrete bits and firing rates; we need to learn more of the tricks.

The brain peaks at around 25-30W of power, computing at GFLOP rates. We will never get to connecting a million neurons the way we have been going about it. Another problem is that the wires mean there is an open wound that cannot heal entirely, so an implant only lasts about six months. But now we can start to use optics and planar photonics, and the interfaces are starting to be standardized.
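To get a feel for why a million-neuron interface is so far beyond a 100-wire array, here is a rough back-of-the-envelope sketch in Python. The neuron counts and the 2-bits-per-neuron figure come from the talk; the 10Hz average firing rate is my own assumption for illustration.

```python
# Illustrative arithmetic only: the read/write targets and bits-per-neuron
# figure come from the talk; the firing rate is an assumed round number.

NEURONS_READ = 1_000_000       # read target from the talk
NEURONS_WRITE = 100_000        # write (stimulation) target from the talk
BITS_PER_NEURON = 2            # per-neuron channel capacity from the talk
AVG_FIRING_RATE_HZ = 10        # assumed typical cortical firing rate

# Aggregate data rate if every neuron's spike train is captured independently.
read_bps = NEURONS_READ * BITS_PER_NEURON * AVG_FIRING_RATE_HZ
write_bps = NEURONS_WRITE * BITS_PER_NEURON * AVG_FIRING_RATE_HZ

print(f"read  ~{read_bps / 1e6:.0f} Mbit/s over {NEURONS_READ:,} channels")
print(f"write ~{write_bps / 1e6:.0f} Mbit/s over {NEURONS_WRITE:,} channels")

# A Utah array's 100 wires cover a vanishing fraction of the read target.
print(f"Utah array coverage of read target: {100 / NEURONS_READ:.2%}")
```

Even this crude estimate asks for ten thousand times more channels than a Utah array provides, which is why optical and photonic approaches look so attractive.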
Neural Engineering System Design

DARPA has been seeding neuro-engineering and there is a lot going on. As Philip said, "everyone takes your call at DARPA because you've got a lot of money." All of these technologies existed, but they were stove-piped, and universities didn't have access to everything. Everything from computer science to fabrication to regulatory approval was needed, and everyone had to be persuaded to talk to each other. DARPA hosted workshops for three years, defined a well-funded program called NESD (Neural Engineering System Design), and it turned into a three-continent effort.

There is some freaky stuff being considered, like genetically modifying neurons to emit light so that thinned CMOS arrays can be used to build a high-bandwidth interface with a patch external to the head. Today the state of the art is a cochlear implant with 15 signals, but we will go to thousands, allowing the blind to see and the deaf to hear. There is work on meta-AI, combining voice recognition with vision so that speech is understood better when you can see someone's lips. It seems that the hippocampus is where it is all pulled together. Gradually we are understanding more and more of how the brain works, and thus gaining the potential to communicate directly at the neuron level.

For More Information

The DARPA NESD program page has more details. Or see the Cortical website. Sign up for Sunday Brunch, the weekly Breakfast Bytes email.