I have been working with computers for a long time. I first learned to program in the late 1960s, when a computer was a mainframe in a special room with its priesthood of operators, a long way from the PC, never mind smartphones. Since I was often the only person they knew who actually worked with computers, people would ask me if computers could think. I told them that nothing computers could do at the time would qualify as thought, but that maybe one day it would. I knew nothing much about computer hardware as a teenager, and had certainly never heard of Moore's Law, which was then only a few years old. With the rapid advances in deep learning, AI, big data, vision processing, and that whole family of technologies, the question is being asked again, although it is usually phrased as "will computers one day be conscious?"

David Gelernter and Tides of Mind

One of the compensations of having a long commute is that I listen to a lot of podcasts. One I like is EconTalk, which is hosted by Russ Roberts and has a new hour-long episode every Monday, rain or shine. It has been going for 10 years. Although occasionally the format varies, normally he has a guest who has often just published a book or an important paper, usually on something at least tangentially related to economics, but not always. A couple of weeks ago it was David Gelernter on his book The Tides of Mind: Uncovering the Spectrum of Consciousness. He is a professor of computer science at Yale. Some of what he said in the podcast was interesting, but I want to focus on one small part where he said:

    On the one hand, I think AI (Artificial Intelligence) has enormous potential in terms of imitating or faking it, when it comes to intelligence. I think we'll be able to build software that certainly gives you the impression of solving problems in a human-like or in an intelligent way. I think there's a tremendous amount to be done that we haven't done yet. On the other hand, if by emulating the mind you mean achieving consciousness—having feelings, awareness—I think as a matter of fact that computers will never achieve that. They will never be conscious; they will never feel an emotion. They will never be aware of anything in the sense in which we are aware of things. All the evidence that we have suggests that consciousness is an organic phenomenon, is a biophysical phenomenon associated with a very special type of physics and chemistry. The only instance of consciousness that we're aware of, in the cosmos—granted we've only looked around on this planet, but at any rate, we haven't heard of any other instances so far; and there are many, many kinds of life on this planet. But the only consciousness we're aware of is associated with highly sophisticated and complicated animals, is associated basically with human-like creatures, of whom there are very few compared to the generations of bacteria, which completely dominate any list of all life forms.

I find this a gross oversimplification, since it reduces to "computers can't be conscious, since the only consciousness we know of is built out of organic chemistry, and computers are made of silicon."

Searle's Chinese Room

I had a similar opinion when I first heard about Searle's Chinese Room thought experiment, which dates back to the 1980s. Searle is a philosophy professor at Berkeley. There are various forms of the experiment, but basically there is a machine in a room with a large reference library (this was pre-internet, of course). You pass cards with Chinese symbols on them through a slot, ask any question you want, and the machine inside has a clever algorithm that processes the symbols, produces a response, and answers the question.
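As a toy illustration (my own sketch, not anything from Searle's paper), the room is essentially a lookup from input symbols to output symbols. A trivial Python version, with a made-up two-entry rule book, shows machinery producing plausible Chinese replies while containing nothing anyone would call understanding:

    # Toy "Chinese Room": a rule book mapping input symbols to canned replies.
    # The entries are made up for illustration; Searle's claim is that however
    # big the rule book gets, following it is symbol shuffling, not understanding.
    RULE_BOOK = {
        "你好吗?": "我很好,谢谢。",      # "How are you?" -> "I am fine, thanks."
        "你会说中文吗?": "会一点。",    # "Do you speak Chinese?" -> "A little."
    }

    def chinese_room(card: str) -> str:
        # Look the card up and return whatever the rule book dictates.
        return RULE_BOOK.get(card, "请再说一遍。")  # "Please say that again."

    print(chinese_room("你好吗?"))  # prints: 我很好,谢谢。

The interesting question is not whether this little table understands anything (it obviously does not), but whether that verdict still holds once the rule book is good enough to pass for a fluent speaker.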
Searle's question is whether the machine literally understands Chinese, and his answer is no, it does not. Philosophy graduates seem to regard this as a deep question.

To me, there are three answers to the question of whether computers can think, whether they could be conscious, whether they understand Chinese, and so on. Which answer applies depends on the questioner's definition of things like "understand Chinese" or "exhibit emotion":

1. The questioner's definition is broad: yes, computers can think, understand Chinese, be conscious.
2. The questioner's definition is narrow: no, computers cannot think by that definition. But also, by that same definition, it is not clear that humans can think, understand Chinese, be conscious.
3. The questioner's definition rules out computers thinking (or whatever is being asked about) by defining thinking as something that only biological systems can do.

I think that by any reasonable definition of thinking, and by reasonable I mean one under which it is obvious that humans think, computers will be able to think. By which I mean they will pass stronger and stronger versions of the Turing test, in which you cannot tell whether you are communicating with a computer or a person. I may be philosophically naive, but it seems to me that if you can't tell whether you are getting your Chinese questions answered by a program or a person, then either both of them are "understanding Chinese" or neither of them is.

Can Blondes Understand Chinese?

In fact, imagine another Chinese thought experiment: can blonde-haired people understand Chinese? After all, the most cursory scan of the world population would show that pretty much everyone who understands Chinese has black hair. If you come across a blonde-haired person who can understand Chinese, it would seem totally weird to debate whether he or she "literally understands" Chinese or is just giving a very good simulation of it. We know people can speak foreign languages, we may even speak one ourselves, so that distinction doesn't even seem to have a clear meaning. I don't see why it is different with an artificial intelligence program.

We are not yet at the stage of conscious computers. A version of Searle's Chinese Room argument is "Does Siri understand English?" (or the Android equivalent). A good answer, based on experience, would be "not all that well yet." I think it would seem really weird to reply "No, your phone and the computers in the cloud are all made of silicon, so they are only simulating that they understand English."

Of course, if professors of philosophy and computer science from prestigious universities disagree with me, there may be something I'm missing. Exercise for the reader.