Deep into that darkness peering, long I stood there, wondering, fearing, doubting, dreaming dreams no mortal ever dared to dream before. —Edgar Allan Poe

I have a problem. As I try to go to sleep at night, I allow my mind to wander, and I think of pictures and shapes and geometrical quandaries and designs about cities and smart cars, and then I get so interested in what I'm thinking about that I can't go to sleep. Counting sheep doesn't help. And since I am not an AI, electric sheep don't make a difference... how many artificial sheep can jump over the electric fence? What is the purpose of electric sheep? Do they grow acrylic instead of wool? And so on.

By the way, the title of this post refers to Philip K. Dick's short story, Do Androids Dream of Electric Sheep?, which served as the basis for the movie Blade Runner—and gives us all a whole 'nother thing to think about as we try to go to sleep at night.

This blog post is the result of one of those nights of thinking about stuff. Since learning about the Deep Dream Generator using convolutional neural networks to create computer images, about Nvidia researchers having built a system that can alter street photos to show different weather conditions (see page 7), and now about Google exploring a technology that can generate its own videos—not to mention how Nvidia is using what the New York Times is calling a "cat-and-mouse" game to create increasingly plausible "fake" images of people—everyone keeps saying that our perception of reality is going to change in this era of AI.

In this era of "fake" news, and of people calling news "fake" when it's actually real, is the end game of this line of developments the inability to distinguish between what is real and what isn't? (Keep in mind, though, that with the advent of the moving pictures of the early 1900s, the image of a moving train coming toward the movie screen caused people to become quite alarmed and women to faint.
So this question of determining what reality actually is has been part of our culture... well, for the last hundred years, at least.)

The thing that gives me pause is the fact that we don't know how these unsupervised learning machines work. What happens when we literally let these bots loose on the streets? How long before they become self-aware?

Supervised vs. Unsupervised

As a reminder, supervised learning is when we "train" a system to recognize something and then act on it, like a home security system that uses some kind of recognition to open a door when it recognizes a family member. It's a 1:1 correspondence: Meera's face (or voice or fingerprint or whatever) = open the door; not Meera's face = keep the door locked. Unsupervised learning is when we start a system off by giving it some basic data, then give it oodles of data and tell it to figure out how to respond to that data; for example, the system I described in Artificial Intelligence: A Love Story. That one involved a bit of research done by Facebook to have two machine entities learn how to haggle—but in the experiment, the researchers neglected to tell the machines to use English to do so. The result was that the bots made up their own language to perform the negotiation. The experiment was shut down because the results weren't relevant to that part of the research, but it highlighted the fact that we simply don't know how these systems work, even though they were built by humans. Are there implications for how language developed, or are we more interested in the fact that it didn't take long for computers to transcend our understanding of language? As Adrienne LaFrance, editor of TheAtlantic.com, said:

Already, there's a good deal of guesswork involved in machine learning research, which often involves feeding a neural net a huge pile of data, then examining the output to try to understand how the machine thinks.
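To make that contrast concrete, here is a toy sketch in plain Python. Everything in it is my own illustration, not the API of any real security system or ML library: the supervised side is just a learned 1:1 lookup from labeled examples, and the unsupervised side is a minimal one-dimensional two-means clustering that groups unlabeled numbers on its own.

```python
# Supervised: a 1:1 mapping learned from labeled examples.
# "Meera's face" is reduced to a known identifier for this sketch.
KNOWN_FACES = {"meera": "open the door"}

def check_face(face_id: str) -> str:
    # Anything the system was never trained on keeps the door locked.
    return KNOWN_FACES.get(face_id, "keep the door locked")

# Unsupervised: no labels at all -- the system groups raw data itself.
def cluster_points(points, iters=10):
    """Minimal 1-D two-means clustering: the 'oodles of data' sort themselves."""
    centers = [min(points), max(points)]
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            # Assign each point to its nearest center.
            nearest = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
            groups[nearest].append(p)
        # Move each center to the mean of its group.
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers

print(check_face("meera"))     # -> open the door
print(check_face("stranger"))  # -> keep the door locked
print(cluster_points([1, 2, 3, 10, 11, 12]))  # -> [2.0, 11.0]
```

The point of the sketch: the supervised function can only ever echo back the answers it was trained on, while the clustering function discovers structure nobody labeled in advance—which is also why its behavior is harder to predict.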
But the fact that machines will make up their own non-human ways of conversing is an astonishing reminder of just how little we know, even when people are the ones designing these systems.

AI and Art

When I look at the Deep Dream Generator, I can't help but wonder about the aesthetic "decisions" made by the AI used to create it. This isn't just a human using Photoshop to manipulate an image. The way this system works is that an image is fed through an unsupervised learning bot multiple times, with certain features—decided by the bot itself—enhanced and made stronger with each iteration. Depending on the initial training it received, the bot "saw" things in the images and enhanced them; for example, the "Deep Dream" image shows fish and birds and dogs and eyes—probably because the images the bot received in the initial training stage heavily favored animals. Other versions combined images using "Deep Style," which allowed two or more images to be combined into something entirely new.

Not only are we not sure how this actually works, but it raises so many questions about art. What is art? I'm not going into Philosophy 101 about the difference between good art, bad art, and not-art. Anyone who creates anything, whether it be art or the written word or code or architecture or engineering or even the perfect egg salad sandwich, should answer this question somehow; otherwise, you can't distinguish quality in your own work. I have always worked under the definition that art is what the artist believes in their own heart is art. (The distinction between good and bad art is a bit muddier, and is a topic for another time.) But when it comes to these amazing images by the Deep Dream Generator, who makes the decision between art and non-art? Is it the creator of the art-bot? Is it the art-bot itself? It can't be the artist, which is a machine. Can it?

Wrapping up

I don't have any answers.
I'm just thinking about this stuff, wondering about dreams and art, and thinking about the future as I prepare to attend the latter half of CES in Las Vegas next week. I'm thinking about dreams as I try to sleep at night, and when thinking about AI, I can't help but wonder what AI dreams look like. And to quote Philip K. Dick, do androids dream of electric sheep? I suppose we'll have to ask them to keep some dream journals before we can find out.

—Meera

P.S. Do come and visit Cadence at CES. We are at MP25777, South Hall 2. Find the drones and keep going (really, we are literally behind the area where all the drones are)! Also, check out Paul's CES Preview post in Breakfast Bytes about some of the cool demos that we will be sharing.

P.P.S. And happy new year! Please shoot me an email at mcollier@cadence.com if you have an idea you'd like me to explore here, or have thoughts or comments about what I've written. As this post shows, leaving my brain to ferment in isolation leads me to wonder about some pretty bizarre stuff!