Occasionally I come across a mishmash of headlines that warrant mention, and these were all reported within the last week!

Uber's Autonomous Fail

And so it begins. A self-driving Uber car struck a pedestrian last night, the first pedestrian fatality caused by a self-driving (SAE Level 4) car. The pedestrian, a woman, crossed the street outside the crosswalk and was struck by the Uber car, which was in autonomous mode at the time. As required for Uber's autonomous test vehicles operating on public roads, the car had a safety driver at the wheel, ready to take control of the vehicle should the system fail or appear to be at risk of endangering others.

The result? Uber has called off its testing of autonomous vehicles in all cities, including Pittsburgh, Toronto, San Francisco, and Phoenix, and rightly so.

We have yet to see how this will play out, and you can bet there is going to be legal fallout from this collision. Who is at fault? Uber? The human "driver" who wasn't driving? The people behind the coding and engineering of the vehicle? Who is the insurance company going to sue?

How did this happen? No one has ventured a guess yet. Uber's spokespeople say the company is "fully cooperating" with local authorities, and the NTSB is sending a team to investigate as well. I am sure Uber wants to get to the bottom of what happened as much as anyone, and I suppose the results of the investigation will determine what the legal ramifications become.

I also can't help but wonder how many pedestrians were killed yesterday in the United States by vehicles piloted by human drivers. What is the level of acceptable risk, and is this single fatality the embodiment of that minuscule "acceptable" risk? Surely the loved ones of the woman who died don't think so. That said, humans are notoriously bad at assessing risk: you are more at risk driving to the airport than flying in the airplane, yet you rarely hear of people being more frightened of riding in a car than of flying.

Ann Mutschler recently published an article on Semiengineering.com, "Anatomy of an Autonomous Vehicle Crash," exploring exactly this situation. She discusses the traceable data required in autonomous vehicles. From the article:

"Traceability is going to be huge with the amount of data that we can actually collect, store and process," said Sundari Mitra, CEO of NetSpeed Systems. "And, it need not all be on a vehicle. There is a certain amount of data that must be computed on the vehicle to make the rapid decisions in the case of a potential accident and the ability to make evasive measures. Of course that needs to happen right then and there, but the data the car has been collecting even seconds and minutes before may not necessarily sit on the vehicle. The traceability piece is going to be a huge advantage of the autonomous vehicles because there is so much information gathering ability in the newer vehicles, which was completely missing in the past. Otherwise, we are relying on the memories of drivers who were emotionally charged at the time of an accident, which is absolutely the wrong time to be confident about remembering anything."

Where the data from a crash is stored so it can be retrieved and analyzed will become critical. Not only is the "disaster" data important; storing the hours and hours of simulation and testing data is a problem that autonomous vehicle makers will also have to solve. A rough sketch of what such an on-vehicle event recorder might look like follows below.
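To make the on-vehicle piece of Mitra's point concrete, here is a minimal sketch in Python of a rolling event data recorder: only the most recent window of sensor frames stays in memory, and that window is persisted to durable storage when a trigger fires. The class name, window size, frame format, and trigger condition are all my own assumptions for illustration; this is not a description of Uber's (or anyone's) actual system.

```python
from collections import deque
import time

# Hypothetical illustration: an on-vehicle "black box" that keeps only the
# most recent sensor frames in memory and persists them when an event
# (e.g., hard braking or a collision) is detected. All names and thresholds
# are assumptions for this sketch.

class EventDataRecorder:
    def __init__(self, window_seconds=30, frames_per_second=10):
        # Ring buffer sized to hold the last `window_seconds` of data;
        # the oldest frames are dropped automatically as new ones arrive.
        self.buffer = deque(maxlen=window_seconds * frames_per_second)

    def record(self, frame):
        # Called for every sensor frame; cheap, because the deque
        # discards the oldest frame once the window is full.
        self.buffer.append((time.time(), frame))

    def snapshot(self, path):
        # On a trigger event, dump the buffered window to durable storage
        # so investigators can reconstruct the seconds before the event.
        with open(path, "w") as f:
            for timestamp, frame in self.buffer:
                f.write(f"{timestamp:.3f}\t{frame}\n")

# Usage: record continuously, snapshot when a trigger fires.
recorder = EventDataRecorder()
for speed in (30.0, 29.8, 12.1):   # toy "frames": vehicle speed readings
    recorder.record({"speed_mph": speed})
if True:                           # stand-in for a real trigger condition
    recorder.snapshot("/tmp/edr_snapshot.tsv")
```

The design choice worth noticing is the bounded buffer: the vehicle never has to store everything, only enough recent context to reconstruct the moments before an event, with the bulk of the fleet's data living off-vehicle, as Mitra suggests.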
(Good thing Cadence specializes in simulation, verification, and emulation…)

It's a thought experiment come alive, and time will tell how this whole thing shakes out. I'll be watching. So will many of you, I expect.

This Call May Be Monitored for Quality Assurance

Again in the no-surprise category, AI is popping up everywhere. As reported today in Wired magazine, it's even showing up in insurance company call centers. To quote the article:

A program called Cogito presents a cheery notification when the toll of hours discussing maternity or bereavement benefits shows in a worker's voice. "It's represented by a cute little coffee cup," says Emily Baker, who supervises a group fielding calls about disability claims at MetLife.

The cup reminds the call center representative to adjust their tone and level of engagement to improve the customer experience, and the voice analysis algorithm also tracks the customer's reactions. For example, when the agent sees a "heart" icon, the software has detected a heightened emotional state, either positive or negative. This gives agents a kind of sixth sense, which may humanize the non-human experience.

I applaud the company for recognizing that calling the insurance company to deal with claims is not high on the list of fun things to do, and that such calls are very often prompted by the worst day of a customer's life. Having a real person on the other end who really does (seem to) care may help mitigate that painful experience.

Cogito grew out of research at the MIT Human Dynamics Lab, with funding from DARPA, to develop an AI platform and behavioral models that interpret human communication and detect psychological states. Of course, I have to wonder whether machines that can detect emotion make us more or less human.

Google Searches for Tracking Public Interest

Pew Research reported last week that Americans' interest in guns can be tracked through Google keyword searches. Using the same kind of analysis they applied to public interest in the Flint, Michigan water crisis and to refugee migration patterns in Europe, they drew four key findings:

- Google search activity for specific gun models tends to rise and fall in a pattern similar to the number of background checks conducted by the FBI
- Google search interest in guns correlates with the population-adjusted number of FBI background checks at the state level
- National search interest in gun models tends to increase during the holiday season
- (And this is the most interesting to me…) Unlike the aftermath of some other recent mass shootings, Google search activity did not increase in the months of the Las Vegas or Parkland attacks

Talking about guns is beyond the scope of this blog, and I leave you to draw your own conclusions about what this means. But I am also interested in the method used to track the madness; a minimal sketch of that kind of correlation check appears below. Using Google search data as a gauge for such things is fascinating. Talk about big data: I can't even imagine how much data Google generates from analyzing keyword searches alone.

This isn't new. Back in 2008, Google Flu Trends (GFT) showed that keyword searches could track flu outbreaks faster than the CDC's traditional data. That said, GFT failed spectacularly, overestimating the 2013 flu season by 140%, and the program was, as reported by Wired magazine, "quietly euthanized".

As I mentioned in a previous blog post about Weapons of Math Destruction, big data is a powerful tool, and it must be wielded with great caution and care.
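Here is that sketch: a minimal Python/pandas illustration of the kind of comparison behind Pew's first two findings, lining up a monthly search-interest series against a monthly background-check series and measuring how strongly they move together. The numbers are fabricated for the example; only the method resembles what Pew describes.

```python
import pandas as pd

# Hypothetical illustration of the kind of comparison Pew describes:
# correlating monthly search interest for a topic with monthly FBI
# background-check counts. All values below are made up for the sketch;
# Pew's actual analysis uses real Google Trends and FBI data.

data = pd.DataFrame({
    "month": pd.date_range("2017-01-01", periods=6, freq="MS"),
    "search_interest": [42, 38, 45, 51, 63, 58],          # Trends-style index (0-100)
    "background_checks": [1.9, 1.7, 2.0, 2.2, 2.6, 2.4],  # millions, fabricated
})

# Pearson correlation between the two monthly series; a value near +1
# means the series rise and fall together, the pattern Pew reports.
r = data["search_interest"].corr(data["background_checks"])
print(f"correlation between search interest and background checks: r = {r:.2f}")
```

A high correlation on real data is still only a correlation, of course, which is exactly the lesson GFT taught the hard way.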
New forms of data processing and more careful algorithms will be needed to bring reliable results to the forefront. —Meera