In early October, the Senate Commerce, Science, and Transportation Committee unanimously approved the first comprehensive federal legislation regulating self-driving cars. The bill, which now goes to the full Senate, is very similar to the SELF DRIVE Act already approved by the full House of Representatives in early September. The measure would allow the deployment of self-driving cars without human controls. Within three years, the bill would allow automakers to each sell up to 80,000 self-driving vehicles annually if they can demonstrate the vehicles are as safe as current ones. Auto safety advocates complained it lacked sufficient safeguards. Under the measure, states could not put up regulatory roadblocks, but lawsuits over defective vehicles would still be allowed.
The measure has the backing of a major industry coalition including automakers such as Ford and General Motors, as well as a host of technology companies including Alphabet (parent of Google) and Intel.
But despite this advance on the legislative side, there are still some major issues that must be resolved before we start seeing lots of driverless vehicles on the road. In the wake of a slew of high-profile cyber-attacks, the issue of car hacking has moved to the forefront of the list of concerns for self-driving cars. The Department of Transportation, in conjunction with the National Highway Traffic Safety Administration (NHTSA), has spent several years working on proposed rule-making for self-driving cars. In its latest revision of those guidelines, “Automated Driving Systems 2.0: A Vision for Safety,” released this September, the DOT specifically calls on the developers of self-driving cars to address cybersecurity concerns. After all, having your computer hacked while you’re at your desk is one thing. Having your car hacked while you are traveling at 65 miles an hour could be life-threatening.
So far most of the self-driving car guidelines have been voluntary, but by the time these cars hit the road, some cybersecurity protections could well become a mandate.
There’s already evidence that automotive systems can be hacked. A case in point: a series of experiments found that a hacker with either wired or over-the-internet access to a vehicle could disable or slam on a victim’s brakes, turn the steering wheel or even cause unintended acceleration. An even more serious concern is the possibility of hacking a moving vehicle to turn it into a weapon.
We spoke about some of these cybersecurity issues with Michael McCauley, CEO of Quadrant Information Services, which provides analytical data to the insurance industry. He says:
I think with any technology regardless of its level, there’s always hacking potential. Every measure done to protect it will be hacked and someone will get through. That’s an ongoing issue and I don’t think that’ll go away. God only knows why people want to hack things. They do it just for the challenge. But I think that someone could get access to something and do a lot of harm, could disrupt it, but that’s always a concern of mine with the autonomous vehicles. You’re working off the big system someplace in the cloud if it’s automated there’s the opportunity for someone to break into it and there’s that exposure. I don’t think there’s any way around it. If it’s built, it can be hacked.
I asked McCauley about whether there’s a greater risk for cars that can communicate with the cloud, or to other vehicles, than for those that are truly autonomous:
I don’t know that perhaps the car Ford, Volvo, or anyone else are thinking about are going to be anything but self-contained. But even if it’s self-contained, it’s still relying on the GPS so you’ll find if we’re concerned about hacking, there’s still a way to hack that. Whether they hack the GPS system or hack the individual car, I don’t know, I’m not a hacker, they’ll find a way around it. I think the cars will be somewhat self-contained, not dependent upon a big UBER system that decides where your car is going. The technology I see coming out, I don’t see it as being part of a big network of a bunch of cars. Cars will need to be reliant on their own technology to run.
The issue of cybersecurity has become a significant concern for the companies promoting self-driving cars. Uber tells us it has a team dedicated to building a system that is protected against cyber-attacks. At one point, some of that team included vehicle hackers themselves. Uber says having folks on the team who have experience hacking into the cars already on the road will allow it to better secure Uber’s system since they understand the vulnerabilities that currently exist in traditional, non-self-driving vehicles.
Both Uber and archrival Lyft are already testing self-driving vehicles, and both have partnerships within the automotive industry to get the cars on the road. Uber has trials taking place in both the Pittsburgh, Pennsylvania area and in Tempe, Arizona. Riders in both locales may be matched with a self-driving Uber when they request an UberX. Right now, these self-driving Ubers have an operator in the front seat to monitor vehicle behavior.
Uber says that in just one year since launching its Pittsburgh pilot its cars have driven over 1 million autonomous miles and completed over 30,000 trips. Uber just released this video that addresses some of the advanced technology that goes into its driverless vehicles:
Uber has development partnerships with both Volvo and Daimler. Lyft, which has a pilot program in San Francisco, recently announced a partnership with Ford and has already been working with several other companies including Waymo (a Google company), Jaguar, and GM, which is a major investor. Lyft has indicated that it wants to build its own system, so it won’t be totally tied to the timeline of its partners.
Both Lyft and Uber see self-driving cars as critical to the future of their businesses, and each would like to be first to market with a viable system to grab the technological and financial advantages of having the first successful self-driving fleet.
But even after dealing with the automakers, the government, and the technology, there’s still at least one major issue that could hold up driverless cars: who is liable? If a self-driving car must choose between swerving off the road, potentially killing an occupant, and hitting a child who has run into the street, who will have to pay?
QIS’s McCauley says there’s no clear answer. The person in the front seat isn’t really the driver, so how can that person be responsible? Could it be the car’s maker? Or the provider of the navigation software? No one knows for sure. And the insurance companies have plenty at stake: billions of dollars in liability-insurance premiums.
When I ask insurance companies what they think of autonomous vehicles, they kind of all look at each other like we don’t know what to say about autonomous vehicles because we don’t have anything to make an assessment. So that’s kind of where insurance is. First of all, it’s my car, but it’s the manufacturer’s technology so what am I insuring? I’m not driving the car. Your technology is driving the car. Are you insuring it, or am I insuring it? And no insurance company has been willing to even address that. I guess the one thing that Allstate had told me is they kind of laughed about autonomous vehicles and said, “We’re going to let the courts decide.” I guess they’re leaving it to the courts to decide who really is at fault.
But McCauley believes that the process is now inevitable:
The technology can’t be stopped now. It’s not just a Jetsons mentality. Opportunity is there to build something that hasn’t been built before. In fact, I think Ford Motor Company has projected that in 2021 they’ll have a car for sale that is an autonomous car, and that’s not a prototype, that’s a ‘go down to your Ford dealer and buy an autonomous car.’ And 2021 is not too far away.