Crash avoidance technology is fast becoming ubiquitous in new automobiles and is no longer reserved for high-end models and manufacturers, as the race to perfect self-driving technology reaches a fever pitch. All of which raises the question: “Will we actually be safer as a result of self-driving vehicles?”
The push for this technology has been largely born of the hope of reducing the frequency of motor vehicle-related deaths and injuries. After all, statistics show that more than 90 percent of car crashes in the United States have some form of driver error as a root cause. It stands to reason that by eliminating this source of error, we can save lives. But the path to accomplishing this goal presents more challenges than many realize.
First, implementation of these systems is unlikely to occur without problems. Experience drawn from introducing automation in other industries, including aviation, suggests that we should expect to see an increase in crashes and other “adverse events” as these automated systems are rolled out. This may prove to be significant in the field of self-driving technology, as several studies have suggested that people are less tolerant of errors made by automated vehicles and other “artificial intelligence” than they are of comparable human error. We have already seen indications of this in the wake of recent crashes involving self-driving vehicles. For example, on March 18, 2018, a pedestrian in Arizona was killed when she was struck by an Uber vehicle operating in self-driving mode. Just a few days later, a person was killed in Northern California when a Tesla Model X operating in semi-autonomous mode hit a highway barrier and subsequently caught fire. News reports over the past several years include a number of similar crashes, including a 2016 crash in Florida in which another Tesla, running with its “Autopilot” activated, reportedly collided with a truck that the system apparently did not see.
Second, these systems are likely to co-exist with human actors and environmental conditions which limit their effectiveness for many years to come. And the race to implement these systems may actually make that situation worse. At present, there is no universal standard in place to permit these systems to “talk” to each other, which limits their effectiveness and creates the opportunity for compatibility errors. Further, it may take 20 years or more for most of the cars on the road to “turn over” such that automation, up to and including self-driving vehicles, becomes “the norm.” The interaction of human-controlled and computer-controlled vehicles creates new legal questions, up to and including how fault will be apportioned in this “brave new world.” Will we presume the machine was “in the right” and the human was “in the wrong” unless definitive proof exists to the contrary?
At least one study out of the University of Michigan’s Transportation Research Institute has added fuel to this debate, showing that self-driving test cars were involved in crashes at approximately five times the rate of conventional cars. Because many crashes involving conventional vehicles never get reported, the baseline rate for human drivers is understated; yet even after adjusting the data to account for this, the study still concluded that the accident rate for self-driving cars was twice as high. To be fair, the data sample was quite small and the self-driving vehicles were certainly not always at fault, but the study does suggest that the simple introduction of self-driving or other automation will not necessarily improve safety or bring about the mass reduction in crashes which is driving the push for automation.
Finally, another statistical factor may yet play a role: we don’t truly know the rate at which humans successfully avoid collisions. Humans may, in fact, be better than these technologies at reacting to the uncertain, ambiguous situations we encounter every day on our nation’s roadways. Tricky environmental or lighting conditions may prevent sensors from gathering the data they need. And what of the “no win” situations in which a human operator can exercise judgment, but which a self-driving system has no programming to address? Automated vehicles in their present configurations lack the ability to use foresight to avoid potential hazards and instead operate mostly on what happens “in the moment.” These scenarios barely scratch the surface, but they suggest that the best path, at least for the foreseeable future, may be for human operator and machine to work together.
If you’ve been involved in an accident with a self-driving vehicle, you may have a case. Contact a car accident lawyer Trenton, NJ trusts to discuss your options today.
Thanks to our friends and contributors from Davis & Brusca, LLP for their insight into self-driving cars.