Study Finds Autonomous Vehicles Likely Won’t Make Crashes a Thing of the Past

Posted by Marianne C. LeBlanc

Meet Marianne

Marianne is a trial attorney with over two decades of experience representing clients. She is a member of the BBO and served on the Regulators Subcommittee of the SJC Committee on Lawyer Well-Being. With record-setting verdicts in Massachusetts, Marianne's advocacy draws on her commitment to making a difference both for her clients and for the community at large.


Autonomous vehicles have the potential to revolutionize the transportation industry by removing humans from the equation and relying on advanced technology to operate. The development of self-driving vehicles has focused on the safety gains expected once human error is eliminated. However, a recent study conducted by the Insurance Institute for Highway Safety ("IIHS") found that autonomous vehicles will likely prevent only around one-third of all crashes.

Human driver error accounts for more than 9 out of 10 crashes, according to a national survey of police-reported crashes. At face value, that data suggests that autonomous vehicles programmed to completely eliminate human error, such as distraction or impairment, should be able to avoid 9 out of 10 crashes. However, the IIHS study shed light on why that is likely not going to be the case and pumped the brakes on hopes of a crashless future. After analyzing a sample set of 5,000 crashes, the IIHS found that one-third of crashes were caused by perception errors or driver impairment. The study assumed that all of these crashes would be eliminated by autonomous vehicles, on the expectation that the sensors on self-driving cars would never malfunction and that the vehicles themselves could never be impaired.

The other two-thirds of crashes were caused by prediction, decision-making, and performance errors, with speeding and illegal maneuvers making up almost 40 percent. These deliberate human decisions can essentially be transferred to autonomous vehicles through preferences set by the rider. The study therefore found that autonomous vehicles would need to be specifically programmed by the developer to prioritize safety in all situations over rider preferences, such as the speed settings in Teslas that allow the system to exceed the limit at the rider's instruction. Such safety programming might cause a vehicle to slow dramatically, or even come to a stop, as the safest maneuver, where a human driver might instead choose a riskier maneuver for the convenience of speed. Programming autonomous vehicles to choose safety over rider preferences could drastically affect the marketability of the cars, leaving open questions as to how developers will respond to the IIHS's findings.

As with most new technologies, the introduction of autonomous vehicles onto U.S. roadways has not been seamless. In 2018, an autonomous vehicle operated by Uber struck and killed a pedestrian when the system failed to predict that the pedestrian might jaywalk. There have been four other self-driving (not fully autonomous) fatalities since 2016, all involving Tesla vehicles. As more of these cars hit the road for testing, further safety concerns are likely to arise.

SUGARMAN’s team of personal injury lawyers has decades of experience handling motor vehicle accident cases. If you have questions, call 617-542-1000 or email us at info@sugarman.com to connect with a principal today.