Will First Fatality Affect the Development of Self-Driving Cars?
Self-driving or 'automated vehicle safety' technology holds enormous implications for reducing traffic crashes, and thus fatalities and injuries. An estimated 35,200 people lost their lives on American roads last year, up 7.7 percent from 2014.
"Ninety-four percent of crashes can be tied back to a human choice or error, so we know we need to focus our efforts on improving human behavior while promoting vehicle technology that not only protects people in crashes, but helps prevent crashes in the first place," said National Highway Traffic Safety Administration Administrator Dr. Mark Rosekind in a July 1 press release. [Italics added.]
Details of the May 7 crash were made available by the National Highway Traffic Safety Administration (NHTSA).
"In a statement, [NHTSA] said preliminary reports indicated that the crash occurred when a tractor-trailer made a left turn in front of the Tesla, and the car failed to apply the brakes," reported Neal E. Boudette and Bill Vlasic for The New York Times on June 30.
It is the first known fatal crash involving a vehicle driving itself by means of sophisticated computer software, sensors, cameras, and radar.
The Silicon Valley-based electric car company's driverless technology is known as Autopilot. "Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied," Tesla stated in its June 30 blog post. However, that would not account for the radar's failure to spot the trailer.
A Tesla equipped with Autopilot makes the Model S a "semi-autonomous" car, according to Gear Patrol staff writer Andrew Connor, as opposed to the fully autonomous technology that Google and Uber are working towards. Specifically, Tesla Autopilot is a "Level 2" technology, adds Connor, while Volvo Intellisafe Autopilot is Level 3, and Google Cars Level 4.
"NHTSA defines vehicle automation as having five levels," according to its May 2013 press release. The agency's list below is similar to, but not identical with, the one used by Connor.
- No-Automation (Level 0): The driver is in complete and sole control of the primary vehicle controls – brake, steering, throttle, and motive power – at all times.
- Function-specific Automation (Level 1): Automation at this level involves one or more specific control functions. Examples include electronic stability control or pre-charged brakes, where the vehicle automatically assists with braking to enable the driver to regain control of the vehicle or stop faster than possible by acting alone.
- Combined Function Automation (Level 2): This level involves automation of at least two primary control functions designed to work in unison to relieve the driver of control of those functions. An example of combined functions enabling a Level 2 system is adaptive cruise control in combination with lane centering.
- Limited Self-Driving Automation (Level 3): Vehicles at this level of automation enable the driver to cede full control of all safety-critical functions under certain traffic or environmental conditions and in those conditions to rely heavily on the vehicle to monitor for changes in those conditions requiring transition back to driver control. The driver is expected to be available for occasional control, but with sufficiently comfortable transition time. The Google car is an example of limited self-driving automation.
- Full Self-Driving Automation (Level 4): The vehicle is designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip. Such a design anticipates that the driver will provide destination or navigation input, but is not expected to be available for control at any time during the trip. This includes both occupied and unoccupied vehicles.
According to Tesla's Autopilot software description, "Tesla requires drivers to remain engaged and aware when Autosteer is enabled. Drivers must keep their hands on the steering wheel." The description adds that Autopilot warns drivers about side collisions, not frontal ones like the collision that occurred on May 7. However, the company's homepage states that Autopilot "helps avoid collisions from the front and sides."
The divergent approaches reflect companies with different goals and business strategies. Tesla’s rapid-fire approach is in line with its image as a small but significant auto industry disruptor, while Google — a tech company from whom no one expects auto products — has the luxury of time.
The crash hasn't deterred Tesla or its devoted clientele, who "view themselves as part of the test process for the autopilot feature – which the Palo Alto company describes as still being in beta mode," write Natalie Kitroeff and Samantha Masunaga for the Los Angeles Times.
A prescient June 24 Planetizen post, published five days before the details of the May 7 crash were made public, lists five principles issued by the National Association of City Transportation Officials (NACTO) to make autonomous vehicles urban-friendly. The first bears repeating here:
- Plan for fully automated vehicles, not half-measures: Going halfway with partially automated vehicles, instead of fully automated, would require drivers to take over if the vehicle encounters a dangerous situation. In practice, such vehicles have been shown to encourage unsafe driving behavior, with drivers reading more, texting more, and generally being inattentive while the vehicle is in motion.
Read NACTO’s full policy statement on automated vehicles (PDF).