Are Driver-Assisted Vehicles Safe?
Would you ride across town in a self-driving car? How about a car that partly controls your speed and steering?
Every year, AAA surveys Americans about topics like these, and every year we sound pretty worried.
For example, 86% of us say we'd feel uncomfortable riding in a self-driving vehicle, and 77% would prefer improvements to current driver-assistance technologies over the development of fully autonomous vehicles.
Should we be this worried?
Well, we should be attentive at least, because driver-assistance technology is rapidly expanding. Even if you're not using the technology, other drivers are – or will be soon. Some experts project that by 2025, over 60% of new cars will have Level 2 automation (below I explain what "Level 2" means). Meanwhile, the U.S. experiences – as it has for over a decade – more than 5 million motor vehicle accidents per year, including more than 32,000 fatalities. Driver-assistance systems are expected to lower those numbers dramatically.
The state of the data
I'd like to tell you that we know a lot about the safety of driver assistance systems (DAS), but, sadly, we don't. The real-world performance data are miserable. I think it's impossible at this point to say how worried we should be about the new and emerging technologies.
For example, this week the U.S. Department of Transportation released two reports on DAS safety. In May, AAA published a similarly focused report. In theory, these reports should be highly informative, as both organizations tend to be careful about their methods and statistics. Unfortunately, these reports (especially the two from the DoT) function best as guides for how to collect useless safety data.
At the end of this newsletter I'll summarize what we know for sure about DAS safety in real-world conditions. It won't be a very long summary. Although "driverless" cars are coming soon, the safety stats justifying their use will probably be less conclusive than we'd like.
Key terminology
There's a lot of confusion around terms used for DAS. For example, in a 2018 survey, 40% of respondents mistakenly claimed that features with names like "Autopilot" or "Pilot Assist" can operate the vehicle by themselves.
The universal system for classifying DAS was developed by SAE International (formerly the Society of Automotive Engineers). SAE identifies six levels of autonomy in DAS, numbered 0 through 5:
—Level 0 means no automation. The driver does everything, although the DAS may provide alerts, as well as temporary actions such as automatic emergency braking.
—Level 1 means that the DAS provides some assistance with driving. The most common examples are adaptive cruise control (which maintains a safe distance between you and vehicles in front of you) or steering assistance (e.g., lane-centering).
—Level 2 systems, or "advanced driver assistance systems", provide partial automation. The vehicle can independently steer, accelerate, and brake, although a human driver is expected to monitor the technology and the environment at all times and take control whenever needed. Examples of Level 2 DAS include Highway Driving Assist (Genesis, Hyundai, Kia), EyeSight (Subaru), Autopilot (Tesla), Pilot Assist (Volvo), Super Cruise (Cadillac), Driver Assistance (Mercedes-Benz), and Traffic Jam Assist (Audi).
—Levels 3 through 5 are "automated driving systems", meaning that the vehicle can actually drive itself. A Level 3 DAS will sometimes require the driver to take control. Levels 4 and 5 require no driver, the difference being that Level 5 vehicles have no restrictions on where and when they can operate.
All vehicles currently sold in the U.S. are Levels 0, 1, or 2. (A small number of Level 3 cars, with drivers, are currently allowed on U.S. roads for testing purposes; they're also available in Japan and Germany, and will be soon in other European countries.) Some experts predict that Level 3 cars will be available to the American public as early as 2025. As for Levels 4 and 5, more than 80 companies are currently working toward the development of these vehicles. So… buckle up. The future is rapidly approaching.
Safety data: NHTSA reports
This week the Department of Transportation's National Highway Traffic Safety Administration (NHTSA) released two safety reports, one for "advanced" DAS (Level 2), and the other for "automated" DAS (Levels 3-5). Each report contains summaries of crash data available from July 2021 through May 2022.
NHTSA noted that prior to these reports, crash data for Level 2 and higher vehicles were "limited and generally inconsistent." However, these new reports show that the quality of the data hasn't improved.
For example, for Level 2 vehicles, 392 crashes were recorded during this 10-month period. Nothing reliable can be said about the severity of the crashes, because the extent of injuries was unknown for fully 75% of them. Nor can anything be said about what caused the crashes – no data on causes were collected at all. The same goes for the 130 Level 3-5 crashes. These reports tell us almost nothing other than the number of crashes per month, per state, and per reporting entity. I found this very disappointing.
To the NHTSA's credit, they did an excellent job of describing the limitations of the data they received. For example:
1. Not all data from each crash was complete and verified.
This is critical. For example, vehicle make, model, and year were sometimes used to infer Level 2 features, but that's unreliable, because some drivers may not have activated the features. Data may be available from the DAS devices themselves, but reporting entities would need to obtain and report those data.
2. Some of the crashes were reported more than once.
The NHTSA gathered data through a Standing General Order that legally requires auto manufacturers and reporting agencies to provide crash data. With so many different players, some redundancy was inevitable.
"Big data" approaches are often characterized by the 3 V's: Volume (lots of data), velocity (quick access to data), and variety (different kinds and sources of data). The third V is the biggest problem for NHTSA's data, because reliance on multiple sources resulted in spotty, redundant reportage. And, there's the problem of greatest concern to you, the driver/passenger/pedestrian:
3. The data are not normalized. As NHTSA puts it, "reporting entities are not required to submit information regarding the number of vehicles they have manufactured, the number of vehicles they are operating, or the distances traveled by those vehicles." Thus, we have no context for interpreting the number of crashes. Is 392 Level 2 crashes over a 10-month period a lot? We have no way of knowing. What we would need to know are the numbers of Levels 0, 1, and 2 vehicles on the road, how many miles each have traveled, how many crashes each have experienced, and whether Level 2 technologies were being used properly at the time of each crash.
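To see why normalization matters, here's a minimal sketch (with entirely hypothetical fleet figures, since the real ones weren't reported to NHTSA) of how a raw crash count becomes an interpretable rate:

```python
# Hypothetical illustration of normalization. The fleet sizes and mileage
# below are invented for the example; NHTSA received no such figures,
# which is exactly the problem.

def crashes_per_million_miles(crashes, vehicles, avg_miles_per_vehicle):
    """Normalize a raw crash count by total vehicle-miles traveled."""
    total_miles = vehicles * avg_miles_per_vehicle
    return crashes / (total_miles / 1_000_000)

# 392 crashes from a (hypothetical) fleet of 800,000 Level 2 vehicles
# averaging 9,000 miles each over the reporting period:
big_fleet = crashes_per_million_miles(392, 800_000, 9_000)

# The same 392 crashes from a fleet one-tenth the size would represent
# a rate ten times higher:
small_fleet = crashes_per_million_miles(392, 80_000, 9_000)

print(f"{big_fleet:.3f} vs. {small_fleet:.3f} crashes per million miles")
```

Without the denominator – vehicles on the road and miles they traveled – the numerator of 392 is uninterpretable, which is exactly NHTSA's point.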
So, NHTSA has "big data", but it's much too superficial to be of use, except as grounds for the agency to insist on more detail from reporting agencies.
Safety data: AAA report
Last month, AAA released a safety report on what it calls "active" DAS, or Level 2 systems. Their methodology was almost the opposite of the NHTSA's big data approach: AAA simply took three vehicles and ran them through a carefully constructed series of tests.
The vehicles were a 2021 Hyundai Santa Fe (with "Highway Driving Assist"), a 2021 Subaru Forester (with "EyeSight"), and a 2020 Tesla Model 3 (with "Autopilot").
Testing took place on 0.7 miles of a street that was closed to the public. The focus was on how each vehicle performed when it was on the verge of a collision. Four potential collision conditions were created:
Condition 1: A slow-moving car ahead of the test vehicle (in the same lane, heading the same direction).
Condition 2: Same as above, only the slow-moving vehicle was a bicycle.
Condition 3: An oncoming car ahead of the test vehicle (in the same lane, making a head-on collision imminent).
Condition 4: A bicycle crossing the path of the test vehicle.
The car and cyclist were sophisticated, externally-controlled robots. (In the Appendix, I include further details on each collision condition.)
Each vehicle was tested in each condition 5 times. For Conditions 1 and 2, all three vehicles performed perfectly across the 5 trials: they began to brake at a suitable time, they decelerated comfortably to match the speed of the vehicle ahead of them, and they maintained a safe distance. (The Subaru detected the vehicle latest and thus braked more aggressively, but AAA considered the extent of "jerking" to be minimal.)
For Condition 3, the Hyundai and the Subaru failed to detect the oncoming car in all 5 trials. Thus, they failed to brake and collided with the car all 5 times. The Tesla did detect the car and brake, but it still collided every time. In short, none of these DAS could prevent a head-on collision.
For Condition 4, the Hyundai and the Tesla detected the bicycle, braked, and avoided impact in all 5 trials. The Subaru consistently failed to detect the bicycle or brake for it, and thus collided with it on every trial.
The AAA data are useful, although, as with most "small data", our ability to generalize is limited. We can't be sure whether the same results would hold at different absolute and relative speeds, with different vehicles (trucks, motorcycles, etc.), with different trajectories of the vehicles relative to each other, under different visibility conditions, and so on.
Although it's impossible to study all the relevant variables, that's not always a problem. A DAS that performs well at speeds of 20 and 40 mph would almost surely perform equally well at intervening speeds. A DAS that performs exactly the same way under the same conditions 5 times in a row would almost surely continue to do so. In these two examples, interpolation and extrapolation, respectively, allow us to draw conclusions beyond the existing data.
In other cases, more caution is needed. In the AAA study, all three vehicles responded safely to a slower-moving car in front of them, while failing in every case to avoid a collision with an oncoming car. Here, interpolation won't work. We don't know what would happen if the slower-moving car had been moving even more slowly, or if it had stalled. Extrapolation fails too – we don't know what would happen if the oncoming car weren't directly in front of the DAS vehicle, but rather approaching it at, say, a 15-degree angle.
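The distinction can be made concrete with a toy sketch. The two stopping-distance measurements below are invented for illustration; the point is that interpolating between tested speeds is defensible, while estimates outside the tested range should simply be refused:

```python
# Toy illustration of interpolation vs. extrapolation.
# The two "measured" stopping distances are hypothetical.

TESTED = {20: 40.0, 40: 120.0}  # speed (mph) -> measured stopping distance (ft)

def estimate_stopping_distance(speed_mph):
    """Linearly interpolate between tested speeds; refuse to extrapolate."""
    lo, hi = min(TESTED), max(TESTED)
    if not lo <= speed_mph <= hi:
        # Extrapolation: the tests tell us nothing about performance out here.
        raise ValueError(f"{speed_mph} mph lies outside the tested range {lo}-{hi} mph")
    frac = (speed_mph - lo) / (hi - lo)
    return TESTED[lo] + frac * (TESTED[hi] - TESTED[lo])

print(estimate_stopping_distance(30))  # -> 80.0, a defensible estimate
```

Even this is a simplification – real braking distance grows roughly with the square of speed – but it captures why conclusions between tested points are safer than conclusions beyond them.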
In spite of these limitations, AAA drew some useful conclusions from their findings:
1. Constant driver supervision is necessary for current Level 2 systems.
2. Current Level 2 systems perform most poorly under unusual conditions (i.e., Conditions 3 and 4).
3. Camera-based driver monitoring systems are needed to encourage driver engagement.
These conclusions can help promote driver safety, although they only provide minimal information regarding our original question about the safety of DAS.
Is there any credible real-world data on DAS safety?
Not really. For example, insurance companies are fond of noting that driverless cars have an accident rate of 9.1 per million miles driven, whereas the rate for cars with human drivers is 4.1 accidents per million miles. However, this doesn't tell us much, because these data are from 2015, which is ancient history given how rapidly DAS are evolving. Also, the data indicated that accidents involving self-driving vehicles were less severe, and that DAS technology itself was never to blame.
DAS safety is often judged in the court of public opinion – e.g., by the media after a crash. This anecdotal evidence helps us fix problems with DAS without telling us how safe they are. For example, in 2016, the Autopilot system on a Tesla Model S failed to discern a white 18-wheel truck and trailer crossing the highway against a bright sky. The Tesla attempted, unsuccessfully, to drive through the trailer, and the driver was killed. The media made much of this tragic story, but, in itself, it doesn't tell us how safe, generally speaking, Tesla's Autopilot function might be. What it told us is that Autopilot needed better visual sensors. (An NHTSA investigation also revealed that the driver was relying more heavily on Autopilot than intended.)
Summary: What have we learned?
Statistically speaking, we don't have good data on DAS safety in real-world conditions. This is because DAS technology is diverse and rapidly changing, and because the causes of accidents are often complicated. Here's all we can say with confidence so far:
1. Current DAS technology, without driver support, is deeply fallible.
We know this from the AAA report, from other reports, and from case studies (many of which feature Tesla vehicles). The technology makes mistakes. Level 2 DAS does not make a car self-driving.
What we don't know, however, is whether DAS technologies make vehicles safer on the whole. Yes, Autopilot may cause your vehicle to plow into a cyclist, but drivers do that anyway on occasion because they're inattentive, sleepy, and/or intoxicated. Level 2 features might prevent that. It's unclear how accident rates would change if, beginning tomorrow, all commercial vehicles had Level 2 features that were actively used.
2. Current driver support for DAS technology is insufficient.
Experts often note that drivers become complacent or inattentive when DAS technology is in use. In some ways this may be an intractable problem – after all, it seems to defeat the purpose of a DAS if you have to pay just as much attention as you did before it existed. At the same time, part of the problem is that some drivers genuinely overestimate what the technology can do.
This problem is exacerbated by the way the products are marketed. Tesla, for instance, uses misleading names for its Level 2 DAS (e.g., "Autopilot") and makes misleading references to its "self-driving" systems (at present, all commercially available Tesla DAS are Level 2 and thus require human monitoring at all times). A 2016 blog post by Tesla, still available on its website, proclaims that all of its cars are now being produced with "fully self-driving hardware", but fails to clearly explain that not all of these features are enabled, since fully self-driving vehicles could not be legally sold then (and still cannot be, except for testing purposes).
As of this writing, the NHTSA is investigating several accidents involving Teslas that were using Autopilot. Meanwhile, as I mentioned earlier, 40% of American adults think that "Autopilot" and similarly-named DAS imply self-driving capability. In short, we need clearer marketing of Level 2 features, better instructions from manufacturers, and more engagement on the part of drivers who use them.
3. We don't know what we don't know.
As I've said, there's no good data on the relative safety of DAS under real-world conditions.
On the one hand, DAS technology could outperform unassisted drivers. This is the hoped-for outcome. DAS don't get distracted, sleepy, or intoxicated. They don't take risks, they can take in more comprehensive information about road conditions, and they can be faster at processing that information.
On the other hand, DAS guidance could turn out to be more hazardous than unassisted driving owing to technological fallibility and, especially for Levels 2 through 4, driver inattention or error.
In my opinion, the future holds both outcomes. Given the current rate of progress in DAS technologies, the incidence of motor vehicle accidents in the U.S. should be in decline by the end of this decade. As Level 4 and 5 vehicles become available, elderly, ill, and otherwise impaired drivers will increasingly benefit from transportation options they don't currently have. And, DAS at Level 3 and above will operate vehicles so efficiently that traffic congestion may be lessened. At the same time, I suspect that a Tesla will still occasionally drive into a truck. This is tragic for the individuals involved, and will cause us to ask – as we ask now – whether the overall benefits of automated vehicles justify the occasional mishap. Hopefully, by then, public debate over the topic can be informed by better statistics.
Appendix: AAA May 2022 testing conditions: Further details
A closer look at the AAA testing conditions suggests that the DAS failed under conditions where most unimpaired human drivers could have successfully avoided a collision.
Condition 1: A slow-moving car ahead of the test vehicle (in the same lane, heading the same direction).
The vehicle in front accelerated to 20 mph and traveled straight ahead at this speed for at least 10 seconds at a minimum of 1700 feet ahead of the test vehicle. The test vehicle then accelerated to 55 mph, at which time the DAS was engaged.
Condition 2: Same as Condition 1, only the slow-moving vehicle was a bicycle.
The bicycle moved at a constant speed of 15 mph along a straight line 1 foot to the left of the white lane marker on the right side of the travel lane. The test vehicle was accelerated to 45 mph within the lane, at which time the DAS was engaged. At the time of engagement, the test vehicle and cyclist were a minimum of 500 feet apart.
Condition 3: An oncoming car ahead of the test vehicle (in the same lane, meaning that a head-on collision was imminent).
The oncoming vehicle was traveling at a constant speed of 15 mph, oriented so that its lateral centerline was directly over the dashed white line separating the two travel lanes. The test vehicle, moving in a straight line in the center of one lane, was accelerated to 25 mph, at which time the DAS was engaged. At the time of engagement, a minimum of 1000 feet separated the vehicles.
Condition 4: A bicycle crossing the path of the test vehicle.
The center of the bicycle was placed 100 feet to the right of the centerline of the travel lane. The test vehicle was accelerated to 25 mph within the lane, at which time the DAS was engaged. At the time of engagement, a minimum of 700 feet separated the test vehicle and the bicycle. The bicycle accelerated to 9.4 mph within 1.5 seconds once the test vehicle was within 290 feet of it.
As I noted earlier, Conditions 3 and 4 are those in which at least one of the vehicles collided on all 5 trials. And yet, if you consider the details of these conditions, one would expect an attentive, unimpaired human to be able to swerve or stop in time to avoid a collision in most situations.
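That expectation can be checked with back-of-the-envelope kinematics. The sketch below uses commonly assumed (not AAA-reported) values for driver reaction time and braking deceleration to estimate the total stopping distance for Condition 4:

```python
# Rough stopping-distance estimate for Condition 4: a car at 25 mph,
# with the bicycle beginning to cross when the car is 290 ft away.
# Reaction time and deceleration are typical assumed values, not
# figures from the AAA report.

MPH_TO_FTPS = 5280 / 3600   # 1 mph ~ 1.467 ft/s
REACTION_TIME_S = 1.5       # commonly assumed driver perception-reaction time
DECEL_FTPS2 = 15.0          # moderate braking, well below dry-pavement limits

def stopping_distance_ft(speed_mph):
    """Reaction distance plus braking distance (v^2 / 2a)."""
    v = speed_mph * MPH_TO_FTPS
    return v * REACTION_TIME_S + v ** 2 / (2 * DECEL_FTPS2)

print(f"{stopping_distance_ft(25):.0f} ft needed vs. 290 ft available")
```

Under these assumptions, the car needs only about 100 feet to stop – roughly a third of the distance available – which is why an attentive human driver would be expected to avoid the bicycle comfortably.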