Where the AI Rubber Hits the NHTSA Road: Letter to GM Describes State of Autonomy

The U.S. National Highway Traffic Safety Administration (NHTSA) has released a letter, responding to a GM query about the use of warning lights in self-driving vehicles, that provides interesting details on the state of the GM autonomous vehicle program and highlights some of the challenges in this critical area of AI. Autonomous vehicles are almost literally “where the rubber hits the road” for recent developments in AI and Machine Learning, so these details point to the larger issues confronting all autonomous systems, and to the regulatory issues they expose.

Following are key points of the letter:

You state that GM is developing a new adaptive cruise control system with lane following (which GM has referred to as Super Cruise) that controls steering, braking, and acceleration in certain freeway environments. When Super Cruise is in use, the driver must always remain attentive to the road, supervise Super Cruise’s performance, and be ready to steer and brake at all times. In some situations, Super Cruise will alert the driver to resume steering, for example, when the system detects a limit or fault. If the driver is unable or unwilling to take control of the wheel (if, for example, the driver is incapacitated or unresponsive), Super Cruise may determine that the safest thing to do is to bring the vehicle slowly to a stop in or near the roadway, and the vehicle’s brakes will hold the vehicle until overridden by the driver.

You indicate that GM plans to develop Super Cruise so that, in this situation, once Super Cruise has brought the vehicle to a stop, the vehicle’s automated system will activate the vehicle’s hazard lights. You state that you believe that this automatic activation of the hazard lights complies with the requirements of FMVSS No. 108 for several reasons….

GM states that in the event that a human driver fails to respond to Super Cruise’s request that the human retake control of the vehicle, and Super Cruise consequently determines that the safest thing to do is to bring the vehicle slowly to a stop in or near the roadway, Super Cruise-equipped vehicles will activate the vehicle’s hazard lights automatically once the vehicle is stopped….

We note that GM indicates that when the driver is unable or unwilling to take control of the vehicle the system will bring the vehicle to a stop in or near the roadway. A vehicle system that stops a vehicle directly in a roadway might, depending on the circumstances, be considered to contain a safety-related defect; i.e., it may present an unreasonable risk of an accident occurring or of death and injury in an accident. Federal law requires the recall of a vehicle that contains a safety-related defect. We urge GM to fully consider the likely operation of the system it is contemplating and ensure that it will not present such a risk.
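To make the sequence described in the letter easier to picture, here is a minimal sketch of that fallback behavior as a simple state machine. The state names, inputs, and transitions below are assumptions made for illustration only; they are not GM's actual Super Cruise implementation.

```python
from enum import Enum, auto


class FallbackState(Enum):
    """Hypothetical states for the hand-back / safe-stop sequence described in the letter."""
    AUTOMATED = auto()       # Super Cruise steering, braking, and accelerating
    DRIVER_ALERTED = auto()  # a limit or fault was detected; the driver is asked to resume steering
    SAFE_STOP = auto()       # driver unable or unwilling to take over; slow to a stop in or near the roadway
    STOPPED = auto()         # brakes hold the vehicle and hazard lights are active until the driver overrides


def next_state(state: FallbackState, limit_or_fault: bool,
               driver_responded: bool, speed_mph: float) -> FallbackState:
    """Return the next state of this hypothetical fallback sequence."""
    if state is FallbackState.AUTOMATED and limit_or_fault:
        return FallbackState.DRIVER_ALERTED
    if state is FallbackState.DRIVER_ALERTED:
        if driver_responded:
            return FallbackState.AUTOMATED   # or full manual control, depending on the design
        return FallbackState.SAFE_STOP       # a real design would gate this on an escalation timeout
    if state is FallbackState.SAFE_STOP and speed_mph <= 0.0:
        return FallbackState.STOPPED         # per the letter: activate hazard lights, hold the brakes
    return state
```

Even in this toy form, the point NHTSA raises is visible: the STOPPED state leaves the vehicle in or near a live roadway, which is exactly the condition the agency warns could be treated as a safety-related defect.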

The letter addresses concerns that autonomous vehicles are not yet ready for unsupervised operation, concerns underscored by recent incidents such as the June Tesla Autopilot crash.

While the description of GM’s Super Cruise system is illuminating, the letter also draws attention to the innumerable points that need to be considered as autonomous systems become a part of everyday reality.

In an April 2015 letter to the California Department of Motor Vehicles, NHTSA described its 24-month research program into 10 areas of autonomous vehicle operation:

Current Research Questions
1. How can we retain the driver’s attention on the driving task for highly automated systems that are only partially self-driving and thus require a driver to cycle in and out of an automated driving mode during a driving trip?
2. For highly automated systems that envision allowing the driver to detach from the driving task, how can the driver safely resume control with a reasonable lead time?
3. What types of driver misuse/abuse can occur?
4. What are the incremental driver training needs for each level of automation?
5. What functionally safe design strategies can be implemented for automated vehicle functions?
6. What level of cybersecurity is appropriate for automated vehicle functions?
7. What is the performance of Artificial Intelligence (AI) in different driving scenarios, particularly those situations where the vehicle would have to make crash avoidance decisions with imperfect information? (A toy illustration of this question follows the list.)
8. Are there appropriate minimum system performance requirements for automated vehicle systems?
9. What objective tests or other certification procedures are appropriate?
10. What are the potential incremental safety benefits for automated vehicle functions/concepts?
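
Question 7 touches on one of the hardest problems on the list: making crash-avoidance decisions with imperfect information. As a toy illustration, a vehicle might weigh candidate maneuvers by their expected cost given an uncertain obstacle detection; the probabilities and costs below are invented for this sketch and are not drawn from any real system.

```python
def expected_cost(p_obstacle: float, cost_if_obstacle: float, cost_if_clear: float) -> float:
    """Expected cost of a maneuver given the estimated probability that an obstacle is present."""
    return p_obstacle * cost_if_obstacle + (1.0 - p_obstacle) * cost_if_clear


def choose_maneuver(p_obstacle: float) -> str:
    """Pick the maneuver with the lowest expected cost under imperfect sensing (illustrative only)."""
    maneuvers = {
        # maneuver: (cost if an obstacle really is ahead, cost if the road is actually clear)
        "continue": (100.0, 0.0),   # severe if the detection was right, free otherwise
        "brake_hard": (20.0, 5.0),  # mitigates a real obstacle, but risks a rear-end collision
        "swerve": (30.0, 15.0),     # avoids the obstacle, but adds risk in the adjacent lane
    }
    return min(maneuvers, key=lambda m: expected_cost(p_obstacle, *maneuvers[m]))


if __name__ == "__main__":
    for p in (0.05, 0.3, 0.8):
        print(f"P(obstacle) = {p:.2f} -> {choose_maneuver(p)}")
```

Real systems reason over far richer state estimates and constraints, but the core tension, acting in real time on uncertain perception, is precisely what this research question asks regulators and developers to quantify.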

These research points provide useful guidance for companies developing or employing autonomous systems, as well as pointing to areas in which regulation is likely to occur.

As society fits AI-driven autonomy into both consumer and work environments, a broad-spectrum analysis of its impact is becoming increasingly urgent. Each case requires different treatment, but moving ahead without understanding the full range of interactions across operation, society, and security is likely to be perilous indeed.

A detailed report on NHTSA policy on autonomous vehicles can be found here: Federal Automated Vehicles Policy: Accelerating the Next Revolution In Roadway Safety.

