Federal Policy on Self-Driving Cars Focuses on Safety Issues
September 22, 2016
The newly released Federal Automated Vehicles Policy reveals that the Obama administration is largely taking a hands-off approach to the technologies used to create autonomous vehicles, instead zeroing in on safety. In fact, the majority of the 116-page policy addresses safety issues, with the goal of preventing accidents such as the recent fatal crash of a Tesla vehicle on autopilot. The Self-Driving Coalition for Safer Streets — including Alphabet, Ford Motor Company, Uber, Lyft and Volvo — gave the policy a thumbs-up.
The New York Times notes that auto manufacturers have 60 days to register comments with the Transportation Department on the 15-point “safety assessment” policy. Whether it becomes formal policy remains unclear, especially since a new administration will be in the White House when the comment period ends.
National Highway Traffic Safety Administration (NHTSA) head Mark Rosekind said the policy is meant to “create a path for a fully autonomous driver with different designs than what we have on the road today.” Audi of America director of government affairs Brad Stertz approves of the open-ended language, noting, “you can’t get locked into one technology or approach, and it doesn’t seem like they are doing that here.”
According to The Wall Street Journal, “U.S. officials are hoping to spur companies to share data on crashes, detail their latest systems to regulators and take steps to ensure technologies are traffic-ready.”
The policy recommendations also divvy up regulatory responsibility between the federal and state governments. “The U.S. guidelines suggest states retain prominence over driver’s licenses, car registrations, traffic laws, insurance and legal liabilities,” but say that safety standards should be left to federal officials.
The New York Times outlines the 15-point safety checklist, which among other issues addresses data sharing, privacy, system safety, digital security (protection against hacking), and a human-machine interface able to “safely switch between autopilot and human control.” Crashworthiness means driverless cars must “meet the National Highway Traffic Safety Administration’s regular standards,” while consumer education dictates communicating the safety issues and limitations of autopilot and other autonomous features, as well as certification of any software updates or new driverless features by the NHTSA.
The policy also calls for “post-crash behavior” demonstrating that cars are safe to use again after a crash. With regard to laws and practices, the vehicle “should follow various state and local laws and practices that apply to drivers.”
Ethical considerations cover “the way a car is programmed,” which must be “clearly disclosed to the NHTSA.” Manufacturers must prove that “their vehicles have been tested and validated to fulfill” how they are described, and detection and response refers to how the car reacts to other cars, pedestrians and animals, as well as to “big surprises and crashes.”
Fallback covers how the car switches from automated driving to human control, taking into account “the condition of the driver” and whether the driver is “under the influence of alcohol or drowsy and unable to take control safely.” Finally, automakers must “develop testing and validation methods,” including simulation, test track and on-road testing, for “the wide range of technologies used in driverless cars.”