The original version of this story appeared in Quanta Magazine.
Driverless cars and planes are no longer a thing of the future. In the city of San Francisco alone, two taxi companies logged a combined 8 million miles of autonomous driving through August 2023. And more than 850,000 autonomous aerial vehicles, or drones, are registered in the United States, not counting those owned by the military.
However, there are legitimate concerns about safety. For example, the National Highway Traffic Safety Administration reported that in the 10 months ending in May 2022, there were nearly 400 crashes involving cars using some form of autonomous control. These accidents left six people dead and five seriously injured.
The usual way to address such concerns is to test these systems until we are satisfied they are safe, an approach sometimes called "testing until exhaustion." But this process can never reveal all potential flaws. "People carry out tests until they've exhausted their resources and patience," said Sayan Mitra, a computer scientist at the University of Illinois, Urbana-Champaign. Testing alone, however, cannot provide guarantees.
Mitra and his colleagues can. His team has successfully proved the safety of lane-tracking capabilities for cars and landing systems for autonomous aircraft. Their strategy is now being used to help land drones on aircraft carriers, and Boeing plans to test it on an experimental aircraft this year. "Their method of providing end-to-end safety guarantees is extremely important," said Corina Passareanu, a research scientist at Carnegie Mellon University and NASA's Ames Research Center.
Their work involves guaranteeing the results of the machine learning algorithms used to inform autonomous vehicles. Broadly speaking, many autonomous vehicles have two components: a perception system and a control system. The perception system tells you, for instance, how far your car is from the center of its lane, or what direction a plane is heading and what its angle is relative to the horizon. The system works by feeding raw data from cameras and other sensors into machine learning algorithms, based on neural networks, that re-create the environment outside the vehicle.
These assessments are then sent to a separate system, the control module, which decides what to do: whether to brake or swerve if an obstacle is approaching, for example. According to Luca Carlone, an associate professor at the Massachusetts Institute of Technology, while the control module relies on well-established technology, "it is making decisions based on the perception results, and there's no guarantee that those results are correct."
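The two-module architecture described above can be sketched in a few lines of code. This is an illustrative toy, not the researchers' system: the function names, the obstacle-distance state, and the 10-meter braking threshold are all hypothetical.

```python
# Minimal sketch of a two-module autonomy stack: a perception system
# estimates the vehicle's state, and a control module acts on that
# estimate. All names and numbers here are illustrative.

def perceive(sensor_reading: float) -> float:
    """Stand-in for a neural-network perception system: estimate the
    distance (in meters) to an obstacle from raw sensor data."""
    return sensor_reading  # a real system would run a learned model here

def control(estimated_distance: float) -> str:
    """Control module: decide an action from the perception estimate.
    Note that it simply trusts the estimate -- nothing here guarantees
    the perception output is correct."""
    return "brake" if estimated_distance < 10.0 else "continue"

action = control(perceive(8.5))
print(action)  # an obstacle estimated at 8.5 m triggers braking
```

The point of the sketch is the trust boundary: the control logic is conventional and easy to verify, but its decision is only as good as the estimate it receives.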
To guarantee safety, Mitra's team worked to ensure the reliability of the vehicle's perception system. They first assumed that safety can be guaranteed when a perfect rendering of the outside world is available. They then determined how much error the perception system introduces into its re-creation of the vehicle's surroundings.
The key to this strategy is to quantify the uncertainties involved, known as error bands — or, as Mitra put it, the "known unknowns." That calculation comes from what he and his team call a perception contract. In software engineering, a contract is a promise that, for a given input to a computer program, the output will fall within a specified range. Figuring out this range isn't easy. How accurate are the car's sensors? How much fog, rain or solar glare can a drone tolerate? But if you can pin down the range accurately enough, Mitra's team proved, you can guarantee the vehicle's safety.
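The idea of a perception contract can be made concrete with a short sketch. The error band, the state variable (the car's offset from the lane center), and the lane-width numbers below are hypothetical values chosen for illustration, not figures from the research.

```python
# Sketch of a perception contract: the perception system promises that
# its estimate of the car's offset from the lane center lies within a
# known error band of the true value. A controller that keeps a margin
# wider than that band is then provably safe *if* the contract holds.
# All values are illustrative.

ERROR_BAND = 0.2       # meters: the contract's promised worst-case error
LANE_HALF_WIDTH = 1.8  # meters from lane center to lane edge

def contract_holds(true_offset: float, estimated_offset: float) -> bool:
    """The perception contract: the estimate stays within ERROR_BAND
    of the true offset."""
    return abs(true_offset - estimated_offset) <= ERROR_BAND

def safe_under_contract(estimated_offset: float) -> bool:
    """Conservative safety check: even in the worst case the contract
    allows, the true offset stays inside the lane."""
    return abs(estimated_offset) + ERROR_BAND <= LANE_HALF_WIDTH

# If the contract holds and the conservative check passes, the car is
# guaranteed to be inside its lane, whatever the exact perception error.
true_offset, estimate = 0.5, 0.6
assert contract_holds(true_offset, estimate)
assert safe_under_contract(estimate)        # 0.6 + 0.2 <= 1.8
assert abs(true_offset) <= LANE_HALF_WIDTH  # the guaranteed conclusion
```

The design choice to note is that safety is argued in two separable steps, mirroring the article: the controller is verified against the worst case the contract permits, and the perception system is separately shown to honor the contract.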