Autonomous Vehicles and Machine Learning: A Failed Marriage
Back in August, the autonomous vehicle (AV) companies Cruise and Waymo were granted permits to operate their driverless taxi services in the city of San Francisco. Though Waymo has been operating relatively free of conflict, Cruise’s operations have been plagued by a string of “minor” incidents. Their offenses range from causing gridlock to colliding with emergency vehicles to driving into wet concrete and getting stuck.
Last week, Cruise had its driverless permits suspended following its involvement in a hit-and-run incident in which a human-operated vehicle knocked a pedestrian into the path of one of Cruise’s driverless taxis. The AV braked aggressively but could not avoid striking the pedestrian. That, however, wasn’t the main issue: a human driver likely would not have responded any faster. The problem was that, after detecting it had been involved in a collision, the AV elected to pull over, neglecting to account for the fact that the pedestrian was now pinned beneath the vehicle. The AV proceeded to drag the pedestrian roughly 20 ft. This is assuredly an error that a human driver would not have made.
In an announcement on their Twitter account, Cruise stated, “Our teams are currently doing an analysis to identify potential enhancements to the AV’s response to this kind of extremely rare event”. Given the rarity of the scenario and the AV’s reliance on machine learning techniques, it makes complete sense that the AV could not respond appropriately. Machine learning approaches rely on training data sets built from past examples, and rare cases will, by definition, be underrepresented in or entirely absent from those data sets. Human developers can manually account for some uncommon events, but it is simply not possible to anticipate every event that could occur. An event can take place on the road that has never happened in all of human history; the machine will not have it in its training set, nor will the responsible developers always have had the foresight to account for it ahead of time.
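To make the training-data problem concrete, here is a toy sketch (in no way Cruise’s actual system; the scenario features and action labels are invented for illustration). A nearest-neighbor “policy” trained only on scenarios it has seen must map any unseen scenario onto the closest familiar one, which can call for exactly the wrong response:

```python
# Toy illustration: a 1-nearest-neighbor "policy" that can only ever
# return actions it has seen during training.

# Hypothetical scenario features: (obstacle_ahead, obstacle_under_vehicle)
training_data = [
    ((1, 0), "brake_and_pull_over"),  # common: obstacle detected ahead
    ((0, 0), "continue"),             # common: clear road
]

def nearest_neighbor_policy(scenario):
    """Return the action of the closest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, action = min(training_data, key=lambda ex: distance(ex[0], scenario))
    return action

# A never-seen scenario: a pedestrian beneath the vehicle after a collision.
rare_event = (1, 1)
print(nearest_neighbor_policy(rare_event))  # -> brake_and_pull_over
# The policy defaults to the nearest familiar case. The correct response
# (staying put) was never in the training data, so it can never be produced.
```

Real AV stacks are vastly more sophisticated than this, but the structural limitation is the same: a learned mapping cannot output a response that nothing in its training distribution supports.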
Simply put, the tool we are using is not suited to the task. Machine learning techniques can only succeed insofar as they operate in an environment with a finite possibility space that training data sets can represent. Though driving is for the most part a predictable task, it takes place in a dynamic, multi-actor environment, and there always exists the possibility that a never-before-seen event will arise. Machine learning is about the worst way you could go about control in a dynamic environment. Consider how a bird avoids mid-air collisions: if the bearing to another object remains constant while the distance between them closes, the two are on a collision course. To make this judgment, the bird uses its dynamic memory in a dynamic environment. But what we've done in the case of AVs is build things with no dynamic memory. Clever.
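The constant-bearing rule the bird example appeals to is simple enough to state in a few lines. Here is a minimal sketch for two objects moving at constant velocity (the function names and tolerance are ours, chosen for illustration):

```python
import math

def bearing_and_range(p_self, p_other):
    """Bearing (radians) and distance from p_self to p_other."""
    dx, dy = p_other[0] - p_self[0], p_other[1] - p_self[1]
    return math.atan2(dy, dx), math.hypot(dx, dy)

def on_collision_course(p_self, v_self, p_other, v_other, dt=1.0, tol=1e-9):
    """Constant bearing plus closing range over one time step
    indicates a collision course (constant-velocity assumption)."""
    b0, r0 = bearing_and_range(p_self, p_other)
    p_self2 = (p_self[0] + v_self[0] * dt, p_self[1] + v_self[1] * dt)
    p_other2 = (p_other[0] + v_other[0] * dt, p_other[1] + v_other[1] * dt)
    b1, r1 = bearing_and_range(p_self2, p_other2)
    return abs(b1 - b0) < tol and r1 < r0

# Both objects reach (10, 0) at t = 10: bearing stays constant, range closes.
print(on_collision_course((0, 0), (1, 0), (10, 10), (0, -1)))  # -> True
# Same geometry, but the other object moves away: bearing drifts.
print(on_collision_course((0, 0), (1, 0), (10, 10), (0, 1)))   # -> False
```

The point of the example is not that this calculation is hard, but that it is a continuous judgment made over remembered state, exactly the kind of dynamic, on-the-fly reasoning a purely pattern-matching system does not perform.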
So are self-driving cars a dead end? Not necessarily, but as long as the bulk of their operations rely upon machine learning techniques, mishaps will occur. The only solution to the driverless car problem (aside from having a human driver present to take over, which to some extent defeats the point) is to develop an AI that can think on its feet and reason about events without requiring prior experience of them. It will need to be built atop a foundation of true comprehension paired with the ability to reason about the consequences of its possible actions. An entirely new approach to AI is needed to make this a reality, and that undertaking is no small feat.