Saturation: Waymo edition

If you’ve been to San Francisco recently, you’ve almost certainly noticed the Waymo robotaxis: driverless cars that you can hail with an app, the way you can with Uber. This past Saturday, San Francisco experienced a pretty significant power outage. One unexpected consequence of that outage was that the Waymo robotaxis got stuck.

Today, Waymo put up a blog post about what happened, titled “Autonomously navigating the real world: lessons from the PG&E outage”. Waymos are supposed to treat intersections with dark traffic lights as four-way stops, the same way that humans do. So, what happened here? From the post (emphasis added):

While the Waymo Driver is designed to handle dark traffic signals as four-way stops, it may occasionally request a confirmation check to ensure it makes the safest choice. While we successfully traversed more than 7,000 dark signals on Saturday, the outage created a concentrated spike in these requests. This created a backlog that, in some cases, led to response delays contributing to congestion on already-overwhelmed streets.

The post doesn’t go into detail about what a confirmation check is. My interpretation, based on the context, is that it’s a put-a-human-in-the-loop thing, where a remote human teleoperator checks whether it’s safe to proceed. It sounds like the workload on the human operators was simply too high for them to process all of these confirmation checks in a timely manner. You can’t just ask your cloud provider for more human operators the way you can request more compute resources.

The failure mode that Waymo encountered is a classic example of saturation, which is a topic I’ve written about multiple times in this blog. Saturation happens when the system is not able to keep up with the load that is placed upon it. Because all systems have finite resources, saturation is an ever-present risk. And because saturation only happens under elevated load, it’s easy to miss this risk. There are many different things in your system that can run out of resources, and it can be hard to imagine the scenarios that can lead to exhaustion for each of them.
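To make the dynamic concrete, here’s a minimal sketch of saturation in a human-in-the-loop queue. This is not Waymo’s actual system; the operator counts, arrival rates, and service rates are made-up numbers chosen only to illustrate the shape of the problem.

```python
# A toy discrete-time queue: a fixed pool of operators works off
# confirmation requests. All numbers here are illustrative assumptions.

def simulate(arrival_rate, operators, service_rate_per_operator, minutes):
    """Each minute, new requests arrive and the operator pool
    handles as many as its fixed capacity allows."""
    capacity_per_minute = operators * service_rate_per_operator
    backlog = 0.0
    for minute in range(1, minutes + 1):
        backlog += arrival_rate                        # new requests arrive
        backlog -= min(backlog, capacity_per_minute)   # requests handled
        if minute % 10 == 0:
            wait = backlog / capacity_per_minute       # rough time to drain
            print(f"minute {minute:3d}: backlog={backlog:6.0f}, "
                  f"~{wait:4.1f} min to drain")

# Normal day: arrivals stay below capacity, so the backlog stays at zero.
print("normal load:")
simulate(arrival_rate=40, operators=10, service_rate_per_operator=5, minutes=30)

# Outage: a concentrated spike pushes arrivals above capacity, so the
# backlog (and therefore response delay) grows until the load drops.
print("\noutage spike:")
simulate(arrival_rate=80, operators=10, service_rate_per_operator=5, minutes=30)
```

The point of the sketch is that the system looks perfectly healthy right up until arrivals cross the capacity line, and then delays grow without bound. That cliff-edge behavior is exactly why saturation is easy to miss in normal operation.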

Here’s another quote from the post. Once again, emphasis mine.

We established these confirmation protocols out of an abundance of caution during our early deployment, and we are now refining them to match our current scale. While this strategy was effective during smaller outages, we are now implementing fleet-wide updates that provide the Driver with specific power outage context, allowing it to navigate more decisively.

This confirmation-check behavior was explicitly implemented in order to increase safety! It’s yet another reminder of how work intended to increase safety can lead to novel, unanticipated failure modes. Strange things are going to happen, especially at scale.
