Amdahl, Gustafson, coding agents, and you

In the software operations world, if your service is successful, then eventually the load on it is going to increase to the point where you’ll need to give that service more resources. There are two strategies for increasing resources: scale up and scale out.

Scaling up means running the service on a beefier system. This works well, but you can only scale up so much before you run into limits of how large a machine you have access to. AWS has many different instance types, but there will come a time when even the largest instance type isn’t big enough for your needs.

The alternative is scaling out: instead of running your service on a bigger machine, you run your service on more machines, distributing the load across those machines. Scaling out is very effective if you are operating a stateless, shared-nothing microservice: any machine can service any request. It doesn’t work as well for services where the different machines need to access shared state, like a distributed database. A database is harder to scale out because the machines need to share state, which means they need to coordinate with each other.

Once you have to do coordination, you no longer get a linear improvement in capacity based on the number of machines: doubling the number of machines doesn’t mean you can handle double the load. This comes up in scientific computing applications, where you want to run a large computing simulation, like a climate model, on a large-scale parallel computer. You can run independent simulations very easily in parallel, but if you want to run an individual simulation more quickly, you need to break up the problem in order to distribute the work across different processors. Imagine modeling the atmosphere as a huge grid, and dividing up that grid and having different processors work on simulating different parts of the grid. You need to exchange information between processors at the grid boundaries, which introduces the need for coordination. Incidentally, this is why supercomputers have custom networking architectures, in order to try to reduce these expensive coordination costs.

In the 1960s, the American computer architect Gene Amdahl made the observation that the theoretical performance improvement you can get from a parallel computer is limited by the fraction of work that cannot be parallelized. Imagine you have a workload where 99% of the work is amenable to parallelization, but 1% of it can’t be parallelized.

Let’s say that running this workload on a single machine takes 100 hours. Now, if you ran this on an infinitely large supercomputer, the parallelizable portion would go from 99 hours down to essentially zero. But you are still left with the 1 hour of work that you can’t parallelize, which means that you are limited to a 100X speedup no matter how large your supercomputer is. The upper limit on speedup based on the fraction of the workload that is parallelizable is known today as Amdahl’s Law.
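If you want to play with the arithmetic yourself, here’s a back-of-the-envelope sketch in Python; the function and the 99%/1% split are just the hypothetical workload from above, not anything from Amdahl’s original paper:

  def amdahl_speedup(parallel_fraction, n_processors):
      # Theoretical speedup per Amdahl's Law: the serial fraction never shrinks.
      serial_fraction = 1.0 - parallel_fraction
      return 1.0 / (serial_fraction + parallel_fraction / n_processors)

  # Hypothetical 100-hour workload: 99% parallelizable, 1% stubbornly serial.
  for n in (10, 100, 10_000, 1_000_000):
      print(f"{n:>9} processors -> {amdahl_speedup(0.99, n):6.1f}x speedup")
  # Even with a million processors, the speedup creeps toward, but never reaches, 100x.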

But there’s another law about scalability on parallel computers, and it’s called Gustafson’s Law, named for the American computer scientist John Gustafson. Gustafson observed that people don’t just use supercomputers to solve existing problems more quickly. Instead, they exploit the additional resources available in supercomputers to solve larger problems. The larger the problem, the more amenable it is to parallelization. And so Gustafson proposed scaled speedup as an alternative metric, which takes this into account. As he put it: in practice, the problem size scales with the number of processors.
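Gustafson’s scaled speedup lends itself to the same kind of sketch, carrying over the hypothetical 1% serial fraction from above:

  def gustafson_scaled_speedup(serial_fraction, n_processors):
      # Scaled speedup per Gustafson's Law: the problem grows with the machine.
      return n_processors - serial_fraction * (n_processors - 1)

  # Same hypothetical 1% serial fraction as before.
  for n in (10, 100, 1_000):
      print(f"{n:>5} processors -> {gustafson_scaled_speedup(0.01, n):7.2f}x scaled speedup")
  # Because the problem size grows along with the machine, the achievable
  # speedup keeps growing roughly linearly with the number of processors.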

And that brings us to LLM-based coding agents.

AI coding agents improve programmer productivity: they can generate working code a lot more quickly than humans can. As a consequence of this productivity increase, I think we are going to find the same result that Gustafson observed at Sandia National Labs: that people will use this productivity increase in order to do more work, rather than simply do the same amount of coding work with fewer resources. This is a direct consequence of the law of stretched systems from cognitive systems engineering: systems always get driven to their maximum capacity. If coding agents save you time, you’re going to be expected to do additional work with that newfound time. You launch that agent, and then you go off on your own to do other work, and then you context-switch back when the agent is ready for more input.

And that brings us back to Amdahl: coordination still places a hard limit on how much you can actually do. Another finding from cognitive systems engineering is that coordination has costs, and those costs are ongoing: coordination work requires continuous investment of effort. The path we’re on feels like the work of software development is shifting from direct coding, to a human coordinating with a single agent, to a human coordinating work among multiple agents working in parallel. It’s possible that we will be able to fully automate this coordination work, by using agents to do the coordination. Steve Yegge’s Gas Town project is an experiment to see how far this sort of automated agent-based coordination can go. But I’m pessimistic on this front. I think that we’ll need human software engineers to coordinate coding agents for the foreseeable future. And the law of stretched systems teaches us that these multi-coding-agent systems are going to keep scaling up the number of agents until the human coordination work becomes the fundamental bottleneck.

From Rasmussen to Moylan

I hadn’t heard of James Moylan until I read a story about him in the Wall Street Journal after he passed away in December, but it turns out my gaze had fallen on one of his designs almost every day of my adult life. Moylan was the designer at Ford who came up with the idea of putting an arrow next to the gas tank symbol to indicate which side of the car the tank was on. It’s called the Moylan Arrow in his honor.

Source: Wikipedia, CC BY-SA 4.0

The Moylan Arrow put me in mind of another person we lost in 2025, the safety researcher James Reason. If you’ve heard of James Reason, it’s probably because of the Swiss cheese model of accidents that Reason proposed. But Reason made other conceptual contributions to the field of safety, such as organizational accidents and resident pathogens. The contribution that inspired this post was his model of human error described in his book Human Error. The model is technically called the Generic Error-Modeling System (GEMS), but I don’t know if anybody actually refers to it by that name. And the reason GEMS came to mind was because Reason’s model was itself built on top of another researcher’s model of human performance, Jens Rasmussen’s Skills, Rules and Knowledge (SRK) model.

Rasmussen was trying to model how skilled operators perform tasks, how they process information in order to do so, and how user interfaces like control panels could better support their work. He worked at a Danish research lab focused on atomic energy, and his previous work included designing a control room for a Danish nuclear reactor, as well as studying how technicians debugged problems in electronics circuits.

The part of the SRK model that I want to talk about here is the information processing aspect. Rasmussen draws a distinction between three different types of information, which he labels signals, signs, and symbols.

The signal is the most obvious type of visual information to absorb: minimal interpretation is required to make sense of it. Consider the example of reading the height of mercury in a thermometer to observe the temperature. There’s a direct mapping between the visual representation of the sensor and the underlying phenomenon in the environment – a higher level of mercury means a hotter temperature.

A sign requires some background knowledge in order to interpret the visual information, but once you have internalized that knowledge, you will be able to interpret the sign’s meaning very quickly. Traffic lights are one such example: there’s no direct physical relationship between a red-colored light and the notion of “stop”; it’s an indirect association, mediated by cultural knowledge.

A symbol requires more active cognitive work to make sense of. To take an example from my own domain, reading the error logs emitted by a service is a task that involves visual information processing of symbols. Interpreting log error messages is much more laborious than, say, interpreting a spike in an error rate graph.

(Note: I can’t remember exactly where I got the thermometer and traffic light examples from, but I suspect it was from A Meaning Processing Approach to Cognition by John Flach and Fred Voorhorst).

In his paper, Rasmussen describes signals as representing continuous variables. That being said, I propose the Moylan Arrow as a great example of a signal, even though the arrow does not represent a continuous variable. Moylan’s arrow doesn’t require background knowledge to learn how to interpret it, because there’s a direct mapping between the direction the triangle is pointing and the location of the gas tank.

Rasmussen maps these three types of information processing to three types of behavior (signals relate to skill-based behavior, signs relate to rule-based behavior, and symbols relate to knowledge-based behavior). James Reason created an error taxonomy based on these different behaviors. In Reason’s terminology, slips and lapses happen at the skill-based level, rule-based mistakes happen at the rule-based level, and knowledge-based mistakes happen at the knowledge-based level.

Rasmussen’s original SRK paper is a classic of the field. Even though it’s forty years old, because the focus is on human performance and information processing, I think it’s even more relevant today than when it was originally published: thanks to open source tools like Grafana and the various observability vendors out there, there are orders of magnitude more operator dashboards being designed today than there were back in the 1980s. While we’ve gotten much better at being able to create dashboards, I don’t think my field has advanced much at being able to create effective dashboards.

Telling the wrong story

In last Sunday’s New York Times Book Review, there was an essay by Jennifer Szalai titled Hannah Arendt Is Not Your Icon. I was vaguely aware of Arendt as a public intellectual of the mid-twentieth century, someone who was both philosopher and journalist. The only thing I really knew about her was that she had witnessed the trial of the Nazi official Adolf Eichmann and written a book on it, Eichmann in Jerusalem, subtitled A Report on the Banality of Evil. Eichmann, it turned out, was not a fire-breathing monster, but a bloodless bureaucrat. He was dispassionately doing logistics work; it just so happened that his work was orchestrating the extermination of millions.

Until now, when I’d heard any reference to Arendt’s banality of evil, it had been as a notable discovery that Arendt had made as a witness to the trial. And so I was surprised to read in Szalai’s essay how controversial Arendt’s ideas were when she originally published them. As Szalai noted:

The Anti-Defamation League urged rabbis to denounce her from the pulpit. “Self-Hating Jewess Writes Pro-Eichmann Book” read a headline in the Intermountain Jewish News. In France, Le Nouvel Observateur published excerpts from the book and subsequently printed letters from outraged readers in a column asking, “Hannah Arendt: Est-elle nazie?”

Hannah Arendt, it turns out, had told the wrong story.

We all carry in our minds models of how the world works. We use these mental models to make sense of events that happen in the world. One of the tools we have for making sense of the world is storytelling; it’s through stories that we put events into a context that we can understand.

When we hear an effective story, we will make updates to our mental models based on its contents. But something different happens when we hear a story that is too much at odds with our worldview: we reject the story, declaring it to be obviously false. In Arendt’s case, her portrayal of Eichmann was too strong a contradiction of prevailing beliefs about the type of people who could carry out something like the Holocaust.

You can see a similar phenomenon playing out with Michael Lewis’s book Going Infinite, about the convicted crypto fraudster Sam Bankman-Fried. The reception to Lewis’s book has generally been negative, and he has been criticized for being too close to Bankman-Fried to write a clear-eyed book about him. But I think something else is at play here. I think Lewis told the wrong story.

It’s useful to compare Lewis’s book with two other recent ones about Silicon Valley executives: John Carreyrou’s Bad Blood and Sarah Wynn-Williams’s Careless People. Both books focus on the immorality of Silicon Valley executives (Elizabeth Holmes of Theranos in the first book; Mark Zuckerberg, Sheryl Sandberg, and Joel Kaplan of Facebook in the second). These are tales of ambition, hubris, and utter indifference to the human suffering left in their wake. Now, you could tell a similar story about Bankman-Fried. In fact, this is what Zeke Faux did in his book Number Go Up. But that’s not the story that Lewis told. Instead, Lewis told a very different kind of story. His book is more of a character study of a person with an extremely idiosyncratic view of risk. The story Lewis told about Bankman-Fried wasn’t the story that people wanted to hear. They wanted another Bad Blood, and that’s not the book he ended up writing. As a consequence, he told the wrong story.

Telling the wrong story is a particular risk when it comes to explaining a public large-scale incident. We’re inclined to believe that a big incident can only happen because of a big screw-up: that somebody must have done something wrong for that incident to happen. If, on the other hand, you tell a story about how the incident happened despite nobody doing anything wrong, then you are in essence telling an unbelievable story. And, by definition, people don’t believe unbelievable stories.

One example of such an incident story is the book Friendly Fire: The Accidental Shootdown of U.S. Black Hawks over Northern Iraq by Scott Snook. Here are some quotes from the Princeton University Press site for that book (emphasis mine).

On April 14, 1994, two U.S. Air Force F-15 fighters accidentally shot down two U.S. Army Black Hawk Helicopters over Northern Iraq, killing all twenty-six peacekeepers onboard. In response to this disaster the complete array of military and civilian investigative and judicial procedures ran their course. After almost two years of investigation with virtually unlimited resources, no culprit emerged, no bad guy showed himself, no smoking gun was found. This book attempts to make sense of this tragedy—a tragedy that on its surface makes no sense at all.

His conclusion is disturbing. This accident happened because, or perhaps in spite of everyone behaving just the way we would expect them to behave, just the way theory would predict. The shootdown was a normal accident in a highly reliable organization.

Snook also told the wrong story, one that subverts our usual sensemaking processes rather than supporting them: the accident makes no sense at all.

This is why I think it’s almost impossible to do an effective incident investigation for a public large-scale incident. The risk of telling the wrong story is simply too high.

Verizon outage report predictions

Yesterday, Verizon experienced a major outage. The company hasn’t released any details yet about how the outage happened, so there are no quick takes to be had. And I have no personal experience in the telecom industry, and I’m not a network engineer, so I can’t even make any as-an-expert commentary, because I’m not an expert. But I still thought it would be fun to make predictions about what the public write-up will reveal. I can promise that all of these predictions are free of hindsight bias!

Maintenance

My prediction: post-incident investigation of today’s Verizon outage will reveal planned maintenance as one of the contributing factors. Note: I have no actual knowledge of what happened today. This prediction is just to keep me intellectually honest.

Lorin Hochstein (@norootcause.surfingcomplexity.com) 2026-01-14T21:26:53.396Z

On Bluesky, I predicted this incident involved planned maintenance, because the last four major telecom outages I read about all involved planned maintenance. The one foremost on my mind was the Optus emergency services outage that happened back in September in Australia, where the engineers were doing software upgrades on firewalls:

Work was being done to install a firewall upgrade at the Regency Park exchange in SA. There is nothing unusual about such upgrades in a network and this was part of a planned program, spread over six months, to upgrade eighteen firewalls. At the time this specific project started, fifteen of the eighteen upgrades had been successfully completed. – Independent Report – The Triple Zero Outage at Optus: 18 September 2025.

The one before that was the Rogers internet outage that happened in Canada back in July 2022.

In the weeks leading to the day of the outage on 8 July 2022, Rogers was executing on a seven-phase process to upgrade its IP core network. The outage occurred during the sixth phase of this upgrade process. – Assessment of Rogers Networks for Resiliency and Reliability Following the 8 July 2022 Outage – Executive Summary

There was also a major AT&T outage in 2024. From the FCC report:

On Thursday, February 22, 2024, at 2:42 AM, an AT&T Mobility employee placed a new network element into its production network during a routine night maintenance window in order to expand network functionality and capacity. The network element was misconfigured. – February 22, 2024 AT&T Mobility Network Outage REPORT AND FINDINGS

Verizon also suffered from a network outage back on September 30, 2024. Although the FCC acknowledged the outage, I couldn’t find any information from either Verizon or the FCC about the incident. The only information I was able to find about that outage comes from, of all places, a Reddit post. And it also mentions… planned maintenance!

So, we’re four for four on planned maintenance being in the mix.

I’m very happy that I did not pursue a career in network engineering: by the very nature of networks, the blast radius of networking changes can be very large. It’s the ultimate example of “nobody notices your work when you do it well, they only become aware of your existence when something goes wrong.” And, boy, can stuff go wrong!

To me, networking is one of those “I can’t believe we don’t have even more outages” domains. Because, while I don’t work in this domain, I’m pretty confident that planned maintenance happens all of the time.

Saturation

The Rogers and AT&T outages involved saturation. From the Rogers executive summary (emphasis added), which I quoted in my original blog post:

Rogers staff removed the Access Control List policy filter from the configuration of the distribution routers. This consequently resulted in a flood of IP routing information into the core network routers, which triggered the outage. The flood of IP routing data from the distribution routers into the core routers exceeded their capacity to process the information. The core routers crashed within minutes from the time the policy filter was removed from the distribution routers configuration. When the core network routers crashed, user traffic could no longer be routed to the appropriate destination. Consequently, services such as mobile, home phone, Internet, business wireline connectivity, and 9-1-1 calling ceased functioning.

From the FCC report on the AT&T outage:

Restoring service to commercial and residential users took several more hours as AT&T Mobility continued to observe congestion as high volumes of AT&T Mobility user devices attempted to register on the AT&T Mobility network. This forced some devices to revert back to SOS mode. For the next several hours, AT&T Mobility engineers engaged in additional actions, such as turning off access to congested systems and performing reboots to mitigate registration delays.

Saturation is such a common failure pattern in large-scale complex systems. We see it again and again, so often that I’m more surprised when it doesn’t show up. It might be that saturation contributed to a failure cascade, or that saturation made it more difficult to recover, but I’m predicting it’s in there somewhere.

“Somebody screwed up”

Here’s my pinned Bluesky post:

I have no information about how this incident came to be but I can confidently predict that people will blame it on greedy execs and sloppy devs, regardless of what the actual details are. And they will therefore learn nothing from the details.

Lorin Hochstein (@norootcause.surfingcomplexity.com) 2024-07-19T19:17:47.843Z

I’m going to predict that this incident will be attributed to engineers who didn’t comply with the documented procedure for making the change, the kind of classic “root cause: human error” stuff.

I was very critical of the Optus outage independent report for language like this:

These mistakes can only be explained by a lack of care about a critical service and a lack of disciplined adherence to procedure. Processes and controls were in place, but the correct process was not followed and actions to implement the controls were not done or not done properly.

The FCC report on the AT&T outage also makes reference to not following procedure (emphasis mine):

The Bureau finds that the extensive scope and duration of this outage was the result of several factors, all attributable to AT&T Mobility, including a configuration error, a lack of adherence to AT&T Mobility’s internal procedures, a lack of peer review, a failure to adequately test after installation, inadequate laboratory testing, insufficient safeguards and controls to ensure approval of changes affecting the core network, a lack of controls to mitigate the effects of the outage once it began, and a variety of system issues that prolonged the outage once the configuration error had been remedied.

The Rogers independent report, to its credit, does not blame the operators for the outage. So I’m generalizing from only two data points for this prediction. I will be very happy if I’m wrong.

“Poor risk management”

This one isn’t a prediction, just an observation of a common element of two of the reports: criticizing the risk assessment of the change that triggered the incident. Here’s the Optus report (emphasis in the original):

Risk was classified as ‘no impact’, meaning that there was to be no impact on network traffic, and the firewall upgrade was classified as urgent. This was the fifth mistake.

Similarly, the Rogers outage independent report blames the engineers for misclassifying the risk of the change:

Rogers classified the overall process – of which the policy filter configuration is only one of many parts – as “high risk”. However, as some earlier parts of the process were completed successfully, the risk level was reduced to “low”. This is an oversight in risk management as it took no consideration of the high-risk associated with BGP policy changes that had been implemented at the edge and affected the core.

Rogers had assessed the risk for the initial change of this seven-phased process as “High”. Subsequent changes in the series were listed as “Medium.” [redacted] was “Low” risk based on the Rogers algorithm that weighs prior success into the risk assessment value. Thus, the risk value for [redacted] was reduced to “Low” based on successful completion of prior changes.

The risk assessment rated as “Low” is not aligned with industry best practices for routing protocol configuration changes, especially when it is related to BGP routes distribution into the OSPF protocol in the IP core network. Such a configuration change should be considered as high risk and tested in the laboratory before deployment in the production network.

Unfortunately, it’s a lot easier to state “you clearly misjudged the risk!” than to ask “how did it make sense in the moment to assess the risk as low?”, and, hence, we learn nothing about how those judgments came to be.


I’m anxiously waiting to hear any more details about what happened. However, given that neither Verizon nor the FCC released any public information from the last outage, I’m not getting my hopes up.

On intuition and anxiety

Over at Aeon, there’s a thoughtful essay by the American anesthesiologist Ronald Dworkin about how he unexpectedly began suffering from anxiety after returning to work from a long vacation. During surgeries he became plagued with doubt, experiencing difficulty making decisions in scenarios that had never been a problem for him before.

Dworkin doesn’t characterize his anxiety as the addition of something new to his state of being. Instead, he interprets becoming anxious as having something taken away from him, as summed up by the title of his essay: When I lost my intuition. To Dworkin, anxiety is the absence of intuition, its opposite.

To compensate for his newfound challenges in decision-making, Dworkin adopts an evidence-based strategy, but the strategy doesn’t work. He struggles with a case that involves a woman who had chewed gum before her scheduled procedure. Gum chewing increases gastric juice in the stomach, which raises the risk of choking while under anesthetic. Should he delay the procedure? He looks to medical journals for guidance, but the anesthesiology studies he finds on the effect of chewing gum were conducted in different contexts from his situation, and their results conflict with each other. This decision cannot be outsourced to previous scientific research: studies can provide context, but he must make the judgment call.

Dworkin looks to psychology for insight into the nature of intuition, so he can make sense of what he has lost. He name checks the big ideas from both academic psychology and pop psychology about intuition, including Herb Simon’s bounded rationality, Daniel Kahneman’s System 1 and System 2, Roger Sperry’s concept of analytic left-brain, intuitive right-brain, and the Myers-Briggs personality test notion of intuitive vs analytical. My personal favorite, the psychologist Gary Klein, receives only a single sentence in the essay:

In The Power of Intuition (2003), the research psychologist Gary Klein says the intuitive method can be rationally communicated to others, and enhanced through conscious effort.

In addition, Klein’s naturalistic decision-making model is not even mentioned explicitly. Instead, it’s the neuroscientist Joel Pearson’s SMILE framework that Dworkin connects with the most. SMILE stands for self-awareness, mastery, impulse control, low probability, and environment. It’s through the lens of SMILE that Dworkin makes sense of how his anxiety has robbed him of his intuition: he lost awareness of his own emotional state (self-awareness), he overestimated the likelihood of complications during surgery (low probability), and his long vacation made the hospital feel like an unfamiliar place (environment). I hadn’t heard of Pearson before this essay, but I have to admit that his website gives off the sort of celebrity-academic vibe that arouses my skepticism.

While the essay focuses on the intuition-anxiety dichotomy, Dworkin touches briefly on another dichotomy, between intuition and science. Intuition is a threat to science, because science is about logic, observation, and measurement to find truth, and intuition is not. Dworkin mentions the incompatibility of science and intuition only in passing before turning back to the theme of the role of intuition in the work of the professional. The implication here is that professionals face different sorts of problems than scientists do. But I suspect the practice of real science involves a lot more intuition than this stereotyped view of it. I could not help thinking of the “Feynman Problem Solving Algorithm”, so named because it is attributed to the American physicist Richard Feynman:

  1. Write down the problem
  2. Think real hard
  3. Write down the solution

Intuition certainly plays a role in step 2!

Eventually, Dworkin became comfortable again making the sort of high-consequence decisions under uncertainty that are required of a practicing anesthesiologist. As he saw it, his intuition returned. And, though he still experienced some level of doubt about his decisions, he came to realize that there was never a time when his medical decisions had been completely free of doubt: that was an illusion.

In the software operations world, we are often faced with these sorts of potentially high-consequence decisions under uncertainty, especially during incident response. Fortunately for us, the stakes are lower: lives are rarely on the line in the way that they are for doctors, especially when it comes to surgical procedures. But it’s no coincidence that How Complex Systems Fail was also written by an anesthesiologist. As Dr. Richard Cook reminds us in that short paper: all practitioner actions are gambles.

The dangers of SSL certificates

Yesterday, the Bazel team at Google did not have a very Merry Boxing Day. An SSL certificate expired for https://bcr.bazel.build and https://releases.bazel.build, as shown in this screenshot from the GitHub issue.

This expired certificate apparently broke the build workflow for Bazel users, who were faced with the following error message:

ERROR: Error computing the main repository mapping: Error accessing registry https://bcr.bazel.build/: Failed to fetch registry file https://bcr.bazel.build/modules/platforms/0.0.7/MODULE.bazel: PKIX path validation failed: java.security.cert.CertPathValidatorException: validity check failed

After mitigation, Xùdōng Yáng provided a brief summary of the incident on the GitHub ticket.

Say the words “expired SSL certificate” to any senior software engineer and watch the expression on their face. Everybody in this industry has been bitten by expired certs, including people who work at orgs that use automated certificate renewal. In fact, this very case is an example of an automated certificate renewal system that failed! From the screenshot above:

it was an auto-renewal being bricked due to some new subdomain additions, and the renewal failures didn’t send notifications for whatever reason.

The reality is that SSL certificates are a fundamentally dangerous technology, and the Bazel case is a great example of why. With SSL certificates, you usually don’t have the opportunity to build up operational experience working with them, unless something goes wrong. And things don’t go wrong that often with certificates, especially if you’re using automated cert renewal! That means when something does go wrong, you’re effectively starting from scratch to figure out how to fix it, which is not a good place to be. Once again, from that summary:

And then it took some Bazel team members who were very unfamiliar with this whole area to scramble to read documentation and secure permissions…

Now, I don’t know the specifics of the Bazel team composition: it may very well be that they have local SSL certificate expertise on the team, but those members were out-of-office because of the holiday. But even if that’s the case, with an automated set-it-and-forget-it solution, the knowledge isn’t going to spread across the team, because why would it? It just works on its own.

That is, until it stops working. And that’s the other dangerous thing about SSL certificates: the failure mode is the opposite of graceful degradation. It’s not like an increasing percentage of requests fail as you get closer to the deadline. Instead, one minute everything’s working just fine, and the next minute, every HTTPS request fails. There’s no natural signal back to the operators that the SSL certificate is getting close to expiry. To make things worse, there’s no staging of the change that triggers the expiration, because the change is time, and time marches on for everyone. You can’t set the SSL certificate expiration so it kicks in at different times for different cohorts of users.

In other words, SSL certs are a technology with an expected failure mode (expiration) that absolutely maximizes blast radius (a hard failure for 100% of users), without any natural feedback to operators that the system is at imminent risk of critical failure. And with automated cert renewal, you are increasing the likelihood that the responders will not have experience with renewing certificates.
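Teams end up having to build that feedback signal themselves. Here’s a minimal sketch of the kind of explicit expiry check I mean, using Python’s standard library; the host name and the two-week threshold are placeholders, not anything the Bazel team actually runs:

  import socket
  import ssl
  import time

  def days_until_cert_expiry(host, port=443):
      # Fetch the certificate the host presents and return days until it expires.
      context = ssl.create_default_context()
      with socket.create_connection((host, port), timeout=10) as sock:
          with context.wrap_socket(sock, server_hostname=host) as tls:
              cert = tls.getpeercert()
      # cert["notAfter"] looks like "Jun  1 12:00:00 2026 GMT"
      expires = ssl.cert_time_to_seconds(cert["notAfter"])
      return (expires - time.time()) / 86400

  # The failure mode is all-or-nothing, so alert well before the hard cutoff.
  if days_until_cert_expiry("example.com") < 14:
      print("WARNING: certificate expires in under two weeks")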

Is it any wonder that these keep biting us?

Saturation: Waymo edition

If you’ve been to San Francisco recently, you will almost certainly have noticed the Waymo robotaxis: these are driverless cars that you can hail with an app the way that you can with Uber. This past Sunday, San Francisco experienced a pretty significant power outage. One unexpected consequence of this power outage was that the Waymo robotaxis got stuck.

Today, Waymo put up a blog post about what happened, called Autonomously navigating the real world: lessons from the PG&E outage. Waymos are supposed to treat intersections with traffic lights out as four-way stops, the same way that humans do. So, what happened here? From the post (emphasis added):

While the Waymo Driver is designed to handle dark traffic signals as four-way stops, it may occasionally request a confirmation check to ensure it makes the safest choice. While we successfully traversed more than 7,000 dark signals on Saturday, the outage created a concentrated spike in these requests. This created a backlog that, in some cases, led to response delays contributing to congestion on already-overwhelmed streets.

The post doesn’t go into detail about what a confirmation check is. My interpretation based on the context is that it’s a put-a-human-in-the-loop thing, where a remote human teleoperator checks to see if it’s safe to proceed. It sounds like the workload on the human operators was just too high to process all of these confirmation checks in a timely manner. You can’t just ask your cloud provider for more human operators the way you can request more compute resources.

The failure mode that Waymo encountered is a classic example of saturation, which is a topic I’ve written about multiple times in this blog. Saturation happens when the system is not able to keep up with the load that is placed upon it. Because all systems have finite resources, saturation is an ever-present risk. And because saturation only happens under elevated load, it’s easy to miss this risk. There are many different things in your system that can run out of resources, and it can be hard to imagine the scenarios that can lead to exhaustion for each of them.
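To make the arithmetic of saturation concrete, here’s a toy sketch with entirely invented numbers (Waymo hasn’t published its actual staffing levels or request rates):

  # Toy illustration of saturation; every number here is made up.
  operators = 50                   # remote humans available for confirmation checks
  checks_per_operator_per_min = 4
  capacity_per_min = operators * checks_per_operator_per_min   # 200 checks/minute

  normal_arrival_per_min = 150     # normal load: plenty of headroom, no backlog
  outage_arrival_per_min = 350     # concentrated spike during the power outage

  backlog_growth_per_min = outage_arrival_per_min - capacity_per_min   # 150/minute
  print(f"Backlog grows by {backlog_growth_per_min} checks every minute the spike lasts")
  # Twenty minutes in, 3,000 confirmation checks are queued and waits keep growing:
  # the same system that looked comfortably provisioned is now saturated.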

Here’s another quote from the post. Once again, emphasis mine.

We established these confirmation protocols out of an abundance of caution during our early deployment, and we are now refining them to match our current scale. While this strategy was effective during smaller outages, we are now implementing fleet-wide updates that provide the Driver with specific power outage context, allowing it to navigate more decisively.

This confirmation-check behavior was explicitly implemented in order to increase safety! It’s yet another reminder of how work to increase safety can lead to novel, unanticipated failure modes. Strange things are going to happen, especially at scale.

Another way to rate incidents

Every organization I’m aware of that does incident management has some sort of severity rating system. The highest severity is often referred to as either a SEV1 or a SEV0 depending on the organization. (As is our wont, we software types love arguing about whether indexing should begin at zero or at one).

Severity can be a useful shorthand for communicating to an organization during incident response, although it’s a stickier concept than most people realize (for example, see Em Ruppe’s SRECon ’24 talk What Is Incident Severity, but a Lie Agreed Upon? and Dan Slimmon’s blog post Incident SEV scales are a waste of time). However, after the incident has been resolved, severity serves a different purpose: the higher the severity, the more attention the incident will get in the post-incident activities. I was reminded of this by John Allspaw’s Fix-mas Pep Talk, which is part of Uptime Labs’s Fix-mas Countdown, a sort of Advent of Incidents. In the short video, John argues for the value in spending time analyzing lower severity incidents, instead of only analyzing the higher severity ones.

Even if you think John’s idea is a good one (and I do!), lower severity incidents happen more often than higher severity ones, and you probably don’t have the resources to analyze every single lower severity incident that comes along. And that got me thinking: what if, in addition to rating an incident by severity, we also gave each incident a separate rating on its learning potential? This would be a judgment on how much insight we think we would get if we did a post-incident analysis, which would help us decide whether we should spend the time actually doing that investigation.

Now, there’s a paradox here, because we have to make this call before we’ve done an actual post-incident investigation, which means we don’t yet know what we’ll learn! And, so often, what appears on the surface to be an uninteresting incident is actually much more complex once we start delving into the details.

However, we all have a finite number of cycles. And so, like it or not, we always have to make a judgment about which incidents we’re going to spend our engineering resources on analyzing. The reason I like the idea of having a learning potential assessment is that it forces us to put initial investment into looking for those interesting threads that we could pull on. And it also makes explicit that severity and learning potential are two different concerns. And, as software engineers, we know that separation of concerns is a good idea!

Quick takes on the Triple Zero Outage at Optus – the Schott Review

On September 18, 2025, the Australian telecom company Optus experienced an incident where many users were unable to make emergency service calls from their cell phones. For almost 14 hours, about 75% of calls made to 000 (the Australian version of 911) from South Australia, Western Australia, the Northern Territory, and parts of New South Wales failed to go through.

The Optus Board of Directors commissioned an independent review of the incident, led by Dr. Kerry Schott. On Thursday, Optus released Dr. Schott’s report, which the press release refers to as the Schott Review. This post contains my quick thoughts on the report.

As always, I recommend that you read the report yourself first before reading this post. Note that all quotes are from the Schott Review unless indicated otherwise.

The failure mode: a brief summary

I’ll briefly recap my understanding of the failure mode, based on my reading of the report. (I’m not a network engineer, so there’s a good chance I’ll get some details wrong here).

Optus contracts with Nokia to do network operations work, and the problem occurred while Nokia network engineers were carrying out a planned software upgrade of multiple firewalls at the request of Optus.

There were eighteen firewalls that required upgrading. The first fifteen upgrades were successful: the problem occurred when upgrading the sixteenth firewall. Before performing the upgrade, the network engineer isolated the firewall so that it would not serve traffic while it was being upgraded. However, the traffic that would normally pass through this firewall was not rerouted to another active firewall. The resulting failure mode only affected emergency calls: regular phone calls that would normally traverse the offline firewall were automatically rerouted, but the emergency calls were not. As a result, 000 calls were blocked for the customers whose calls would normally traverse this firewall.

Mistakes were made

These mistakes can only be explained by a lack of care about a critical service and a lack of disciplined adherence to procedure. Processes and controls were in place, but the correct process was not followed and actions to implement the controls were not done or not done properly. – Independent Report – The Triple Zero Outage at Optus: 18 September 2025

One positive thing I’ll say about the tech industry: everybody at least pays lip service to the idea of blameless incident reviews. The Schott Review, on the other hand, does not. The review identifies ten mistakes that engineers made:

  1. Some engineers failed to attend project meetings to assess impact of planned work.
  2. The pre-work plan did not clearly include a traffic rerouting before the firewall was isolated.
  3. Nokia engineers chose the wrong method of procedure to implement the firewall upgrade change request (missing traffic rerouting).
  4. The three Nokia engineers who reviewed the chosen method of procedure did not notice the missing traffic rerouting step.
  5. The risk of the change was incorrectly classified as ‘no impact’, and the urgency of the firewall upgrade was incorrectly classified as ‘urgent’.
  6. The firewall was incorrectly classified as ‘low risk’ in the configuration management database (CMDB) asset inventory.
  7. The upgrade was performed using the wrong method of procedure.
  8. Both a Nokia and an Optus alert fired during the upgrade. Nokia engineers assumed the alert was a false alarm due to the upgrade. Optus engineers opened an incident and reached out to Nokia command centre to ask if it was related to the upgrade. No additional investigation was done.
  9. The network engineers who did the checks to ensure that the upgrade was successful did not notice that the call failure rates were increasing.
  10. Triple Zero call monitoring was aggregated at the national level, so it could not identify region-specific failures.

Man, if I ever end up teaching a course in incident investigations, I’m going to use this report as an example of what not to do. This is hindsight-bias-palooza, with language suffused with human error.

What’s painful to me reading this is the acute absence of curiosity about how it was that these mistakes came to happen. For example, mistakes 5 and 6 involve classifications. The explicit decision to make those classifications must have made sense to the person in the moment, otherwise they would not have made those decisions. And yet, the author conveys zero interest in that question at all.

In other cases, the mistakes are things that were not noticed. For example, mistake 4 involves three (!) reviewers not noticing an issue with a procedure, and mistake 9 involves not noticing that call failure rates were increasing. But it’s easy to point at a graph after the incident and say “this graph indicates a problem”. If you want to actually improve, you need to understand what conditions led to that not being noticed when the actual check happened. What was it about the way the work is done that made it less likely that this would be seen? For example, is there a UX issue in the standard dashboards that makes this hard to see? I don’t know the answer to that from reading the report, and I suspect Dr. Schott doesn’t either.

What’s most damning, though, is the lack of investigation into what are labeled as mistakes 2 and 3:

The strange thing about this specific Change Request was that in all past six IX firewall upgrades in this program – one as recent as 4 September – the equipment was not isolated and no lock on any gateway was made.

For reasons that are unclear, Nokia selected a 2022 Method of Procedure which did not automatically include a traffic diversion before the gateway was locked.

How did this work actually get done? The report doesn’t say.

Now, you might say, “Hey, Lorin, you don’t know what constraints that Dr. Schott was working under. Maybe Dr. Schott couldn’t get access to the people who knew the answers to these questions?” And that would be an absolutely fair rejoinder: it would be inappropriate for me to pass judgment here without having any knowledge about how the investigation was actually done. But this is the same critique I’m leveling at this report: it passes judgment on the engineers who made decisions without actually looking at how real work normally gets done in this environment.

I also want to circle back to this line:

These mistakes can only be explained by a lack of care about a critical service and a lack of disciplined adherence to procedure. Processes and controls were in place, but the correct process was not followed and actions to implement the controls were not done or not done properly (emphasis mine).

None of the ten listed mistakes are about a process not being followed! In fact, the process was followed exactly as specified. This makes the thing labeled mistake 7 particularly egregious, because the mistake was the engineer correctly carrying out the selected and peer-reviewed process!

No acknowledgment of ambiguity of real work

The fact that alerts can be overlooked because they are related to ongoing equipment upgrade work is astounding, when the reason for those alerts may be unanticipated problems caused by that work.

The report calls out the Nokia and Optus engineers for not investigating the alerts that fired during the upgrade, describing this as astounding. Anybody who has done operational work can tell you that the signals that you receive are frequently ambiguous. Was this one such case? We can’t tell from the report.

In the words of the late Dr. Richard Cook:

11) Actions at the sharp end resolve all ambiguity. Organizations are ambiguous, often intentionally, about the relationship between production targets, efficient use of resources, economy and costs of operations, and acceptable risks of low and high consequence accidents. All ambiguity is resolved by actions of practitioners at the sharp end of the system. After an accident, practitioner actions may be regarded as ‘errors’ or ‘violations’ but these evaluations are heavily biased by hindsight and ignore the other driving forces, especially production pressure. — Richard Cook, How Complex Systems Fail

I personally find it astounding that somebody conducting an incident investigation would not delve deeper into how a decision that appears to be astounding would have made sense in the moment.

Some unrecognized ironies

There are several ironies in the report that I thought were worth calling out (emphasis mine in each case).

The firewall upgrade in this case was an IX firewall and no traffic diversion or isolation was required or performed, consistent with previous (successful) IX firewall upgrades.
Nevertheless, having expressed doubts about the procedure, it was decided by the network engineer to isolate the firewall to err on the side of caution. The problem with this decision was that the equipment was isolated, but traffic was not diverted.

Note how a judgment call intended to reduce risk actually increased it!

At the time of the incident, the only difference between the Optus network and other telecommunication networks was the different ‘emergency time-out’ setting. This setting (timer) controls how long an emergency call remains actively attempting to connect. The emergency inactivity timer at Optus was set at 10 seconds, down from the previous 600 seconds, following regulator concerns that Google Pixel 6A devices running Android 12 were unable to reconnect to Triple Zero after failed emergency calls. Google advised users to upgrade to Android 13, which resolved the issue, but Optus also reduced their emergency inactivity timer to 10 seconds to enable faster retries after call failures.

We understand that other carriers have longer time-out sets that may range from 150 to 600 seconds. It appears that the 10 second timing setting in the Optus network was the only significant difference between Optus’ network and other Australian networks for Triple Zero behaviour. Since this incident this time setting has now been changed by Optus to 600 seconds.

One of the contributors was a timeout, which had been previously reduced from 600 seconds to 10 seconds in order to address a previous problem with failed emergency calls on specific devices.

Customers should be encouraged [7] to test their own devices for Triple Zero calls and, if in doubt, raise the matter with Optus.

[7] This advice is contrary to Section 31 of the ECS Determination to ‘take steps to minimise non-genuine calls’. It is noted, however, that all major carriers appear to be pursuing this course of action.

In one breath, the report attributes the incident to a lack of disciplined adherence to procedure. In another, the report explicitly recommends that customers test that 000 is working, noting in passing that this advice contradicts Australian government policy.

Why a ‘human error’ perspective is dangerous

The reason a ‘human error’ perspective like the one in this report is dangerous is that it hides away the systemic factors that led to those errors in the first place. By identifying the problem as engineers who failed to follow the procedure or were careless (what I call sloppy devs), we learn nothing about how real work in the system actually happens. And if you don’t understand how the real work happens, you won’t understand how the incident happened.

Two bright spots

Despite these criticisms, there are two sections that I wanted to call out as positive examples, as the kind of content I’d like to see more of in these sorts of documents.

Section 4.3: Contract Management addresses a systemic issue, the relationship between Optus and a vendor of theirs, Nokia. This incident involved an interaction between the two organizations. Anyone who has been involved in an incident that involves a vendor can tell you that coordinating across organizations is always more difficult than coordinating within an organization. The report notes that Optus’s relationship with Nokia has historically been transactional, and suggests that Optus might consider whether it would see benefits moving to [a partnership] style of contract management for complex matters.

Section 5.1: A General Note on Incident Management discusses how it is more effective to have on hand a small team of trained staff who have the capacity to adapt to circumstances as they unfold than to have a large document that describes how to handle different types of emergency scenarios. If this observation gets internalized by Optus, then I think the report is actually net positive, despite my other criticisms.

What could have been

Another irony here is that Australia has a deep well of safety experts to draw from, thanks to the Griffith University Safety Science Innovation Lab. I wish a current or former associate of that lab had been contacted to do this investigation. Off the top of my head, I can name Sidney Dekker, David Provan, Drew Rae, Ben Hutchinson, and Georgina Poole. Any of them would have done a much better job.

In particular, The Schott Review is a great example of why Dekker’s The Field Guide to Understanding ‘Human Error’ remains such an important book. I presume the author of the report has never read it.

The Australian Communications and Media Authority (ACMA) is also performing an investigation into the Optus Triple Zero outage. I’m looking forward to seeing how their report compares to the Schott Review. Fingers crossed that they do a better job.

Why I don’t like “Correction of Error”

Like many companies, AWS has a defined process for reviewing incidents. They call their process Correction of Error. For example, there’s a page on Correction of Error in their Well-Architected Framework docs, and there’s an AWS blog entry titled Why you should develop a correction of error (COE).

In my previous blog post on the AWS re:Invent talk about the us-east-1 incident that happened back in October, I wrote the following:

Finally, I still grit my teeth whenever I hear the Amazonian term for their post-incident review process: Correction of Error.

On LinkedIn, Michael Fisher asked me: “Why does the name Correction of Error make you grit your teeth?” Rather than just reply in a LinkedIn comment, I thought it would be better if I wrote a blog post instead. Nothing in this post will be new to regular readers of this blog.


I hate the term “Correction of Error” because it implies that incidents occur as a result of errors. As a consequence, it suggests that the function of a post-incident review process is to identify the errors that occurred and to fix them. I think this view of incidents is wrong, and dangerously so: It limits the benefits we can get out of an incident review process.

Now, I will agree that, in just about every incident that happens, you can identify errors during the post-incident review work. This almost always involves identifying one or more bugs in code (I’ll call these software defects here). You can also identify process errors: things that people did that, in hindsight, we can recognize as having contributed to an incident. (As an aside, see Scott Snook’s book Friendly Fire: The Accidental Shootdown of U.S. Black Hawks over Northern Iraq for a case study of an incident where no such errors were even identified. But I’ll still assume here that we can always identify errors after an incident.)

So, if I agree that there are always errors involved in incidents, what’s my problem with “Correction of Error”? In short, my problem with this view is that it fundamentally misunderstands the role of both software defects and human work in incidents.

More software defects than you can count

Wall Street indexes predicted nine out of the last five recessions! And its mistakes were beauties. – Paul Samuelson

Yes, you can always find software defects in the wake of an incident. The problem with attributing an incident to a software defect is that modern software systems running in production are absolutely riddled with defects. I cannot count the times I’ve read about an incident that involved a software defect that had been present for months or even years, and had gone completely undetected until conditions arose where it contributed to an outage. My claim is that there are many, many, many such defects in your system that have yet to manifest as outages. Indeed, I think that most of these defects will never manifest as outages.

If your system is currently up (which I bet it is), and if your system currently has multiple undetected defects in it (which I also bet it does), then it cannot be the case that defects are a sufficient condition for incidents to occur. In other words, defects alone can’t explain incidents. Yes, they are part of the story of incidents, but only a part of it. Focusing solely on the defects, the errors, means that you won’t look at the systemic nature of system failures. You will not see how the existing defects interact with other behaviors in the system to enable the incident. You’re looking for errors, and unexpected interactions aren’t “errors”.

For more of my thoughts on this point, see my previous posts The problem with a root cause is that it explains too much, Component defects: RCA vs RE and Not causal chains, but interactions and adaptations.

On human work and correcting errors

Another concept that error invokes in my mind is what I called process error above. In my experience, this typically refers to either insufficient validation during development work that led to a defect making it into production, or an operational action that led to the system getting into a bad state (for example, a change to network configuration that results in services accidentally becoming inaccessible). In these cases, correction of error implies making a change to development or operational processes in order to prevent similar errors happening in the future. That sounds good, right? What’s the problem with looking at how the current process led to an error, and changing the process to prevent future errors?

There are two problems here. One problem is that there might not actually be any kind of “error” in the normal work processes: these processes may actually be successful in virtually every circumstance. Imagine if I asked, “let’s tally up the number of times the current process did not lead to an incident, versus the number of times that the current process did lead to an incident, and use that to score how effective the process is.” Correction of error implies to me that you’re looking to identify an error in the work process; it does not imply “let’s understand how the normal work actually gets done, and how it was a reasonable thing for people to typically work that way.” In fact, changing the process may actually increase the risk of future incidents! You could be adding constraints, which could potentially lead to new dangerous workarounds. What I want is a focus on understanding how the work normally gets done. Correction of error implies the focus is specifically on identifying the error and correcting it, not on understanding the nature of the work and how decisions made sense in the moment.

Now, sometimes people need to use workarounds in order to get their work done because there are constraints that are preventing them from doing the work the way they are supposed to do it, and the workaround is dangerous in some way, and that contributes to the incident. And this is an important insight to take away from an incident! But this type of workaround isn’t an error, it’s an adaptation. Correction of error to me implies changing work practices identified as erroneous. Changing work practices is typically done by adding constraints on the way work is done. And it’s exactly these types of constraints that can increase risk!


Remember, errors happen every single day, but incidents don’t. Correction of error evokes the idea that incidents are caused by errors. But until you internalize that errors aren’t enough to explain incidents, you won’t understand how incidents actually happen in complex systems. And that lack of understanding will limit how much you can genuinely improve the reliability of your system.

Finally, I think that correcting errors is such a feeble goal for a post-incident process. We can get so much more out of post-incident work. Let’s set our sights higher.