You’ll never see attrition referenced in an RCA

In the wake of the recent AWS us-east-1 outage, I saw speculation online about how the departure of experienced engineers played a role in the outage. The most notable example was from the acerbic cloud economist Corey Quinn, in a column he wrote for The Register: Amazon brain drain finally sent AWS down the spout. Amazon’s recent announcement that it will be laying off about 14,000 employees, which includes cuts to AWS, has added fuel to that fire, as I saw in a LinkedIn post by Java luminary and former AWS-er James Gosling that referenced another speculative column on the subject, Amazon Just Proved AI Isn’t The Answer Yet Again. I’m not going to comment on the accuracy of these assessments, or more broadly on the role that attrition played in this particular incident, because I don’t have any special knowledge here. Instead, I want to use this as an opportunity to talk about the relationship between attrition and incidents, and how that relationship is captured in incident write-ups, both public and internal.

In a public incident write-up, or an RCA provided by a vendor to a customer, you’re never going to see any discussion of the role of attrition. This is because, as John Allspaw notes in his post What makes public posts about incidents different from analysis write-ups, the purpose of a public write-up is to reassure the audience that the problem that caused the incident is being addressed. This means that the write-up will focus on describing a technical problem and alluding to the technical solution being implemented to fix it. Attrition isn’t a technical problem; it’s a completely different type of phenomenon. And, as we’ve seen with the recent Amazon layoff announcement, attrition is sometimes an explicit business decision. If a company like Amazon mentioned attrition in a public write-up, it would be much more difficult to answer a question like “how will your upcoming layoff increase the risk of incidents?” There’s no plausible deniability (“it won’t increase the risk of incidents”) if you’ve previously talked about attrition in a public write-up. Because talking about attrition doesn’t fulfill the confidence-building role of the write-up, it’s never going to find its way into a document intended for outsiders.

Internal incident write-ups serve a different purpose, and so they don’t have this problem. Indeed, in my own career, I have seen references to the departure of expertise in internal incident write-ups. The first example that comes to mind is the hot potato scenario: there’s a critical service whose original authors are no longer at the company, and the team that originally owned it no longer exists, so another team becomes responsible for operating the service even though they don’t have deep knowledge of how it actually works. And because the service is so reliable, the team that now owns it never accumulates operational experience with it. I would wager that every tech company of a certain size has seen this pattern. I’ve also frequently heard discussion of bus factor, which is an explicit reference to attrition risk.

Still, while referencing attrition isn’t a taboo in an internal incident write-up the way it is in a public incident write-up, you’re still not likely to see the topic discussed there. Internal incident write-ups take a narrow view of system failures, focusing on technical details. I wrote a blog post several years ago titled What’s allowed to count as a cause?, and attrition is an example of an issue that falls squarely in the “not allowed to count” category.

Now, you might say, “Lorin, this is exactly why five whys is good, so we can zoom out to identify systemic issues.” My response would be, “attrition is never going to be the sole reason for a failure in a complex system, and identifying only attrition as a factor is just as bad as identifying a different factor and neglecting attrition, because you’re missing so much.” I think of the role of attrition as a contributor to incidents the way that smoking is a contributor to lung cancer, or that climate change is a contributor to severe weather events. It isn’t possible to attribute a particular incidence of lung cancer to smoking, or a particular severe storm to climate change: smoking is neither necessary nor sufficient for lung cancer, and climate change is neither necessary nor sufficient for a particular storm to be severe. But as with attrition, smoking and climate change are factors that increase risk. If you use a root cause analysis approach to understanding incidents, you’ll miss the role of contributing factors like attrition.

I would go so far as to say that organizational factors play a role in every major incident, where attrition is just one example of an organizational factor. The fact that these don’t appear in the write-up says more about the questions that people didn’t ask than it does about the nature of the incident.

Quick thoughts on the recent AWS outage

AWS recently posted a public write-up of the us-east-1 incident that hit them this past Monday. Here are a couple of quick thoughts on it.

Reliability → Automation → Complexity → New failure modes

Our industry addresses reliability problems by adding automation so that the system can handle faults automatically. But here’s the thing: adding this sort of automation increases the complexity in the system. This increase in complexity due to more sophisticated automation brings two costs along with it. One cost is that the behavior of the system becomes more difficult to reason about. This is the “what is it currently doing, and why is it doing that?” problem that we operators face. The second cost of the increased complexity is that, while this automation eliminates a known class of failure modes, it simultaneously introduces a new class of failure modes. These new failure modes occur much less frequently than the class of failure modes that were eliminated, but when they do occur, they are potentially much more severe.

According to Amazon’s write-up, the triggering event was the unintentional deletion of DNS records related to the DynamoDB service due to a race condition. Even though DNS records were fully restored by 2:25 AM PDT, it wasn’t until 3:01 PM, over twelve and a half hours later, that Amazon declared that all AWS services had been fully restored.

There were multiple issues that complicated the restoration of different AWS services, but the one I want to call out here involved the Network Load Balancer (NLB) service. Delays in the propagation of network state information led to false health check failures: there were EC2 instances that were healthy, but that the NLB categorized as unhealthy because of the network state issue. From the report:

During the event the NLB health checking subsystem began to experience increased health check failures. This was caused by the health checking subsystem bringing new EC2 instances into service while the network state for those instances had not yet fully propagated. This meant that in some cases health checks would fail even though the underlying NLB node and backend targets were healthy. This resulted in health checks alternating between failing and healthy. This caused NLB nodes and backend targets to be removed from DNS, only to be returned to service when the next health check succeeded.

This pathological health check behavior led to availability zone DNS failovers, which reduced capacity and led to connection errors.

The alternating health check results increased the load on the health check subsystem, causing it to degrade, resulting in delays in health checks and triggering automatic AZ DNS failover to occur. For multi-AZ load balancers, this resulted in capacity being taken out of service. In this case, an application experienced increased connection errors if the remaining healthy capacity was insufficient to carry the application load.

Health checks are a classic example of an automation system that is designed to improve reliability. It’s not uncommon for an instance to go unhealthy for some reason, and being able to automatically detect when that happens and take the instance out of the load balancer means that your system can automatically handle failures in individual instances. But, as we see in this case, the presence of this reliability-improving automation made a particular problem (delays in network state propagation) even worse.
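To make the failure mode concrete, here’s a minimal sketch of the dynamic, written by me purely for illustration (it is not AWS’s implementation): a health checker that can’t tell the difference between a dead instance and a healthy instance whose network state hasn’t propagated yet will flap that instance in and out of service.

    import random

    # Illustrative sketch only: a health checker that conflates "instance is
    # down" with "network state for this instance hasn't propagated yet".
    class Target:
        def __init__(self, name):
            self.name = name
            self.instance_healthy = True   # the backend itself is fine throughout
            self.in_service = True

    def reachable(target):
        # During the event, delayed network state propagation makes
        # reachability intermittent even though the instance is healthy.
        return target.instance_healthy and random.random() > 0.5

    def run_health_checks(targets, rounds=6):
        for r in range(rounds):
            for t in targets:
                passed = reachable(t)
                if passed and not t.in_service:
                    print(f"round {r}: {t.name} passed, adding back to DNS")
                if not passed and t.in_service:
                    print(f"round {r}: {t.name} failed, removing from DNS")
                t.in_service = passed

    run_health_checks([Target("nlb-node-1"), Target("nlb-node-2")])

Every one of those remove/add transitions is extra work for the health checking subsystem and a DNS change visible to clients, which is how a propagation delay got amplified into lost capacity and connection errors.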

As a result of this incident, Amazon is going to change the behavior of the NLB logic in the case of health check failures.

For NLB, we are adding a velocity control mechanism to limit the capacity a single NLB can remove when health check failures cause AZ failover.

Note that this is yet another increase in automation complexity with the goal of improving reliability! That doesn’t mean that this is a bad corrective action, or that health checks are bad. Instead, my point here is that adding automation complexity to improve reliability always involves a trade-off. It’s very easy to forget about that trade-off if you focus only on the existing reliability problem you’re trying to tackle and don’t even consider what new reliability problems you are introducing. Even if those new problems are rare, they can be extremely painful, as AWS can attest.
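For what it’s worth, here’s a rough sketch of what a velocity control can look like. This is my own generic illustration of the idea named in the quote above, not a description of AWS’s actual mechanism, and the threshold is made up.

    # Hypothetical velocity control: cap how much capacity health-check-driven
    # failover is allowed to remove from service at once.
    MAX_REMOVAL_FRACTION = 0.2  # illustrative value, not an AWS number

    def apply_failover(all_targets, failed_targets):
        budget = int(len(all_targets) * MAX_REMOVAL_FRACTION)
        to_remove, deferred = failed_targets[:budget], failed_targets[budget:]
        for t in to_remove:
            t.in_service = False
        if deferred:
            # Beyond the cap, leave capacity in service and escalate to humans
            # rather than letting the automation amplify the problem.
            print(f"velocity limit hit: {len(deferred)} targets left in service, paging operators")
        return to_remove, deferred

Notice that this knob has its own failure mode: if the failing targets really are unhealthy, the cap keeps bad capacity in service. That’s the trade-off again.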

I’ve written previously about failures due to reliability-improving automation. The other examples from my linked post are also from AWS incidents, but this phenomenon is in no way specific to AWS.

Surprise should not be surprising

Since this situation had no established operational recovery procedure, engineers took care in attempting to resolve the issue with [the DropletWorkflow Manager] without causing further issues.

The Amazon engineers didn’t have a runbook to handle this failure scenario, which meant that they had to improvise a recovery strategy during incident response. This is a recurring theme in large-scale incidents: they involve failures that nobody had previously anticipated. The only thing we can really predict about future high-severity incidents is that they are going to surprise us. We are going to keep encountering failure modes we never anticipated, over and over again.

It’s tempting to focus your reliability engineering resources on reducing the risk of known failure modes. But if you only prepare for the failure scenarios that you can think of, then you aren’t putting yourself in a better position to deal with the inevitable situation that you never imagined could happen. And the fact that you’re investing in reliability-improving-but-complexity-increasing automation means that you are planting the seeds of those future surprising failure modes.

This means that if you want to improve reliability, you need to invest both in the complexity-increasing reliability automation (robustness) and in the capacity to better deal with future surprises (resilience). The resilience engineering researcher David Woods uses the term net adaptive value to describe the ability of a system to deal with predicted failure modes and to adapt effectively to unpredicted ones.

Part of investing in resilience means building human-controllable leverage points so that engineers have a broad range of mitigation actions available to them during future incidents. That could mean having additional capacity on hand that you can throw at the problem, as well as having built in various knobs and switches. As an example from this AWS incident, part of the engineers’ response was to manually disable the health check behavior.

At 9:36 AM, engineers disabled automatic health check failovers for NLB, allowing all available healthy NLB nodes and backend targets to be brought back into service. This resolved the increased connection errors to affected load balancers.
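As a generic sketch of what that kind of knob can look like in your own systems (this is the shape of the idea, not AWS’s mechanism): a flag, ideally held in a dynamic configuration system so it can be flipped mid-incident, that stops the automation from acting on health check results.

    # Generic illustration of an operator-controllable kill switch for
    # health-check-driven failover; the flag name is hypothetical.
    AUTO_FAILOVER_ENABLED = True

    def on_health_check_failure(target):
        if not AUTO_FAILOVER_ENABLED:
            # Automation paused by operators: log the failure, keep the capacity.
            print(f"auto-failover disabled, keeping {target.name} in service")
            return
        target.in_service = False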

But having these sorts of knobs available isn’t enough. You need your responders to have the operational expertise necessary to know when to use them. More generally, if you want to get better at dealing with unforeseen failure modes, you need to invest in improving operational expertise, so that your incident responders are best positioned to make sense of the system behavior when faced with a completely novel situation.

The AWS write-up focuses on the robustness improvements, the work they are going to do to be better prepared to prevent a similar failure mode from happening in the future. But I can confidently predict that the next large-scale AWS outage is going to look very different from this one (although it will probably involve us-east-1). It’s not clear to me from the write-up that Amazon has learned the lesson of how important it is to prepare to be surprised.

Caveat promptor

In the wake of a major incident, you’ll occasionally hear a leader admonish the engineering organization that we need to be more careful in order to prevent such incidents from happening in the future. Ultimately, these sorts of admonishments don’t help improve reliability, because they miss an essential truth about the nature of work in organizations.

One of the big ideas from resilience engineering is the efficiency-thoroughness trade-off, also known as the ETTO Principle. The ETTO principle was first articulated by Erik Hollnagel, one of the founders of the field. The idea is that there’s a fundamental trade-off between how quickly we can complete tasks, and how thorough we can be when working on each individual task. Let’s consider the work of doing software development using AI agents through the lens of the ETTO principle.

Coding agents like Claude Code and OpenAI’s Codex are capable of automatically generating significant amounts of code. Honestly, it’s astonishing what these tools are capable of today. But like all LLMs, while they will always generate plausible-looking output, they do not always generate correct output. This means that a human needs to check an AI agent’s work to ensure that it’s generating code that’s up to snuff: a human has to review the code generated by the agent.

Screenshot of asking Claude about coding mistakes. Note the permanent warning at the bottom.

As any human software engineer will tell you, reviewing code is hard. It takes effort to understand code that you didn’t write. And larger changes are harder to review, which means that the more work that the agent does, the more work the human in the loop has to do to verify it.

If the code compiles and runs and all tests pass, how much time should the human spend on reviewing it? The ETTO principle tells us there’s a trade-off here: the incentives push software engineers towards completing development tasks more quickly, which is why we’re all adopting AI in the first place. After all, if it ends up taking just as long to review the AI-generated code as it would have taken the human reviewer to write it from scratch, then that defeats the purpose of automating the development task to begin with.

Maybe at first we’re skeptical and we spend more time reviewing the agent’s code. But, as we get better at working with the agents, and as the AI models themselves get better over time, we’ll figure out where the trouble spots of AI-generated code tend to pop up, and we’ll focus our code review effort accordingly. In essence, we’re riding the ETTO trade-off curve by figuring out how much review effort we should be putting in and where that effort should go.

Eventually, though, a problem with AI-generated code will slip through this human review process and will contribute to an incident. In the wake of this incident, the software engineers will be reminded that AI agents can make mistakes, and that they need to carefully review the generated code. But, as always, such reminders will do nothing to improve reliability. Because, while AI agents change the way that software developers work, they don’t eliminate the efficiency-thoroughness trade-off.

The illegible nature of software development talent

Here’s another blog post gathering some common threads from my recent reading. Today’s topic is the unassuming nature of talented software engineers.

The first thread was a tweet by Mitchell Hashimoto about how his best former colleagues are ones where you would have no signal about their skills based on their online activities or their working hours.

The second thread was a blog post written a week later by Nikunj Kothari titled The Quiet Ones: Working within the seams. In this post, Kothari wasn’t writing about a specific engineer per se, but rather a type of engineer, one whose contributions aren’t captured by the organization’s performance rubric (emphasis mine):

They don’t hit your L5 requirements because they’re doing L3 and L7 work simultaneously. Fixing the deploy pipeline while mentoring juniors. Answering customer emails while rebuilding core systems. They can’t be ranked because they do what nobody thought to measure.

The third thread was a LinkedIn post written yesterday by Gergely Orosz (emphasis mine).

One of the best staff-level engineers I worked with is on the market.

What you need to know about this person: every team he’s ever worked on, he did standout work, in every situation. He got stuff done with high quality, helped others, is not argumentative but is firm in holding up common sense and practicality, and is very curious and humble to top all of this off.

And still, from the outside, this engineer is near completely invisible.

He has no social media footprint. His LinkedIn lists his companies he worked at, and nothing else: no technologies, no projects, nothing. His GitHub is empty for the last 5 years, and has perhaps a dozen commits throughout the last 10.

The reason that Mitchell Hashimoto, Nikunj Kothari, and Gergely Orosz were able to identify these talented colleagues is that they worked directly with them. People making hiring decisions don’t have that luxury. For promotions, there are organizational constraints that push organizations to define a formal process with explicit criteria.

For both hiring and promotion, decision-makers have a legibility problem. This problem will inevitably lead to a focus on details that are easier to observe directly precisely because they are easier to observe directly. This is how fields like graphology and phrenology come about. But just because we can directly observe someone’s handwriting or the shapes of the bumps on their head doesn’t mean that those are effective techniques for learning something about that person’s personality.

I think it’s unlikely the industry will get much better at identifying and evaluating candidates anytime soon. And so I’m sure we’ll continue to see posts about the importance of your LinkedIn profile, or your GitHub, or your passion project. But you neglect at your peril the engineers who are working nine-to-five days at boring companies.

Two thought experiments

Here’s a thought experiment that John Allspaw related to me, in paraphrased form (John tells me that he will eventually capture this in a blog post of his own, at which time I’ll put a proper link).

Consider a small-ish tech company that has four engineering teams (A, B, C, D), where an engineer from Team A was involved in an incident (in John’s telling, the incident involves the Norway problem). In the wake of this incident, a post-incident write-up is completed, and the write-up does a good job of describing what happened. Next, imagine that the write-up is made available to teams A, B, and C, but not to team D. Nobody on team D is allowed to read the write-up, and nobody from the other teams is permitted to speak to team D about the details of the incident. The question is: are the members of team D at a disadvantage compared to the other teams?
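(In case the reference is unfamiliar: the Norway problem is the YAML quirk where the unquoted country code NO is parsed as the boolean false. A two-line illustration using PyYAML, which implements YAML 1.1 and so exhibits the behavior:)

    import yaml  # PyYAML resolves NO/no/No to booleans under YAML 1.1 rules

    print(yaml.safe_load("country: NO"))    # {'country': False}  <- the Norway problem
    print(yaml.safe_load("country: 'NO'"))  # {'country': 'NO'}   <- quoting avoids it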

The point of this scenario is to convey the intuition that, even though team D wasn’t involved in the incident, its members can still learn something from its details that makes them better engineers.

Switching gears for a moment, let’s talk about the new tools that are emerging under the label AI SRE. We’re now starting to see more tools that leverage LLMs to try to automate incident diagnosis and remediation, such as incident.io’s AI SRE product, Datadog’s Bits AI SRE, Resolve.ai (tagline: Your always-on AI SRE), and Cleric (tagline: AI SRE teammate). These tools work by reading in signals from your organization such as alerts, metrics, Slack messages, and source code repositories.

To effectively diagnose what’s happening in your system, you don’t just want to know what’s happening right now, but you also want to have access to historical data, since maybe there was a similar problem that happened, say, a year ago. While LLMs will have been trained with a lot of general knowledge about software systems, they won’t have been trained on the specific details of your system, and your system will fail in system-specific ways, which means that (I assume!) these AI SRE systems will work better if they have access to historical data about your system.

Here’s a second thought experiment, this one my own: imagine that you’ve adopted one of these AI SRE tools, but the only historical data about the system that you can feed the tool is the collection of your company’s post-incident write-ups. What kinds of details would be useful to an AI SRE tool in helping to troubleshoot future incidents? Perhaps we should encourage people to write their incident reports as if they will be consumed by an AI SRE tool that will use them to learn as much as possible about the work involved in diagnosing and remediating incidents in your company. I bet the humans who read them would learn more that way too.
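To make the thought experiment a bit more concrete, here’s a toy sketch of the retrieval pattern such a tool might use over your write-ups: embed each report, then pull up the most similar past incidents when a new one starts. This is my own illustration of the general pattern, not how any of the products mentioned above actually work, and the embed() function is a stand-in for a real embedding model.

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Stand-in embedding: a pseudo-random vector derived from the text.
        # A real tool would call an actual embedding model here.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.standard_normal(64)

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical corpus of past post-incident write-ups.
    writeups = {
        "2024-01 DNS outage": "Internal DNS records deleted by cleanup automation...",
        "2023-04 DynamoDB latency": "Latency spike caused by a hot partition...",
    }
    index = {title: embed(text) for title, text in writeups.items()}

    def most_relevant(incident_summary: str, k: int = 1) -> list[str]:
        q = embed(incident_summary)
        return sorted(index, key=lambda t: cosine(q, index[t]), reverse=True)[:k]

    # With a real embedding model, this query would surface the DNS write-up.
    print(most_relevant("customers report DNS resolution failures"))

The interesting question the thought experiment raises is what has to be in the write-up for this kind of retrieval to surface anything useful: the messy details of how responders diagnosed and mitigated the problem, not just a tidy root-cause summary.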

A statistic is as a statistic does

(With apologies to the screenwriters of Forrest Gump)

I’m going to use this post to pull together some related threads from different sources I’ve been reading lately.

Rationalization as discarding information

The first thread is from The Control Revolution by the late American historian and sociologist James Beniger, which was published back in the 1980s: I discovered this book because it was referenced in Neil Postman’s Technopoly.

Beniger references Max Weber’s concept of rationalization, which I had never heard of before. I’m used to the term “rationalization” as a pejorative term meaning something like “convincing yourself that your emotionally preferred option is the most rational option”, but that’s not how Weber meant it. Here’s Beniger, emphasis mine (from p15):

Although [rationalization] has a variety of meanings … most definitions are subsumed by one essential idea: control can be increased not only by increasing the capacity to process information but also by decreasing the amount of information to be processed.

In short, rationalization might be defined as the destruction or ignoring of information in order to facilitate its processing.

This idea of rationalization feels very close to James Scott’s idea of legibility, where organizations depend on simplified models of the system in order to manage it.

Decision making: humans versus statistical models

The second thread is from Benjamin Recht, a professor of computer science at UC Berkeley who does research in machine learning. Recht recently wrote a blog post called The Actuary’s Final Word about the performance of algorithms versus human experts at tasks such as medical diagnosis. The late American psychology professor Paul Meehl argued back in the 1950s that the research literature showed that statistical models outperformed human doctors when it came to diagnosing medical conditions. Meehl’s work even inspired the psychologist Daniel Kahneman, who famously studied heuristics and biases.

In his post, Recht asks, “what gives?” If we have known since the 1950s that statistical models do better than human experts, why do we still rely on human experts? Recht’s answer is that Meehl is cheating: he’s framing diagnostic problems as statistical ones.

Meehl’s argument is a trick. He builds a rigorous theory scaffolding to define a decision problem, but this deceptively makes the problem one where the actuarial tables will always be better. He first insists the decision problem be explicitly machine-legible. It must have a small number of precisely defined actions or outcomes. The actuarial method must be able to process the same data as the clinician. This narrows down the set of problems to those that are computable. We box people into working in the world of machines.

This trick fixes the game: if all that matters is statistical outcomes, then you’d better make decisions using statistical methods.

Once you frame a problem as being statistical in nature, then a statistical solution will be the optimal one, by definition. But, Recht argues, it’s not obvious that we should be using the average of the machine-legible outcomes in order to do our evaluation. As Recht puts it:

How we evaluate decisions determines which methods are best. That we should be trying to maximize the mean value of some clunky, quantized, performance indicator is not normatively determined. We don’t have to evaluate individual decisions by crude artificial averages. But if we do, the actuary will indeed, as Meehl dourly insists, have the final word.

Statistical averages and safe self-driving cars

I had Recht’s post in mind when reading Philip Koopman’s new book Embodied AI Safety. Koopman is a Professor Emeritus of Electrical Engineering at Carnegie Mellon University and a safety researcher who specializes in automotive safety. (I first learned about him from his work on the Toyota unintended acceleration cases from about ten years ago.)

I’ve just started his book, but these lines from the preface jumped out at me (emphasis mine):

In this book, I consider what happens once you … come to realize there is a lot more to safety than low enough statistical rates of harm.

[W]e have seen numerous incidents and even some loss events take place that illustrate “safer than human” as a statistical average does not provide everything that stakeholders will expect from an acceptably safe system. From blocking firetrucks, to a robotaxi tragically “forgetting” that it had just run over a pedestrian, to rashes of problems at emergency response scenes, real-world incidents have illustrated that a claim of significantly fewer crashes than human drivers does not put the safety question to rest.

More numbers than you can count

I’m also reading The Annotated Turing by Charles Petzold. I had tried to read Alan Turing’s original paper where he introduced the Turing machine, but found it difficult to understand, and Petzold provides a guided tour through the paper, which is exactly what I was looking for.

I’m currently in Chapter 2, where Petzold discusses the German mathematician Georg Cantor’s famous result that the real numbers are not countable: the size of the set of real numbers is larger than the size of the set of natural numbers. (In particular, it’s the transcendental numbers like π and e that make the reals uncountable: we can actually count the set of what are called the algebraic real numbers, which includes numbers like √2.)
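For readers who want the one-paragraph reason, here is the standard argument, sketched from memory rather than from Petzold’s exposition: the algebraic numbers are countable because each one is a root of a polynomial with integer coefficients, and there are only countably many such polynomials, each with finitely many roots.

    % Sketch: why the algebraic reals are countable while the reals are not.
    \begin{align*}
    \mathbb{A} &= \{\, x \in \mathbb{R} : p(x) = 0 \ \text{for some nonzero } p \in \mathbb{Z}[t] \,\}, \\
    \mathbb{Z}[t] &\ \text{is countable, since each polynomial is a finite tuple of integer coefficients}, \\
    \mathbb{A} &= \bigcup_{\substack{p \in \mathbb{Z}[t] \\ p \neq 0}} \{\, x \in \mathbb{R} : p(x) = 0 \,\}
        \ \text{is a countable union of finite sets, hence countable.}
    \end{align*}

Cantor’s diagonal argument shows that the reals as a whole are uncountable, so the transcendentals (the reals minus the algebraic numbers) must be where the uncountability lives.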

To tie this back to the original thread: rationalization feels to me like the process of focusing on only the algebraic numbers (which include the integers and rational numbers), even though most of the real numbers are transcendental.

Ignoring the messy stuff is tempting because it makes analyzing what’s left much easier. But we can’t forget that our end goal isn’t to simplify analysis, it’s to achieve insight. And that’s exactly why you don’t want to throw away the messy stuff.

Fixation: the ever-present risk during incident handling

Recent U.S. headlines have been dominated by school shootings. The bulk of the stories have been about the assassination of Charlie Kirk on the campus of Utah Valley University and the corresponding political fallout. On the same day, there was also a shooting at Evergreen High School in Colorado, where a student shot and injured two of his peers. This post isn’t about those school shootings, but rather, one that happened three years ago. On May 24, 2022, at Robb Elementary School in Uvalde, Texas, 19 students and 2 teachers were killed by a shooter who managed to make his way onto the campus.

Law enforcement were excoriated for how they responded to the Uvalde shooting incident: several were fired, and two were indicted on charges of child endangerment. On January 18, 2024, the Department of Justice released the report on their investigation of the shooting: Critical Incident Review: Active Shooter at Robb Elementary School. According to the report, there were multiple things that went wrong during the incident. Most significantly, the police originally believed that the shooter had barricaded himself in an empty classroom, when in fact the shooter was in a classroom with students. There were also communication issues that resulted in a common ground breakdown during the response. But what I want to talk about in this post is the keys.

The search for the keys

During the response to the Uvalde shooting, there was significant effort by the police on the scene to locate master keys to unlock rooms 111/112 (numbered p14, PDF p48, emphasis mine).

Phase III of the timeline begins at 12:22 p.m., immediately following four shots fired inside classrooms 111 and 112, and continues through the entry and ensuing gunfight at 12:49 p.m. During this time frame, officers on the north side of the hallway approach the classroom doors and stop short, presuming the doors are locked and that master keys are necessary.

The search for keys started before this, because room 109 was locked, and had children in it, and the police wanted to evacuate those children (numbered p13, PDF p48):

By approximately 12:09 p.m., all classrooms in the hallways have been evacuated and/or cleared except rooms 111/112, where the subject is, and room 109. Room 109 is found to be locked and believed to have children inside.

If you look at the Minute-by-Minute timeline section of the report (numbered p17, PDF p50) you’ll see the text “Events: Search for Keys” appear starting at 12:12 PM, all of the way until 12:45 PM.

The irony here is that the door to room 111/112 may have never been locked to begin with, as suggested by the following quote (numbered p15, PDF p48), emphasis mine:

At around 12:48 p.m., the entry team enters the room. Though the entry team puts the key in the door, turns the key, and opens it, pulling the door toward them, the [Critical Incident Review] Team concludes that the door is likely already unlocked, as the shooter gained entry through the door and it is unlikely that he locked it thereafter.

Ultimately, the report explicitly calls out how the search for the keys led to delays in response (numbered p xxviii, PDF p30):

Law enforcement arriving on scene searched for keys to open interior doors for more than 40 minutes. This was partly the cause of the significant delay in entering to eliminate the threat and stop the killing and dying inside classrooms 111 and 112. (Observation 10)

Fixation

In hindsight, we can see that the responders got something very important wrong in the moment: they were searching for keys for a door that probably wasn’t even locked. In this specific case, there appears to have been some communication-related confusion about the status of the door, as shown by the following (numbered p53, PDF p86):

The BORTAC [U.S. Border Patrol Tactical Unit] commander is on the phone, while simultaneously asking officers in the hallway about the status of the door to classrooms 111/112. UPD Sgt. 2 responds that they do not know if the door is locked. The BORTAC commander seems to hear that the door is locked, as they say on the phone, “They’re saying the door is locked.” UPD Sgt. 2 repeats that they do not know the status of the door.

More generally, this sort of problem is always going to happen during incidents: we are forever going to come to conclusions during an incident about what’s happening that turn out to be wrong in hindsight. We simply can’t avoid that, no matter how hard we try.

The problem I want to focus on here is not the unavoidable getting it wrong in the moment, but the actually-preventable problem of fixation. We “fixate” when we focus solely on one specific aspect of the situation. The problem here is not searching for keys, but searching for keys to the exclusion of other activities.

During complex incidents, the underlying problem is frequently not well understood, and so the success of a proposed mitigation strategy is almost never guaranteed. Maybe a rollback will fix things, but maybe it won’t! The way to overcome this problem is to pursue multiple strategies in parallel. One person or group focuses on rolling back a deployment that aligns in time, another looks for other types of changes that occurred around the same time, yet another investigates the logs, another looks into scaling up the amount of memory, someone else investigates traffic pattern changes, and so on. By pursuing multiple diagnostic and mitigation strategies in parallel, we reduce the risk of delaying the mitigation of the incident by blocking on the investigation of one avenue that may turn out to not be fruitful.

Doing this well requires diversity of perspectives and effective coordination. You’re more likely to come up with a broader set of options to pursue if your responders have a broader range of experiences. And the more avenues that you pursue, the more the coordination overhead increases, as you now need to keep the responders up to date about what’s going on in the different threads without overwhelming them with details.

Fixation is a pernicious risk because we’re more likely to fixate when we’re under stress. Since incidents are stressful by nature, they are effectively incubators of fixation. In the heat of the moment, it’s hard to take a breath, step back for a moment, understand what’s been tried already, and calmly ask about what the different possible options are. But the alternative is to tumble down the rabbit hole, searching for keys to a door that is already unlocked.

The hidden trade-offs of fine-grained progressive rollouts

A progressive rollout refers to the act of rolling out some new functionality gradually rather than all at once. This means that, when you initially deploy it, the change only impacts a fraction of your users. The idea behind a progressive rollout is to reduce the risk of a deployment by reducing the blast radius: if something goes wrong with the new thing during deployment, then the impact is much smaller than if you had deployed it all-at-once, to all of the traffic.

The impact of a bad rollout is shown in red

There are two general strategies for doing a progressive rollout. One strategy is coarse-grained, where you stage your deploys across domains: for example, deploying the new functionality to one geographic region at a time. The second strategy is fine-grained, where you define a ramp-up schedule (e.g., 1% of traffic to the new thing, then 5%, then 10%, etc.).

Note that the two strategies aren’t mutually exclusive: you can stage your deploy across regions, and within each region you can do a fine-grained ramp-up. And you can also think of it as a spectrum rather than two separate categories, since you can control the granularity. But I make the distinction here because I want to talk specifically about the fine-grained approach, where we use a ramp.

The ramp is clearly superior if you’re able to detect a problem during deployment, as shown in the diagram above. It’s a real win if you have automation that can detect a problem automatically based on a metric like error rate. The problem with the ramp is the scenario where you don’t detect that there’s a problem with the deployment.
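Here’s a sketch of what that kind of ramp automation can look like, in generic form; the step schedule, soak time, and threshold are illustrative values, and the three callbacks stand in for whatever your deployment and metrics systems actually provide.

    import time

    def progressive_rollout(set_traffic_fraction, get_canary_error_rate, rollback,
                            ramp_steps=(0.01, 0.05, 0.10, 0.25, 0.50, 1.00),
                            error_rate_threshold=0.02,
                            soak_seconds=300):
        """Ramp traffic to the new version, halting and rolling back if the
        canary error rate exceeds the threshold at any step."""
        for fraction in ramp_steps:
            set_traffic_fraction(fraction)
            time.sleep(soak_seconds)  # let metrics accumulate at this step
            if get_canary_error_rate() > error_rate_threshold:
                rollback()
                return f"halted and rolled back at {fraction:.0%}"
        return "rollout complete"

The happy path here is great: a bad change gets caught at 1% and rolled back before most users ever see it.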

My claim in this post is that if you don’t detect a problem with a fine-grained progressive rollout until after the rollout has completed, then it will tend to take you longer to diagnose what the problem is:

Paradoxically, progressive rollout can increase the blast radius by making after-the-fact diagnosis harder

Here’s my argument: once you know something is wrong with your system, but you don’t know what it is that has gone wrong, one of the things you’ll do is look at dashboard graphs for a signal that identifies when the problem started, such as an increase in error rate or request latency. When you do a fine-grained progressive rollout, if something has gone wrong, then the impact will get smeared out over time, and it will be harder to identify the rollout as the relevant change by looking at a dashboard. If you’re lucky, your observability tools will let you slice on the rollout dimension. This is why I like coarse-grained rollouts: if you have explicit deployment domains like geographical regions, then your observability tools will almost certainly let you slice the data based on those. Heck, you should have existing dashboards that already slice on them. But for fine-grained rollouts, you may not think to slice on a particular rollout dimension (especially if you’re rolling out a bunch of things at once, all of them doing fine-grained deployments), and you might not even be able to.
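As a small illustration of why the slicing dimension matters (a made-up example using pandas): if each request records which rollout cohort served it, the bad cohort jumps out of a group-by immediately; if it only records the region, the signal is diluted across every region.

    import pandas as pd

    # Made-up request log. The "cohort" column is the thing you may or may not
    # have thought to record for a fine-grained rollout.
    requests = pd.DataFrame({
        "region": ["us-east-1", "us-east-1", "us-west-2", "us-west-2"],
        "cohort": ["new", "old", "new", "old"],   # which version served the request
        "error":  [1, 0, 1, 0],
    })

    print(requests.groupby("region")["error"].mean())  # smeared out: 0.5 everywhere
    print(requests.groupby("cohort")["error"].mean())  # the new cohort stands out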

Whether fine-grained rollouts are a net win depends on a number of factors whose values are not obvious, including:

  • the probability you detect a problem during the rollout vs after the rollout
  • how much longer it takes to diagnose the problem if not caught during rollout
  • your cost model for an incident

On the third bullet: the above diagram implicitly assumes that impact to the business is linear with respect to time. However, it might be non-linear: an hour-long incident may turn out to be more than twice as expensive as a half-hour-long incident.
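One crude way to put those three factors together is a back-of-the-envelope expected-cost comparison. The numbers below are entirely made up; the point is only that the answer depends on inputs that most of us never actually estimate.

    # Crude expected-cost model for a rollout strategy. All inputs are
    # illustrative; plug in your own estimates.
    def expected_cost(p_detect_during_rollout, cost_if_caught_early, cost_if_caught_late):
        return (p_detect_during_rollout * cost_if_caught_early
                + (1 - p_detect_during_rollout) * cost_if_caught_late)

    # Fine-grained ramp: usually caught early and cheaply, but when it isn't,
    # diagnosis takes longer because the impact is smeared out over time.
    fine_grained = expected_cost(0.9, cost_if_caught_early=1, cost_if_caught_late=50)

    # All-at-once: never caught "early", but the change is obvious on the
    # dashboard, so diagnosis is fast.
    all_at_once = expected_cost(0.0, cost_if_caught_early=0, cost_if_caught_late=10)

    print(fine_grained, all_at_once)  # which is cheaper depends entirely on the inputs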

As someone who works in the reliability space, I’m acutely aware of the pain of incidents that take a long time to mitigate because they are difficult to diagnose. But I think that the trade-offs of fine-grained progressive rollouts are generally not recognized as such: it’s easy to imagine the benefits when problems are caught earlier, and harder to imagine the scenarios where the problem isn’t caught until later, and how much harder things get because of it.

Nothing fails like a history of success

The Axiom of Experience: the future will be like the past, because, in the past, the future was like the past. – Gerald M. Weinberg, An Introduction to General Systems Thinking

Last Friday, the San Francisco Bay Area Rapid Transit system (known as BART) experienced a multi-hour outage. Later that day, the BART Deputy General Manager released a memo about the outage with some technical details. The memo is brief, but I was honestly surprised to see this amount of detail in a public document that was released so quickly after an incident, especially from a public agency. What I want to focus on in this post is this line (emphasis mine):

Specifically, network engineers were performing a cutover to a new network switch at Montgomery St. Station… The team had already successfully performed eight similar cutovers earlier this year.

This reminded me of something I read in the Buildkite write-up of an incident that happened back in January of this year (emphasis mine):

Given the confidence gained by initial load testing and the migrations already performed over the past year, we wanted to allow customers to take advantage of their seasonal low periods to perform shard migrations, as a win-win. This caused us to discount the risk of performing migrations during a seasonal low period and what impacts might emerge when regular peak traffic returned.

It also reminded me of the 2022 Rogers Communications outage in Canada (emphasis mine, [redacted] comments in the original):

Rogers had assessed the risk for the initial change of this seven-phased process as “High”. Subsequent changes in the series were listed as “Medium.” [redacted] was “Low” risk based on the Rogers algorithm that weighs prior success into the risk assessment value. Thus, the risk value for [redacted] was reduced to “Low” based on successful completion of prior changes.

Whenever we make any sort of operational change, we have a mental model of the risk associated with the change. We view novel changes (I’ve never done something like this before!) as riskier than changes we’ve performed successfully multiple times in the past (I’ve done this plenty of times). I don’t think this sort of thinking is a fallacy: rather, it’s a heuristic, and it’s generally a pretty effective one! But, like all heuristics, it isn’t perfect. As shown in the examples above, the application of this heuristic can result in a miscalibrated mental model of the risk associated with a change.
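The Rogers example is notable because the heuristic is encoded in an actual algorithm. Here’s a toy version of what that kind of prior-success discounting can look like; this is my own invention for illustration, not the Rogers algorithm.

    # Toy risk scoring that discounts assessed risk based on prior successful
    # executions of "the same" change. Not the Rogers algorithm.
    def assessed_risk(base_risk: str, prior_successes: int) -> str:
        levels = ["Low", "Medium", "High"]
        idx = levels.index(base_risk)
        # Every two prior successes steps the assessed risk down one level.
        return levels[max(0, idx - prior_successes // 2)]

    print(assessed_risk("High", prior_successes=0))  # High
    print(assessed_risk("High", prior_successes=8))  # Low: the ninth change looks safe

The score goes down with each success, but the change that finally bites looks, in advance, exactly like the eight that didn’t.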

So, what’s the broader lesson? In practice, our risk models (implicit or otherwise) are always miscalibrated: a history of past successes is just one of multiple avenues that can lead us astray. Trying to achieve a perfect risk model is like trying to deploy software that is guaranteed to have zero bugs: it’s never going to happen. Instead, we need to accept the reality that, like our code, our models of risk will always have defects that are hidden from us until it’s too late. So we’d better get damned good at recovery.