There is no escape from Ashby’s Law

[V]ariety can destroy variety

W. Ross Ashby

There are more things in heaven and earth, Horatio,
Than are dreamt of in your philosophy.

Hamlet (1.5.167-8)

In his book An Introduction to Cybernetics, published in 1956, the English psychiatrist W. Ross Ashby proposed the Law of Requisite Variety. His original formulation isn’t easy to extract into a blog post, but the Principia Cybernetica website has a pretty good definition:

The larger the variety of actions available to a control system, the larger the variety of perturbations it is able to compensate.

Like many concepts in systems thinking, the Law of Requisite Variety is quite abstract, which makes it hard to get a handle on. Here’s a concrete example I find useful for thinking about it.

Imagine you’re trying to balance a broomstick on your hand.

This is an inherently unstable system, and so you have to keep moving your hand around to keep the broomstick balanced, but you can do it. You’re acting as a control system to keep the broomstick up.

If you constrain the broomstick to have only one degree of freedom, you have what’s called the inverted pendulum problem, which is a classic control systems problem. Here’s a diagram:

[Diagram: an inverted pendulum mounted on a cart, from the Wikipedia Inverted pendulum article]

The goal is to move the cart in order to keep the pendulum balanced. If you have sensor information that measures the tilt angle, θ, you can use that data to build a control system to push on the cart in order to keep the pendulum from falling over. Information about the tilt angle is part of the model that the control system has about the physical system it’s trying to control.
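
To make this concrete, here’s a minimal sketch of what such a controller might look like, in Python. The proportional-derivative form and the gain values are illustrative assumptions on my part, not something dictated by the problem:

    # A toy control law for the one-degree-of-freedom inverted pendulum.
    # It observes the tilt angle theta (radians) and its rate of change,
    # and returns a corrective push that grows with how far, and how
    # fast, the pendulum is tipping. The gains are made up.
    def control_force(theta, theta_dot, k_p=50.0, k_d=10.0):
        return k_p * theta + k_d * theta_dot

The specific gains don’t matter here. What matters is that the controller’s model of the world is just one tilt angle and its rate of change: nothing else.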

Now imagine that the pendulum isn’t constrained to a single degree of freedom, but instead has two: this is the situation when you’re balancing a broom on your hand. There are now two tilt angles to worry about: the broom can fall towards or away from your body, or it can fall to the left or right.

You can’t use the original inverted pendulum control system to solve this problem, because it only models one of the tilt angles. It’s as if you could only move your hand forward and back, but not left or right: the control system can’t correct for the other angle, and the pendulum will fall over.

The problem is that the new system can vary in ways that the control system wasn’t designed to handle: it can get into states that aren’t modeled by the original system.
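
A toy simulation makes this visible. In the sketch below (simplified dynamics and made-up constants, purely for illustration), the same one-angle control law runs while the broom is also free to tilt along a second, unmodeled axis: the modeled angle is driven back towards zero, while the unmodeled one grows until the broom has fallen.

    # Toy simulation: a controller that only models theta_x, applied to a
    # broom that can also tilt along theta_y.
    import math

    g, length, dt = 9.8, 1.0, 0.01        # gravity, broom length, time step
    theta_x, omega_x = 0.05, 0.0          # modeled tilt angle and rate
    theta_y, omega_y = 0.05, 0.0          # unmodeled tilt angle and rate

    for step in range(1000):
        # The modeled axis gets a corrective push (same made-up PD law as above).
        force = 50.0 * theta_x + 10.0 * omega_x
        accel_x = (g / length) * math.sin(theta_x) - force
        # The unmodeled axis gets no correction at all; gravity wins.
        accel_y = (g / length) * math.sin(theta_y)
        omega_x += accel_x * dt
        theta_x += omega_x * dt
        omega_y += accel_y * dt
        theta_y += omega_y * dt
        if abs(theta_y) > math.pi / 2:    # the broom has fallen past horizontal
            break

    print(f"after {step * dt:.2f}s: modeled angle {theta_x:+.3f} rad, "
          f"unmodeled angle {theta_y:+.3f} rad")

The controller isn’t doing anything wrong on the axis it knows about; it simply has no variety to spend on the axis it doesn’t.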

This is what the Law of Requisite Variety is about: if you want to build a control system, it needs to be able to model every possible state that the system being controlled can get into. The state space of the controller has to be at least as large as the state space of the system being controlled. If it isn’t, then the controlled system can get into states that the controller won’t be able to deal with.
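
For readers who want the formal version: Ashby measured variety as the (logarithm of the) number of distinguishable states, and the law is often summarized as an entropy inequality along the following lines. This is a common paraphrase rather than a quote from Ashby, and fuller statements add a term for passive buffering:

    % D: the disturbances the environment can throw at the system
    % R: the responses available to the regulator (the control system)
    % O: the outcomes we are trying to keep under control
    H(O) \;\ge\; H(D) - H(R)

The outcomes can only be made less varied than the disturbances to the extent that the regulator has variety of its own to spend: the one-angle controller above simply doesn’t have enough variety for a two-angle broom.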

Bringing this into the software world: when we build infrastructure software, we’re invariably building control systems. These control systems can only handle the situations they were designed for. We invariably run into trouble when the systems we build get into states that the designer never imagined happening. A fun example is a pathological traffic pattern that nobody anticipated.

The fundamental problem with building software control systems is that we humans aren’t capable of imagining all of the possible states that the systems being controlled can get into. In particular, we can’t foresee the changes that people will make in the future, changes that will create new states we could never have anticipated needing to handle. And so our control systems will invariably be inadequate, because they won’t be able to handle these situations. The variety of the world exceeds the variety our control systems are designed to handle.

Fortunately, we humans are capable of conceiving of a much wider variety of system states than the systems we build. That’s why, when our software-based control systems fail and the humans get paged in, the humans are eventually able to make sense of what state the system has gotten itself into and put things right.

Even we humans are not exempt from Ashby’s Law. But we can revise our (mental) models of the system in ways that our software-based control systems cannot, and that’s why we can deal effectively with incidents: because we can update our models, we can adapt where software cannot.

The downsides of expertise

I’m a strong advocate of the value of expertise to a software organization. I’d even go so far as to say that expertise is a panacea.

Despite the value of expertise, there are two significant obstacles that make it difficult for organizations to leverage expertise as effectively as possible.

Expertise is expensive to acquire

Expertise is expensive for an organization to acquire. Becoming an expert requires experience, which takes time and effort. An organization can hire for some forms of expertise, but no organization can hire someone who is already an expert in the org’s socio-technical system. And a lot of the value for an organization lies in expertise in the behaviors of the local system.

You can transfer expertise from one person to another, but that also takes time and effort, and you need to put mechanisms in place to support it. Apprenticeship and coaching are two traditional methods of expertise transfer, but they aren’t typically present in software organizations. I’m an advocate of learning from incidents as a medium for skill transfer, but that requires its own expertise, namely in doing incident investigation in a way that supports skill transfer.

Alas, you can’t transfer expertise from a person to a tool, as John Allspaw notes, so we can’t take a shortcut by acquiring sophisticated tooling. AI researchers tried building such expert systems in the 1980s, but these efforts failed.

Concentrated expertise is dangerous

Organizations tend to foster local experts: a small number of individuals who have a lot of expertise with aspects of the local system. These people are enormously valuable to organizations (they’re often very helpful during incidents), but they represent single points of failure. If these individuals happen to be out of the office during a critical incident, or if they leave the company, it can be very costly to the organization. My former colleague Nora Jones calls this the islands of knowledge problem.

What’s worse, a high concentration of expertise can create a positive feedback loop. If there’s a local expert, then other individuals may use the expert as a crutch, relying on the expert to solve the harder problems and never putting in the effort to develop their own expertise.

To avoid this problem, we need to develop expertise in more people within the organization, which, as mentioned earlier, is expensive.

I continue to believe that it’s worth it.

Getting into people’s heads: how and why to fake it

With apologies to David Parnas and Paul Clements.

To truly understand how an incident unfolded, you need to experience the incident from the perspectives of the people who were directly involved in it: to see what they saw, think what they thought, and feel what they felt. Only then can you understand how they came to their conclusions and made their decisions.

The problem is that we can’t ever do that. We simply don’t have direct access to the minds of the people who were involved. We can try to get at some of this information: we can interview them as soon as possible after the incident and ask the kinds of questions that are most likely to elicit information about what they remember seeing, thinking, or feeling. But this account will always be inadequate: memories are fallible, interviewing time is finite, and we’ll never end up asking all of the right questions anyway.

Even though we can’t really capture the first-hand experiences of the people involved in the incident, I still think it’s a good idea to write the narrative as if we are able to do so. When I’m writing the narrative description, I try to write each section from the perspective of one person that was directly involved, describing things from that person’s point of view, rather than taking an omniscient third-person perspective.

The information in these first-hand accounts is based on my interviews with the people involved, and they review the accounts for accuracy, so it isn’t a complete fiction. But neither is it ever really the truth of what happened in the moment, because that information is forever inaccessible.

Instead, the value of this sort of first-hand narrative account is that it forces the reader to experience the incident from the perspectives of the individuals involved. The only way to make sense of an incident is to try to understand the world as seen from those local perspectives, and writing the narrative this way encourages the reader to do exactly that. It’s a small lie that serves a greater truth.

Conveying confusion without confusing the reader

Confusion is a hallmark of a complex incident. In the moment, we know something is wrong, but we struggle to make sense of the different signals that we’re seeing. We don’t understand the underlying failure mode.

After the incident is over and the engineers have had a chance to dig into what happened, these confusing signals make sense in retrospect. We find out about the bug or inadvertent config change or unexpected data corruption that led to the symptoms we saw during the incident.

When writing up the narrative, the incident investigator must choose whether to inform the reader in advance about the details of the failure mode, or to withhold this info until the point in time in the narrative when the engineers involved understood what was happening.

I prefer the first approach: giving the reader information about the failure mode details in the narrative before the actors involved in the incident have that information. This enables the reader to make sense of the strange, anomalous signals in a way that the engineers in the moment were not able to.

I do this because, as a reader, I don’t enjoy the feeling of being confused: I’m not looking for a mystery when I read a writeup. If I’m reading about a series of confusing signals that engineers are looking at (e.g., traffic spikes, RPC errors), and I can’t make sense of them either, I tend to get bored. It’s just a mess of confusion.

On the other hand, if I know why these signals are happening, but the characters in the story don’t know, then that is more effective in creating tension in my mind. I want to read on to resolve the tension, to figure out how the engineers ended up diagnosing the problem.

When informing the reader about the failure mode in advance, the challenge is to avoid infecting the reader with hindsight bias. If the reader thinks, “the problem was obviously X. How could they not see it?”, then I’ve failed in the writeup. What I try to do is put the reader into the head of the people involved as much as possible: to try to convey the confusion they were experiencing in the moment, and the source of that confusion.

By enabling the reader to identify with the people involved, you can communicate to the reader how confusing the situation was to the people involved, without directly inflicting that same confusion upon them.

Climbing the mountain

When I was in high school, I attended a Jewish weekend retreat in the Laurentian Mountains of Quebec1. While most of the attendees were secular Jews like me, one of them was a Chabadnik, and several of us got into a discussion about Judaism and scholarship.

One of the secular Jews lamented that it was an insurmountable task to properly understand Judaism: there were just too many texts you had to study. If we were lucky, we knew a little Hebrew, but certainly not enough to study the Hebrew texts (let alone the texts in other languages!).

The Chabadnik offered the following metaphor. Imagine a mountain with an impossibly high peak. Studying Judaism is like climbing the mountain. People who have studied more material will be higher up on the mountain than those who haven’t studied as much. However, regardless of your current elevation, you can always climb higher than where you are by studying material appropriate for your level.

So it is with learning more about resilience engineering. Fortunately for those who seek to learn more about resilience, it’s a much younger field than Judaism: you need only contend with decades of scholarship, rather than centuries. Still, being confronted with decades of research papers can be intimidating. But don’t let that stop you from trying to learn just a little bit more than you currently know.

I once heard Richard Cook say that the most effective way to get better at analyzing incidents was to first study how incidents happen in a field other than your own. Most of us will never have the opportunity to devote years of study to a different field! On the other hand, he also said that a ten-to-fifteen-minute huddle after an incident to discuss what happened can be a very effective learning mechanism.

You don’t need to read mountains of papers to start getting better at learning from incidents. It can be as simple as asking different kinds of questions in retrospectives (e.g., “When you saw the alert go off, what did you do next?”). One of the things I really like about resilience engineering is how it values expertise borne out of experience. I think you’ll learn more by trying out different questions to ask in incident retros than you will from reading the papers. (Although reading the papers will eventually help you ask better questions).

Diane Vaughan, a sociology researcher, spent six years studying a single incident! That’s a standard that none of us can hope to meet. And that means we won’t obtain the depth of insight that Vaughan was able to in her investigation, but that’s ok.

Don’t be intimidated by the height of the mountain. Don’t worry about reaching the top (there isn’t one), or even reaching a certain height. The important thing is to ascend: to work to climb higher than you currently are.

1 I attended a Jewish elementary school, but a public high school. In high school, my parents encouraged me to attend these sorts of programs to maintain some semblance of Jewish identity.

Taking a risk versus running a risk

In the wake of an incident, we can often identify a risky action that was taken by an engineer that contributed to the incident. However, actions that look risky to us in retrospect didn’t necessarily look risky to the engineer who took the action in the moment. In the SINTEF A17034 report on Organizational Accidents and Resilient Organisations: Six Perspectives, the authors draw a distinction between taking a risk and running a risk.

When you take a risk, you are taking an action that you know to be risky. When an engineer says they are YOLO’ing a change, they’re taking a risk.

On the other hand, running a risk refers to taking a course of action that is not believed to be risky. These are the kinds of actions that we only categorize as risky in hindsight, when we have more information than the engineer who took the course of action in the moment.

Sometimes we deliberately take a risk because we believe there is greater risk if we don’t take action. But running a risk is never deliberate, because we didn’t know the risk was there in the first place.

Stories as a vehicle for learning from the experience of others

Senior software engineering positions command higher salaries than junior positions. The industry believes (correctly, I think) that engineers become more effective as they accumulate experience, and that perception is reflected in market salaries.

Learning from direct experience is powerful, but there’s a limit to the rate at which we can learn from our own experiences. Certainly, we learn more from some experiences than others; we joke about “ten years of experience” versus “one year of experience ten times over”, and we use scars as a metaphor for these sometimes unpleasant but more impactful experiences. But there are only so many hours in a day, and we may not always be…errr… lucky enough to be exposed to many high-value learning opportunities.

There’s another resource we can draw on besides our own direct experience, and that’s the experiences of peers in our organization. Learning from the experiences of others isn’t as effective as learning directly from our own experience. But, if the organization you work in is large enough, then high-value learning opportunities are probably happening around you all of the time.

Given that these opportunities abound, the challenge is: how can we learn effectively from the experiences of others? One way that humans learn from others is through telling stories.

Storytelling enables other people to experience events by proxy. When we tell a story well, we run a simulation of the events in the mind of the listener. This kind of experience is not as effective as the first-hand kind, but when done well it still leaves an impression on the listener. In addition, storytelling scales very well: we can write stories down, or record them, and then publish them across the organization.

A second challenge is: what stories should we tell? It turns out that incidents make great stories. You’ll often hear engineers tell tales of incidents to each other. We sometimes call these war stories, horror stories (the term I prefer), or ghost stories.

Once we recognize the opportunity of using incidents as a mechanism for second-hand-experiential-learning-through-storytelling, this shifts our thinking about the role and structure of an incident writeup. We want to tell a story that captures the experiences of the people involved in the incident, so that readers can imagine what it was like, in the moment, when the alerts were going off and confusion reigned.

When we want to use incidents for second-hand experiential learning, the focus of an incident investigation shifts away from action items as the primary outcome and towards the narrative: the story we want to tell.

When we hire for senior positions, we don’t ask candidates to submit a list of action items for tasks that could improve our system. We believe the value of their experience lies in them being able to solve novel problems in the future. Similarly, I don’t think we should view incident investigations as being primarily about generating action items. If, instead, we view them as an opportunity to learn collectively from the experiences of individuals, then more of us will get better at solving novel problems in the future.

In service of the narrative

The most important part of an operational surprise writeup is the narrative description. That section of the writeup tells the story of how the surprise unfolded over time, taking into account the perspectives of the different people who were involved. If you want your readers to learn about how work is done in your organization, you need to write effective narrative descriptions.

Narrative descriptions need to be engaging. The best ones are vivid and textured: they may be quite long, but they keep people reading until the end. A writeup with a boring narrative has no value, because nobody will read through it.

Writing engaging narrative descriptions is hard. Writing is a skill, and like all skills, the only way to get better is through practice. That being said, there are some strategies that I try to keep in mind to make my narrative descriptions more effective. In this blog post, I cover a few of them.

Goal is learning, not truth or completeness

At a high level, it’s important to keep in mind what you’re trying to achieve with your writeup. I’m interested in maximizing how much the reader will learn from the writeup. That goal should drive decisions you make on what to include and how to word things.

I’m not trying to get at the truth, because the truth is inaccessible. I’ll never know what really happened, and that’s ok, because my goal of learning doesn’t require perfect knowledge of the history of the world.

I’m also not trying to be complete; I don’t try to convey every single bit of data in the narrative that I’ve been able to capture in an investigation. For example, I don’t include every single exchange of a chat conversation in a narrative.

Because of my academic background, this is an instinct I have to fight: academics tend towards being as complete as possible when writing things up. However, including an inappropriate level of detail makes the narrative harder to read.

I do include a “raw timeline” section in the appendix with many of the low-level events that were captured (chat transcripts, metrics data, timestamps of relevant production changes). These details don’t all make it into the narrative description, but they’re available if the reader wants to consult them.

Treat the people involved like people

Effective fiction authors create characters that you can empathize with. They convey what the characters see, what they feel, what they have experienced, what motivates them. If a character in a movie or a novel makes a decision that doesn’t seem to make sense to us, we get frustrated. We consider that lousy writing.

In a narrative description, you have to describe actions taken by people. These aren’t fictional characters, they are real people; they are the colleagues that you work alongside every day. However, like the characters in a good piece of fiction, your colleagues also make decisions based on what they see, what they feel, what they have experienced, and what motivates them.

The narrative must answer this question for the reader: How did it make sense for the people involved to come to their conclusions and take their actions? In order for your reader to learn this, you need to convey details such as what they were seeing, what they were thinking, what they knew and what they did not know. You want to try to tell the part of the narrative that describes their actions from their perspective.

One of the challenges is that you won’t have easy access to these details. That’s why an important precursor to doing a writeup is to talk with the people involved to try to get as much information as you can about how the world looked from their eyes as events were unfolding. Doing that well is too big a topic for this post.

Start with some background

I try never to start my narratives with “An alert fired for …”. There’s always a history behind the contributing factors that enabled the surprise. For the purposes of the writeup, that means starting the narrative further back in time, to tell the reader some of the relevant history.

You won’t be able to describe the historical information with the same level of vividness as the unfolding events, because it happened much further back in time, and the tempo of this part of the narrative is different from the part that describes the unfolding events. But that’s ok.

It’s also useful to provide additional context about how the overall system works, to help readers who may not be as familiar with the specific details of the systems involved. For example, you may have to explain what the various services involved actually do. Don’t be shy about adding this detail, since people who already know it will just skim this part. Adding these details also makes these writeups useful for new hires to learn how the system works.

Make explicit how details serve the narrative

If you provide a detail in your narrative description, it has to be obvious to the reader why you are telling them about it. For example, if you write that an alert fired eight hours before the surprise, you need to make it obvious to the reader why this alert is relevant to the narrative. There may be very different reasons:

  • This alert had important information about the nature of the operational surprise. However, it was an email-only alert, not a paging one. And it was one of many email alerts that had fired, and those alerts are typically not actionable. It was ignored, just like the other ones.
  • The alert was a paging alert, and the on-call who engaged concluded that it was just noise. In fact, it was noise. However, when the real alert fired eight hours later, the symptom was the same, and the on-call assumed it was another example of noise.
  • The alert was a paging alert. The particular alert was unrelated to the surprise that would happen later, but it woke the on-call up in the middle of the night. They were quite tired the next day, when the surprise happened.

If you just say, “an alert fired earlier” without more detail, the reader doesn’t know why they should care about this detail in the writeup, which makes the writing less engaging. See also: The Law of Conservation of Detail.

Write in the present tense

This is just a stylistic choice of mine, but I find that if I write narratives in the present tense (e.g., “When X looks at the Y dashboard, she notices that signal Z has dropped…”), it reinforces the idea that the narrative is about understanding events as they were unfolding.

Use retrospective knowledge for foreshadowing

Unbeknownst to the princess but knownst to us, danger lurks in the stars above…

Opening crawl from the movie “Spaceballs”

When you are writing up a narrative description, you know a lot more about what happened than the people who were directly involved in the operational surprise as it was happening.

You can use this knowledge to make the writing more compelling through foreshadowing. You know about the consequences of actions that the people in the narrative don’t.

To help prevent the reader from falling into the trap of hindsight bias, make it as explicit as possible in your writeup that the knowledge the reader has is not knowledge that the people involved had at the time. For example:

At 11:39, X takes action Y. What X does not know is that, six months earlier, Z had deployed a change to service Q, which changes what happens when action Y is taken.

This type of foreshadowing is helpful for two reasons:

  • It pushes against hindsight bias by calling out explicitly how it came to be that a person involved had a mental model that deviated from reality.
  • It creates “what happened next?” tension in the reader, encouraging them to read on.

Conclusion

We all love stories. We learn best from our own direct experiences, but storytelling provides an opportunity for us to learn from the experiences of others. Writing effective narratives is a kind of superpower because it gives you the ability to convey enormous amounts of detail to a large number of people. It’s a skill worth developing.

The problem with counterfactuals

Incidents make us feel uncomfortable. They remind us that we don’t have control, that the system can behave in ways that we didn’t expect. When an incident happens, the world doesn’t make sense.

A natural reaction to an incident is an effort to identify how the incident could have been avoided. The term for this type of effort is counterfactual reasoning. It refers to thinking about how, if the people involved had taken different actions, events would have unfolded differently. Here are two examples of counterfactuals:

  • If the engineer who made the code change had written a test for feature X, then the bug would never have made its way into production.
  • If the team members had paid attention to the email alerts that had fired, they would have diagnosed the problem much sooner.

Counterfactual reasoning is comforting because it restores the feeling that the world makes sense. What felt like a surprise is, in fact, perfectly comprehensible. What’s more, it could even have been avoided, if only we had taken the right actions and paid attention to the right signals.

While counterfactual reasoning helps restore our feeling that the world makes sense, the problem with it is that it doesn’t help us get better at avoiding or dealing with future incidents. The reason it doesn’t help is that counterfactual reasoning gives us an excuse to avoid the messy problem of understanding how we missed those obvious-in-retrospect actions and signals in the first place.

It’s one thing to say “they should have written a test for feature X”. It’s another thing to understand the rationale behind the engineer not writing that test. For example:

  • Did they believe that this functionality was already tested in the existing test suite?
  • Were they not aware of the existence of the feature that failed?
  • Were they under time pressure to get the code pushed into production (possibly to mitigate an ongoing issue)?

Similarly, saying “they should have paid closer attention to the email alerts” means you might miss the fact that the email alert in question isn’t actionable 90% of the time, and so the team has conditioned themselves to ignore it.

To get better at avoiding or mitigating future incidents, you need to understand the conditions that enabled past incidents to occur. Counterfactual reasoning is actively harmful for this, because it circumvents inquiry into those conditions. It replaces “what were the circumstances that led to person X taking action Y” with “person X should have done Z instead of Y”.

Counterfactual reasoning is only useful if you have a time machine and can go back to prevent the incident that just happened. For the rest of us who don’t have time machines, counterfactual reasoning helps us feel better, but it doesn’t make us better at engineering and operating our systems. Instead, it actively prevents us from getting better.

Don’t ask “why didn’t they do Y instead of X?” Instead, ask, “how was it that doing X made sense to them at the time?” You’ll learn a lot more about the world if you ask questions about what did happen instead of focusing on what didn’t.