Missing the forest for the trees: the component substitution fallacy

Here’s a brief excerpt from a talk by David Woods on what he calls the component substitution fallacy (emphasis mine):

Everybody is continuing to commit the component substitution fallacy.

Now, remember, everything has finite resources, and you have to make trade-offs. You’re under resource pressure, you’re under profitability pressure, you’re under schedule pressure. Those are real, they never go to zero.

So, as you develop things, you make trade offs, you prioritize some things over other things. What that means is that when a problem happens, it will reveal component or subsystem weaknesses. The trade offs and assumptions and resource decisions you made guarantee there are component weaknesses. We can’t afford to perfect all components.

Yes, improving them is great and that can be a lesson afterwards, but if you substitute component weaknesses for the systems-level understanding of what was driving the event … at a more fundamental level of understanding, you’re missing the real lessons.

Seeing component weaknesses is a nice way to block seeing the system properties, especially because this justifies a minimal response and avoids any struggle that systemic changes require.

Woods on Shock and Resilience (25:04 mark)

Whenever an incident happens, we’re always able to point to different components in our system and say “there was the problem!” There was a microservice that didn’t handle a certain type of error gracefully, or there was bad data that had somehow gotten past our validation checks, or a particular cluster was under-resourced because it hadn’t been configured properly, and so on.

These are real issues that manifested as an outage, and they are worth spending the time to identify and follow up on. But these problems in isolation never tell the whole story of how the incident actually happened. As Woods explains in the excerpt of his talk above, because of the constraints we work under, we simply don’t have the time to harden the software we work on to the point where these problems don’t happen anymore. It’s just too expensive. And so, we make tradeoffs, we make judgments about where to best spend our time as we build, test, and roll out our stuff. The riskier we perceive a change to be, the more effort we’ll spend on validating and rolling it out.

And so, if we focus only on issues with individual components, there’s so much we miss about the nature of failure in our systems. We miss looking at the unexpected interactions between the components that enabled the failure to happen. We miss how the organization’s prioritization decisions enabled the incident in the first place. We also don’t ask questions like “if we are going to do follow-up work to fix the component problems revealed by this incident, what are the things that we won’t be doing because we’re prioritizing this instead?” or “what new types of unexpected interactions might we be creating by making these changes?” Not to mention incident-handling questions like “how did we figure out something was wrong here?”

In the wake of an incident, if we focus only on the weaknesses of individual components then we won’t see the systemic issues. And it’s the systemic issues that will continue to bite us long after we’ve implemented all of those follow-up action items. We’ll never see the forest for the trees.

Some thoughts on my presentation style

I recently gave a talk at SREcon Americas. The talk isn’t posted yet (edit: the talk has been posted) but you can find my slides here.

At this point in my career, I’ve developed a presentation style that I’m pretty happy with. This post captures some of my reflections on how I put together a slide deck.

One slide, one concept

My goal is to have every slide convey exactly one concept. (I think this aspect of my style was influenced by watching a Lawrence Lessig talk online).

As a consequence, my slides are never a list of bullets. My ideal slide is a single sentence:

Because there’s only one sentence, audience members don’t need to scan the slide to figure out which part is relevant to what I’m currently saying. That means less work for them.

Sometimes I break up a sentence across slides:

I’ll often use a sentence fragment rather than a full sentence.

If I really want to emphasize a concept, I’ll use an image as a backdrop.

Note that this style means that my slides are effectively incomprehensible as standalone documents: they only make sense in the context of my talks.

Giving hints about talk structure

I lose interest in a talk if I can’t tell where the speaker is going with it, and so I try to frame my talk at the beginning: I give people an overview of how it’s structured. This is the “tell them what you’re going to tell them” part of the dictum: “tell them what you’re going to tell them, tell them, tell them what you told them.”

Note that this framing slide is the one place where I break my “no bullets” rule. When I summarize at the end of my talk, I dedicate a separate slide to each chunk. I also reword slightly.

Lots of little chunks

When I listen to a talk, my attention invariably wanders in and out, and I assume that’s happening to everyone in the audience as well. To deal with this, I try to structure my talk into many small, self-contained “chunks” of content, so that if somebody loses the thread during one chunk because of a distraction, then they can re-orient themselves when the next chunk comes along.

As an example, in the part of my talk that deals with change, I have one chunk about code freezes:

And then, a few slides later, I have another chunk which is a vignette about an SRE who identified the source of a breaking change on a hunch.

This effectively reduces the “blast radius” of losing the thread of the talk.

Hand-drawn diagrams

I draw all of my own diagrams with the GoodNotes app on an iPad with an Apple Pencil. I use a Frunsi Drawing Tablet Stand to make it easier to draw on the iPad.

I was inspired to draw my diagrams after reading Dan Roam (especially his book The Back of the Napkin).

I’ve always thought of myself as being terrible at drawing, and so it took me a long time to get to a point where I would freehand draw diagrams instead of using software.

(Yes, tools like Excalidraw can do sketch-y style diagrams, but I still prefer doing them by hand).

It turns out that very simple doodles are surprisingly effective at conveying concepts. And the rawness of the diagrams works to their advantage.

The hardest part for me was to learn how to sketch faces. I found the book Pencil Me In by Christina Wodtke very helpful for learning different ways to do this.

Sketchy text style

I use a tool called Deckset for generating my slides from Markdown, and it has a style called sketchnote which gives the text that sketch-y vibe.


The sketchy-ness is most apparent in the cover slide

I really like this unpolished style. It makes the slides feel more intimate, and it meshes well with the diagrams that I draw.

Evoke a reaction

Back when I was an academic, my conference talks were on research papers that I had co-authored, and the advice I was given was “the talk is an ad for the paper”: don’t aim for completeness in the talk; instead, get people interested enough in the material that they’ll read the paper.

Nowadays, my talks aren’t based on papers, but the corresponding advice I’d give is “evoke a reaction”. There’s a limit to the quantity of information that you can convey in a talk, so instead go for conveying a reaction. Get people to connect emotionally in some way, so that they’ll care enough about the material that they’ll follow up to learn more some time in the future. I’d rather my audience leave angry with what I presented than leave bored and retain nothing.

In practice, that means that I’m willing to risk over-generalizing for effect.

One sentence slides aren’t going to be effective at conveying nuance, but they can be very effective at evoking reactions.

Making peace with the imperfect nature of mental models

We all carry with us in our heads models about how the world works, which we colloquially refer to as mental models. These models are always incomplete, often stale, and sometimes they’re just plain wrong.

For those of us doing operations work, our mental models include our understanding of how the different parts of the system work. Incorrect mental models are always a factor in incidents: incidents are always surprises, and surprises are always discrepancies between our mental models and reality.

There are two things that are important to remember. First, our mental models are usually good enough for us to do our operations work effectively. Our human brains are actually surprisingly good at enabling us to do this stuff. Second, while a stale mental model is a serious risk, none of us have the time to constantly verify that all of our mental models are up to date. That would be the equivalent of popping up an “are you sure?” modal dialog box before taking any action. (“Are you sure that pipeline that always deploys to the test environment still deploys to test first?”)

Instead, because our time and attention are limited, we have to get good at spotting cues that indicate our models have gotten stale or are incorrect. Since we won’t always get these cues, our mental models will inevitably go out of date. That’s just part of the job when you work in a dynamic environment. And we all work in dynamic environments.

You can’t see what you don’t have a frame of reference for

In my recent talk at the LFI conference, I referenced this quote from Karl Weick:

Quote from Karl E. Weick: "Seeing what one believes and not seeing that for which one has no beliefs are central to sensemaking".

I was reminded of that quote when reading a piece by the journalist Yair Rosenberg:

Rosenberg notes that this particular story hasn’t gotten much widespread press, and his theory is that the attacks, as well as other uncovered antisemitic events, don’t fit neatly into the established narrative frames of reference.

Such is the problem with events that don’t fit neatly into a bucket: we can’t discuss them effectively because they don’t fit into one of our pre-existing categories.

Good category, bad category (or: tag, don’t bucket)

The baby, assailed by eyes, ears, nose, skin, and entrails at once, feels it all as one great blooming, buzzing confusion; and to the very end of life, our location of all things in one space is due to the fact that the original extents or bignesses of all the sensations which came to our notice at once, coalesced together into one and the same space.

William James, The Principles of Psychology (1890)

I recently gave a talk at the Learning from Incidents in Software conference. On the one hand, I mentioned the importance of finding patterns in incidents:

But I also had some… unkind words about categorizing incidents.

We humans need categories to make sense of the world, to slice it up into chunks that we can handle cognitively. Otherwise, the world would just be, as James put it in the quote above, one great blooming, buzzing confusion. So, categorization is essential to humans functioning in the world. In particular, if we want to identify meaningful patterns in incidents, we need to do categorization work.

But there are different techniques we can use to categorize incidents, and some techniques are more productive than others.

The buckets approach

An incident must be placed in exactly one bucket

One technique is what I’ll call the buckets approach of categorization. That’s when you define a set of categories up front, and then you assign each incident that happens to exactly one bucket. For example, you might have categories such as:

  • bug (new)
  • bug (latent)
  • deployment problem
  • third party

I have seen two issues with the bucketing approach. The first is that I’ve never actually seen it yield any additional insight. It can’t reveal new patterns because the patterns have already been predefined as the buckets. The best it can do is act as a filter for drilling down into a subset of related incidents in more detail. There’s some genuine value in that, but in practice I’ve rarely seen anybody actually do the harder work of examining those subsets.

The second issue is that incidents, being messy things, often don’t fall cleanly into exactly one bucket. Sometimes they fall into multiple buckets, sometimes they don’t fall into any, and sometimes it’s just really unclear. For example, an incident may involve both a new bug and a deployment problem (as anyone who has accidentally deployed a bug to production and then gotten into even more trouble when trying to roll things back can tell you). By requiring you to put each incident into exactly one bucket, the bucket approach forces you to discard information that is potentially useful for identifying patterns. It also inevitably leads to arguments about whether an incident should be classified into bucket A or bucket B, and that sort of argument is a symptom that the approach is throwing away useful information: the incident really shouldn’t go into a single bucket at all.
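Here’s a minimal sketch of the buckets approach as a data model, written in Python with hypothetical names: each incident record carries exactly one category, drawn from a fixed set defined up front, so anything that doesn’t fit cleanly gets forced in anyway.

```python
from enum import Enum

# Hypothetical bucket scheme: the categories are fixed up front, and every
# incident must be assigned exactly one of them.
class Bucket(Enum):
    BUG_NEW = "bug (new)"
    BUG_LATENT = "bug (latent)"
    DEPLOYMENT_PROBLEM = "deployment problem"
    THIRD_PARTY = "third party"

class Incident:
    def __init__(self, title: str, bucket: Bucket):
        self.title = title
        self.bucket = bucket  # exactly one bucket, no more, no less

# An incident that involved both a new bug and a botched rollback still has
# to be squeezed into a single bucket; the rest of the story is discarded.
incident = Incident("checkout outage", Bucket.DEPLOYMENT_PROBLEM)
```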

The tags approach

You may hang multiple tags on an incident

Another technique is what I’ll call the tags method of categorization. With the tags method, you annotate an incident with zero, one, or multiple tags. The idea behind tagging is that you let the details of the incident help you come up with meaningful categories. As incidents happen, you may come up with entirely new categories, coalesce existing ones, or split them apart. Tags also let you examine incidents across multiple dimensions. Perhaps you’re interested in attributes of the people responding (maybe there’s a hero tag if a frequent hero gets pulled into many incidents), or maybe there’s production pressure related to some new feature being developed (in which case, you may want to label the incident with both production-pressure and feature-name), or maybe it’s related to migration work (migration). There are many possible dimensions. Here are some examples of potential tags:

  • query-scope-accidentally-too-broad
  • people-with-relevant-context-out-of-office
  • unforeseen-performance-impact

Those example tags may seem weirdly specific, but that’s OK! The tags might be very high level (e.g., production-pressure) or very low level (e.g., pipeline-stopped-in-the-middle), or anywhere in between.
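Here’s the same sort of minimal sketch for the tags approach, again in Python with hypothetical incident names and tags: each incident carries a set of zero or more tags, you can index across the tags to slice the incidents along whatever dimension you’re interested in, and hanging a newly invented tag on an old incident is just an update.

```python
from collections import defaultdict

# Hypothetical incidents and tags: each incident carries zero or more labels,
# and the set of labels is free to evolve as new patterns emerge.
incidents = {
    "checkout outage": {"deployment-problem", "bug-new",
                        "people-with-relevant-context-out-of-office"},
    "report pipeline stall": {"pipeline-stopped-in-the-middle",
                              "production-pressure"},
    "slow search results": set(),  # nothing tagged (yet)
}

# Build an index from tag to incidents, so you can slice along any dimension.
by_tag = defaultdict(set)
for name, tags in incidents.items():
    for tag in tags:
        by_tag[tag].add(name)

# "Which incidents involved production pressure?"
print(by_tag["production-pressure"])

# Revisiting an old writeup and adding a newly invented tag is just an update.
incidents["slow search results"].add("unforeseen-performance-impact")
```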

Top-down vs circular

The bucket approach is strictly top-down: you enforce a categorization on incidents from the top. The tags approach is a mix of top-down and bottom-up. When you start tagging, you’ll always start with some prior model of the types of tags that you think are useful. As you go through the details of incidents, new ideas for tags will emerge, and you’ll end up revising your set of tags over time. Someone might revisit the writeup for an incident that happened years ago, and add a new tag to it. This process of tagging incidents and identifying potential new tags will help you surface interesting patterns.

The tag-based approach is messier than the bucket-based one, because your collection of tags may be very heterogeneous, and you’ll still encounter situations where it’s not clear whether a tag applies or not. But it will yield a lot more insight.

The value of social science research

One of the benefits of basic scientific research is the potential for bringing about future breakthroughs. Fundamental research in the physical and biological sciences might one day lead to things like new sources of power, radically better construction materials, remarkable new medical treatments. Social scientific research holds no such promise. Work done in the social sciences is never going to yield, say, a new vaccine, or something akin to a transistor.

The statistician (and social science researcher) Andrew Gelman goes so far as to say, literally, that the social sciences are useless. But I’m being unfair to Gelman by quoting him selectively. He actually advocates for social science, not against it. Gelman argues that good social science is important, because we can’t actually avoid social science, and if we don’t have good social science research then we will be governed by bad social “science”. The full title of his blog post is The social sciences are useless. So why do we study them? Here’s a good reason.

We study the social sciences because they help us understand the social world and because, whatever we do, people will engage in social-science reasoning.

Andrew Gelman

I was reminded of this at the recent Learning from Incidents in Software conference when listening to a talk by Dr. Ivan Pupulidy titled Moving Gracefully from Compliance to Learning, the Beginning of Forest Service’s Learning Journey. Pupulidy is a safety researcher who worked at the U.S. Forest Service and is now a professor at the University of Alabama at Birmingham.

In particular, it was this slide from Pupulidy’s talk that struck me right between the eyes.

Thanks to work done by Ivan Pupulidy, the Forest Service doesn’t look at incidents this way anymore

The text in yellow captures what you might call the “old view” of safety, where accidents are caused by people not following the rules properly. It is a great example of what I would call bad social science, which invariably leads to bad outcomes.

The dangers of bad social science have been known for a while. Here’s the economist John Maynard Keynes:

The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.

John Maynard Keynes, The General Theory of Employment, Interest and Money (February 1936)

Earlier and more succinctly, here’s the journalist H.L. Mencken:

Explanations exist; they have existed for all time; there is always a well-known solution to every human problem — neat, plausible, and wrong.

H.L. Mencken, “The Divine Afflatus” in New York Evening Mail (16 November 1917)

As Mencken notes, we can’t get away from explanations. But if we do good social science research, we can get better explanations. These explanations often won’t be neat, and may not even be plausible on first glance. But they at least give us a fighting chance at not being (completely) wrong about the social world.

The high cost of low ambiguity

Natural language is often frustratingly ambiguous.

Homonyms are ambiguous at the individual word level

There are scenarios where ambiguity is useful. For example, in diplomacy, the ambiguity of a statement can provide wiggle room so that the parties can both claim that they agree on a statement, while simultaneously disagreeing on what the statement actually means. (Mind you, this difference in interpretation might lead to problems later on).

Surprisingly, ambiguity can also be useful in engineering work, specifically around design artifacts. The sociologist Kathryn Henderson has done some really interesting research on how these ambiguous boundary objects are able to bridge participants from different engineering domains to work together, where each participant has a different, domain-specific interpretation of the artifact. For more on this, check out her paper Flexible Sketches and Inflexible Data Bases: Visual Communication, Conscription Devices, and Boundary Objects in Design Engineering and her book On Line and On Paper.

Humorists and poets have also made good use of the ambiguity of natural language. Here’s a classic example from Groucho Marx:

Groucho Marx was a virtuoso in using ambiguous wordplay to humorous effect

Despite these advantages, ambiguity in communication hurts more often than it helps. Anyone who has coordinated over Slack during an incident has felt the pain of the ambiguity of Slack messages. There’s the famous $5 million missing comma. As John Gall, the author of Systemantics, reminds us, ambiguity is often the enemy of effective communication.

This type of ambiguity is rarely helpful

Given the harmful nature of ambiguity in communication, the question arises: why is human communication ambiguous?

One theory, which I find convincing, is that communication is ambiguous because it reduces overall communication costs. In particular, it’s much cheaper for the speaker to make utterances that are ambiguous than to speak in a fully unambiguous way. For more details on this theory, check out the paper The communicative function of ambiguity in language by Piantadosi, Tily, and Gibson.

Note that there’s a tradeoff: easier for the speaker is more difficult for the listener, because the listener has to do the work of resolving the ambiguity based on the context. I think people pine for fully unambiguous forms of communication because they have experienced firsthand the costs of ambiguity as listeners but haven’t experienced what the cost to the speaker would be of fully unambiguous communication.

Even in mathematics, a field which we think of as being the most formal and hence unambiguous, mathematical proofs themselves are, in some sense, not completely formal. This has led to amusing responses from mathematically-inclined computer scientists who think that mathematicians are doing proofs in an insufficiently formal way, like Leslie Lamport’s How to Write a Proof and Guy Steele’s It’s Time for a New Old Language.

My claim is that the reason you don’t see mathematical proofs written in a fully formal, machine-checkable way is that mathematicians have collectively come to the conclusion that it isn’t worth the effort.

The role of cost-to-the-speaker vs cost-to-the-listener in language ambiguity is a great example of the nature of negotiated boundaries. We are constantly negotiating boundaries with other people, and those negotiations are generally of the form “we both agree that this joint activity would benefit both of us, but we need to resolve how we are going to distribute the costs amongst us.” Human evolution has worked out how to distribute the costs of verbal communication between speaker and listener, and the solution that evolution hit upon involves ambiguity in natural language.

Why you can’t buy an observability solution off the shelf

An effective observability solution is one that helps an operator quickly answer the question “Why is my system behaving this way?” This is a difficult problem to solve in the general case because it requires the successful combination of three very different things.

Most talk about observability is around vendor tooling. I’m using observability tooling here to refer to the software that you use for collecting, querying, and visualizing signals. There are a lot of observability vendors these days (for example, check out Gartner’s APM and Observability category). But the tooling is only one part of the story: at best, it can provide you with a platform for building an observability solution. The quality of that solution is going to depend on how good the system model and the cognitive & perceptual model encoded in your solution are.

A good observability solution will have embedded within it a hierarchical model of your system. This means it needs to provide information ranging from what the system is supposed to do (its functional purpose), through the subsystems that make up the system, all the way down to the low-level details of the components (specific lines of code, CPU utilization on the boxes, etc.).

As an example, imagine your org has built a custom CI/CD system that you want to observe. Your observability solution should give you feedback about whether the system is actually performing its intended functions: Is it building artifacts and publishing them to the artifact repository? Is it promoting them across environments? Is it running verifications against them?

Now, imagine that there’s a problem with the verifications: the system is starting them but isn’t detecting that they’re complete. The observability solution should enable you to move from that function (verifications) to the subsystem that implements it. That subsystem is likely to include both application logic and external components (e.g., Temporal, MySQL). Your observability solution should have encoded within it the relationships between the functional aspects and the relevant implementations, to support moving up and down the abstraction hierarchy while doing diagnostic work. Building this model can’t be outsourced to a vendor: ultimately, only the engineers who are familiar with the implementation details of the system can create this sort of hierarchical model.
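As a rough illustration of what encoding those relationships might look like, here’s a minimal Python sketch for the hypothetical CI/CD system above. All of the names, queries, and URLs are made up for illustration; the point is only that each function the system is supposed to perform is linked both to a way of checking whether it’s being performed and to the components that implement it, so diagnostic work can move down the hierarchy from function to implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """A subsystem or external dependency that implements part of a function."""
    name: str
    dashboards: list[str] = field(default_factory=list)  # links to low-level detail

@dataclass
class Function:
    """Something the system is supposed to accomplish (functional purpose)."""
    name: str
    health_query: str  # telemetry query answering "is this function being performed?"
    implemented_by: list[Component] = field(default_factory=list)

verifications = Function(
    name="run verifications against promoted artifacts",
    health_query="verifications_started - verifications_completed over 1h",
    implemented_by=[
        Component("verification service", ["https://dashboards.example/verification"]),
        Component("workflow engine (Temporal)", ["https://dashboards.example/temporal"]),
        Component("metadata store (MySQL)", ["https://dashboards.example/mysql"]),
    ],
)

# Moving down the abstraction hierarchy: from "verifications aren't completing"
# (a functional problem) to the components that implement that function.
for component in verifications.implemented_by:
    print(component.name, component.dashboards)
```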

Good observability tooling and a good system model alone aren’t enough. You need to understand how operators do this sort of diagnostic work in order to build an interface that supports it well. That means you need a cognitive model of the operator to understand how the work gets done. You also need a perceptual model of how humans effectively navigate through the world, so you can leverage that model to build an interface that enables operators to move fluidly across data sources from different systems. You need to build an observability solution with high visual momentum, to avoid operators having to navigate through too many displays.

Every observability solution has an implicit system model and an implicit cognitive & perceptual model. However, while work on the abstraction hierarchy dates back to the 1980s, and visual momentum goes back to the 1990s, I’ve never seen this work, or the field of cognitive systems engineering in general, explicitly referenced. This work remains largely unknown in the tech world.