Who’s afraid of serializability?

Kyle Kingsbury’s Jepsen recently did an analysis of PostgreSQL 12.3 and found that under certain conditions it violated guarantees it makes about transactions, including violations of the serializable transaction isolation level.

I thought it would be fun to use one of his counterexamples to illustrate what serializable means.

Here’s one of the counterexamples that Jepsen’s tool, Elle, found:

In this counterexample, there are two list objects, here named 1799 and 1798, which I’m going to call x and y. The example uses two list operations: append (denoted "a") and read (denoted "r").

Here’s my redrawing of the example. I’ve drawn all operations against x in blue and against y in red. Note that I’m using empty list ([]) instead of nil.

There are two transactions, which I’ve denoted T1 and T2, and each one involves operations on two list objects, denoted x and y. The lists are initially empty.

For transactions run at the serializable isolation level, all of the operations in all of the transactions have to be consistent with some sequential ordering of the transactions. In this particular example, that means that all of the operations have to make sense assuming either:

  • all of the operations in T1 happened before all of the operations in T2
  • all of the operations in T2 happened before all of the operations in T1

Assume order: T1, T2

If we assume T1 happened before T2, then the operations for x are:

      x = [] 
T1:   x.append(2)
T2:   x.read() → []

This history violates the contract of a list: we’ve appended an element to a list but then read an empty list. It’s as if the append didn’t happen!

Assume order: T2, T1

If we assume T2 happened before T1, then the operations for y are:

      y = []
T2:   y.append(4)
      y.append(5)
      y.read() → [4, 5]
T1:   y.read() → []

This history also violates the contract of a list: we read [4, 5] and then []. It’s as if the values disappeared!
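
Neither candidate order works, so the history can’t be serialized. To make that concrete, here’s a toy Python sketch (my own illustration, not how Jepsen’s Elle actually works) that replays each candidate serial order against real Python lists and verifies every read. The position of T2’s read of x within the transaction isn’t specified above, so I’ve placed it last; that placement doesn’t affect the result.

# Toy check: replay a candidate serial order of the transactions against
# real Python lists and verify that every read matches the state so far.
T1 = [("append", "x", 2), ("read", "y", [])]
T2 = [("append", "y", 4), ("append", "y", 5),
      ("read", "y", [4, 5]), ("read", "x", [])]   # placement of the x read is assumed

def serial_replay_ok(transactions):
    """Return True if every read sees the list state produced so far."""
    state = {"x": [], "y": []}  # both lists start empty
    for txn in transactions:
        for op, obj, value in txn:
            if op == "append":
                state[obj].append(value)
            elif state[obj] != value:  # a read that disagrees with the state
                return False
    return True

print(serial_replay_ok([T1, T2]))  # False: T2 reads x == [] after T1's append
print(serial_replay_ok([T2, T1]))  # False: T1 reads y == [] after T2's appends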

Kingsbury indicates that this pair of transactions is illegal by annotating the operations with arrows that show required orderings. The "rw" arrow means that the read operation at the tail of the arrow must be ordered before the write operation at the head. If the arrows form a cycle, then the example violates serializability: there’s no possible ordering that can satisfy all of the arrows.
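
The cycle argument can be sketched mechanically, too. Here’s a minimal, hand-rolled illustration (again, not Elle’s implementation): encode each arrow as an ordering constraint between transactions and check whether any total order satisfies all of them. The two edges below are my reading of the diagram: T1 before T2 because T1 read y before T2’s appends, and T2 before T1 because T2 read x before T1’s append.

from itertools import permutations

# Each "rw" arrow is an ordering constraint: the transaction at the tail
# must come before the transaction at the head. Here the arrows form a cycle.
edges = [("T1", "T2"), ("T2", "T1")]

def some_order_satisfies(nodes, edges):
    """Brute force: does any total order of the transactions respect every edge?"""
    return any(
        all(order.index(a) < order.index(b) for a, b in edges)
        for order in permutations(nodes)
    )

print(some_order_satisfies(["T1", "T2"], edges))  # False: the cycle rules out every order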

Serializability, linearizability, locality

This example is a good illustration of how serializability differs from linearizability. Linearizability is a consistency model that also requires operations to be consistent with a sequential ordering. However, linearizability is only about individual objects, whereas transactions involve collections of objects.

(Linearizability also requires that if operation A happens before operation B in time, then operation A must take effect before operation B, and serializability doesn’t require that, but let’s put that aside for now).

The counterexample above is a linearizable history: we can order the operations such that they are consistent with the contracts of x and y. Here’s an example of a valid ordering, which is called a linearization:

x = []
y = []
x.read() → []
x.append(2)
y.read() → []
y.append(4)
y.append(5)
y.read() → [4, 5]

Note how the operations of the two transactions are interleaved. This is forbidden by transactional isolation, but the definition of linearizability does not take transactions into account.

This example demonstrates how it’s possible to have histories that are linearizable but not serializable.

We say that linearizability is a local property whereas serializability is not: by the definition of linearizability, we can determine whether a history is linearizable by looking at the histories of the individual objects (x and y) separately. We can’t do that for serializability.
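
As a rough sketch of the locality idea (ignoring the real-time requirement, as we’re doing here), we can split the linearization above into per-object histories and check each one against the list contract on its own:

# Split the linearization into per-object histories and check each object
# independently against the list contract. (A real linearizability checker
# must also respect real-time ordering; this toy version ignores that.)
history = [
    ("x", "read", []), ("x", "append", 2),
    ("y", "read", []), ("y", "append", 4),
    ("y", "append", 5), ("y", "read", [4, 5]),
]

def object_history_ok(ops):
    """Replay one object's operations in order, verifying every read."""
    state = []
    for op, value in ops:
        if op == "append":
            state.append(value)
        elif state != value:
            return False
    return True

per_object = {}
for obj, op, value in history:
    per_object.setdefault(obj, []).append((op, value))

print(all(object_history_ok(ops) for ops in per_object.values()))  # True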

SRE, CSE, and the safety boundary

Site reliability engineering (SRE) and cognitive systems engineering (CSE) are two fields seeking the same goal: helping to design, build, and operate complex, software-intensive systems that stay up and running. They both worry about incidents and human workload, and they both reason about systems in terms of models. But their approaches are very different, and this post is about exploring one of those differences.

Caveat: I believe that you can’t really understand a field unless you either have direct working experience, or you have observed people doing work in the field. I’m not a site reliability engineer or a cognitive systems engineer, nor have I directly observed SREs or CSEs at work. This post is an outsider’s perspective on both of these fields. But I think it holds true to the philosophies that these approaches espouse publicly. Whether it corresponds to the actual day-to-day work of SREs and CSEs, I will leave to the judgment of the folks on the ground who actually do SRE or CSE work.

A bit of background

Site reliability engineering was popularized by Google, and continues to be strongly associated with the company. Google has published three O’Reilly books, the first one in 2016. I won’t say any more about the background of SRE here; there are many other sources (including the Google books) for those who want to know more.

Cognitive systems engineering is much older, tracing its roots back to the early eighties. If SRE is, as Ben Treynor described it, what happens when you ask a software engineer to design an operations function, then CSE is what happens when you ask a psychologist how to prevent nuclear meltdowns.

CSE emerged in the wake of the Three Mile Island accident of 1979, as researchers tried to make sense of how the accident happened. Before Three Mile Island, research on "human factors" aspects of work had focused on human physiology (for example, designing airplane cockpits), but after TMI the focus expanded to include cognitive aspects of work. The two researchers most closely associated with CSE, Erik Hollnagel and David Woods, were both trained as psychology researchers: their paper Cognitive Systems Engineering: New wine in new bottles marks the birth of the field (Thai Wood covered this paper in his excellent Resilience Roundup newsletter).

CSE has been applied in many different domains, but I think it would be unknown in the "tech" community were it not for the tireless efforts of John Allspaw to popularize the results of CSE research that has been done in the past four decades.

A useful metaphor: Rasmussen’s dynamic safety model

Jens Rasmussen was a Danish safety researcher whose work remains deeply influential in CSE. In 1997 he published a paper titled Risk management in a dynamic society: a modelling problem. This paper introduced the metaphor of the safety boundary, as illustrated in the following visual model, which I’ve reproduced from this paper:

Rasmussen viewed a safety-critical system as a point that moves inside of a space enclosed by three boundaries.

At the top right is what Rasmussen called the "boundary to economic failure". If the system crosses this boundary, then the system will fail due to poor economic performance. We know that if we try to work too quickly, we sacrifice safety. But we can’t work arbitrarily slowly to increase safety, because then we won’t get anything done. Management naturally puts pressure on the system to move away from this boundary.

At the bottom right is what Rasmussen called the "boundary of unacceptable work load". Management can apply pressure on the workforce to work both safely and quickly, but increasing safety and increasing productivity both require effort on the part of practitioners, and there are limits to the amount of work that people can do. Practitioners naturally put pressure on the system to move away from this boundary.

At the left, the diagram has two boundaries. The outer boundary is what Rasmussen called the "boundary of functionally acceptable performance", which I’ll call the safety boundary. If the system crosses this boundary, an incident happens. We can never know exactly where this boundary is. The inner boundary is labelled "resulting perceived boundary of acceptable performance". That’s where we think the boundary is, and it’s the boundary we try to stay away from.

SRE vs CSE in context of the dynamic safety model

I find the dynamic safety model useful because I think it illustrates the difference in focus between SRE and CSE.

SRE focuses on two questions:

  1. How do we keep the system away from the safety boundary?
  2. What do we do once we’ve crossed the boundary?

To deal with the first question, SRE thinks about issues such as how to design systems and how to introduce changes safely. The second question is the realm of incident response.

CSE, on the other hand, focuses on the following questions:

  1. How will the system behave near the safety boundary?
  2. How should we take this boundary behavior into account in our design?

CSE focuses on the space near the boundary, both to learn how work is actually done, and to inform how we should design tools to better support this work. In the words of Woods and Hollnagel:

> Discovery is aided by looking at situations that are near the margins of practice and when resource saturation is threatened (attention, workload, etc.). These are the circumstances when one can see how the system stretches to accommodate new demands, and the sources of resilience that usually bridge gaps. – Joint Cognitive Systems: Patterns in Cognitive Systems Engineering, p37

Fascinatingly, CSE has also identified common patterns of system behavior at the boundary that hold across multiple domains. But that will have to wait for a different post.

Reading more about CSE

I’m still a novice in the field of cognitive systems engineering. I’m actually using these posts to help learn through explaining the concepts to others.

The source I’ve found most useful so far is the book Joint Cognitive Systems: Patterns in Cognitive Systems Engineering, which is referenced in this post. If you prefer videos, Cook’s Lectures on the study of cognitive work is excellent.

I’ve also started a CSE reading list.

This mess we’re in

Most real software systems feel “messy” to the engineers who work on them. I’ve found that software engineers tend to fall into one of two camps on the nature of this messiness.

Camp 1: Problems with the design

One camp believes that the messiness is primarily related to sub-optimal design decisions. These design decisions might simply be poor decisions, or they might be because we aren’t able to spend enough time getting the design right.

My favorite example of this school of thought can be found in the text of Leslie Lamport’s talk entitled The Future of Computing: Logic or Biology:

The best way to cope with complexity is to avoid it. Programs that do a lot for us are inherently complex. But instead of surrendering to complexity, we must control it. We must keep our systems as simple as possible. Having two different kinds of drag-and-drop behavior is more complicated than having just one. And that is needless complexity.

We must keep our systems simple enough so we can understand them.
And the fundamental tool for doing this is the logic of mathematics.

Leslie Lamport, The Future of Computing: Logic or Biology

Camp 2: The world is messy

Engineers in the second camp believe that reality is just inherently messy, and that mess ends up being reflected in software systems that have to model the world. Rich Hickey describes this in what he calls “situated programs” (emphasis mine):

And they [situated programs] deal with real-world irregularity. This is the other thing I think that’s super-critical, you know, in this situated programming world. It’s never as elegant as you think, the real-world.

And I talked about that scheduling problem of, you know, those linear times, somebody who listens all day, and the somebody who just listens while they’re driving in the morning and the afternoon. Eight hours apart there’s one set of people and, then an hour later there’s another set of people, another set. You know, you have to think about all that time. You come up with this elegant notion of multi-dimensional time and you’d be like, “oh, I’m totally good…except on Tuesday”. Why? Well, in the U.S. on certain kinds of genres of radio, there’s a thing called “two for Tuesday”. Right? So you built this scheduling system, and the main purpose of the system is to never play the same song twice in a row, or even pretty near when you played it last. And not even play the same artist near when you played the artist, or else somebody’s going to say, “all you do is play Elton John, I hate this station”.

But on Tuesday, it’s a gimmick. “Two for Tuesday” means, every spot where we play a song, we’re going to play two songs by that artist. Violating every precious elegant rule you put in the system. And I’ve never had a real-world system that didn’t have these kinds of irregularities. And where they weren’t important.

Rich Hickey, Effective Programs: 10 Years of Clojure

It will come as no surprise to readers of this blog that I fall into the second camp. I do think that sub-optimal design decisions also contribute to messiness in software systems, but I think those are inevitable because unexpected changes and time pressures are inescapable. This is the mess we’re in.

Why you can’t just ask “why”

Today, most AI work is based on neural networks, but back in the 1980s, AI researchers were using a different approach: they built rule-based systems using mathematical logic. This was the heyday of Lisp and Prolog, which were well suited to implementing these systems.

One approach AI researchers used was to sit down with an expert and elicit the rules they used to perform a task. For example, an AI researcher might conduct a series of interviews with a doctor in order to determine how the doctor diagnosed illnesses based on symptoms. The researcher would then encode those rules to build an expert system: a software package that would, ideally, perform tasks as well as an expert.

Alas, the results were disappointing: these expert systems never measured up to the performance of the human experts. Two brothers, Stuart Dreyfus (a professor of industrial engineering and operations research) and Hubert Dreyfus (a professor of philosophy), published a book in 1986 titled Mind Over Machine that described why this approach of building expert systems by eliciting and encoding rules from experts could never really work. It turns out that experts don’t actually solve problems by following a set of rules. Instead, they rely more on intuition and pattern-matching based on a repertoire of cases they’ve built up from their experience1.

Yet, even though those experts didn’t solve problems by following rules, they were still able to articulate a set of rules that they claimed to follow when asked. And they weren’t trying to deceive the AI researchers. Instead, something else was going on. The experts were inventing explanations without even being aware that they were doing so. Philosophers of mind use the term confabulation (technically broad confabulation) to refer to this phenomenon: how people will unknowingly fabricate explanations for their actions.

And therein lies the problem of asking “why”.

In the wake of an incident, we often want to understand why people did certain things: both the people whose actions contributed to the incident (why did they make a global configuration change?) and the people whose actions mitigated the incident (why did they suspect a retry storm rather than a DDoS attack?).

The problem is, you can’t just ask people why, because people confabulate. You can, of course, simply ask people why they took the actions they did. Heck, you might even get a confident, articulated explanation. But you shouldn’t believe that the explanation they give corresponds to reality.

Yet, getting at the why is important. This is not a case of “‘Why?’ is the wrong question” in the way that Five Whys-style questions are. There is real value in understanding how people came to the decisions they did, by learning about the signals they received at the time and how their previous experiences shaped their perspectives. That’s where having a skilled interviewer comes in.

A skilled interviewer will increase the chances of getting an accurate response by asking questions that bring the interviewee back into the frame of mind they were in during the incident. Instead of asking an engineer to explain their actions (why did you do X?), they’ll ask questions that try to jog the engineer’s memory of what they were experiencing during the incident: What were you doing when the page went off? Where did you look first? What did you see? And then what did you do? Because we know that experts do pattern-matching, they’ll also ask questions like: have you ever seen this symptom before? These questions can elicit responses about previous experiences the engineer has had in similar situations, which can provide context on how they made their decisions in this case.

Eliciting this sort of information from an interview is hard, and it takes real skill. We should take this sort of work seriously.

1 The field of research known as naturalistic decision making studies how experts make decisions.

Asking the right “why” questions

In the [Cognitive Systems Engineering] terminology, it is more important to understand what a joint cognitive system (JCS) does and why it does it, than to explain how it does it. [emphasis in the original]

Erik Hollnagel & David D. Woods, Joint Cognitive Systems: Foundations of Cognitive Systems Engineering, p22

In my previous post, I linked to a famous essay by John Allspaw: The Infinite Hows (or, the Dangers Of The Five Whys). The main thrust of Allspaw’s essay can be summed up in this five-word excerpt:

“Why?” is the wrong question.

As illustrated by the quote from Hollnagel & Woods at the top of this post, it turns out that cognitive systems engineering (CSE) is very big on answering “why” questions. Allspaw’s perspective on incident analysis is deeply influenced by research from cognitive systems engineering. So what’s going on here?

It turns out that the CSE folks are asking different kinds of “why” questions than the root cause analysis (RCA) folks. The RCA folks ask why did this incident happen? The CSE folks ask why did the system adapt the sorts of behaviors that contributed to the incident?

Those questions may sound similar, but they start from opposite assumptions. The RCA folks start with the assumption that there’s some sort of flaw in the system, a vulnerability that was previously unknown, and then base their analysis on identifying what that vulnerability was.

The CSE folks, on the other hand, start with the assumption that behaviors exhibited by the system developed through adaptation to existing constraints. The “why” question here is “why is this behavior adaptive? What purpose does it serve in the system?” Then they base the analysis on identifying attributes of the system such as constraints and goal conflicts that would explain why this behavior is adaptive.

This is one of the reasons why the CSE folks are so interested in incidents to begin with: because it can expose these kinds of constraints and conflicts that are part of the context of a system. It’s similar to how psychologists use optical illusions to study the heuristics that the human visual system employs: you look at the circumstances under which a system fails to get some insight into how it normally functions as well as it does.

“Why” questions can be useful! But you’ve got to ask the right ones.

Making peace with “root cause” during anomaly response

We haven’t figured out the root cause yet.

Uttered by many an engineer while responding to an anomaly

One of the contributions of cognitive systems engineering is treating anomaly response as something worthy of study. Here’s how Woods and Hollnagel describe it:

In anomaly response, there is some underlying process, an engineered or physiological process which will be referred to as the monitored process, whose state changes over time. Faults disturb the functions that go on in the monitored process and generate the demand for practitioners to act to compensate for these disturbances in order to maintain process integrity―what is sometimes referred to as “safing” activities. In parallel, practitioners carry out diagnostic activities to determine the source of the disturbances in order to correct the underlying problem.

David D. Woods, Erik Hollnagel, Joint Cognitive Systems: Patterns in Cognitive Systems Engineering, Chapter 8, p71

This type of work will be instantly recognizable to anyone who has been involved in software operations work, even though the domains that cognitive systems engineering researchers initially focused on are completely different (e.g., nuclear power plants, anesthesiology, commercial aviation, space flight).

Anomaly response involves multiple people working together, coordinating on resolving a common problem. Here’s Woods and Hollnagel again, discussing an exchange between two anesthesiologists during a neurosurgery case:

The situation calls for an update to the shared model of the case and its likely trajectory in the future … The exchange is very compact and highly coded, yet it serves to update the common ground previously established at the start of the case… Interestingly, the resident and attending after the update appear to be without candidate explanations as several possibilities have been dismissed given other findings (the resident is quite explicit in this case. After describing the unexpected event, he also adds―”but no explanation”).

p93, ibid

Just like those anesthesiologists, practitioners in all domains often communicate using “compact and highly coded” jargon. I’ve seen claims that jargon is intended to obfuscate, but it’s just the opposite during anomaly response: a team that shares jargon can communicate more efficiently, because of the pre-existing shared context about the precise meaning of those terms (assuming, of course, the team members understand those terms the same way).

That brings us to the “root cause”. Let’s start with a few words about root cause analysis.

Root cause analysis (RCA) is an approach for identifying why an incident happened. It’s often associated with the Five Whys approach that originated at Toyota. Members of the resilience engineering community have been very critical of RCA. For one critical take, check out John Allspaw’s piece The Infinite Hows (or, the Dangers of The Five Whys). Allspaw makes a compelling case, and I agree with him.

What I’m arguing in this post is that the term “root cause” has a completely different connotation when used during anomaly response than when used during post-incident analysis. When, during anomaly response, an engineer says “I haven’t found the root cause”, they do not mean, “I have not yet performed a Five-Whys root cause analysis”. Instead, they mean “the signals I am observing are inconsistent with my mental model of how the system behaves”. Or, less formally, “I know something’s wrong here, but I don’t know what it is!”

When an engineer says “we don’t know the root cause yet” during anomaly response, everybody involved understands what they mean. If you were to reply “actually, there is no such thing as root cause”, the best response you could hope for is a blank stare. The engineers aren’t talking about Five-Whys in this context. Instead, they’re doing dynamic fault management. They’re trying to make sense of what’s currently happening.

Because I’m one of those folks who is critical of RCA, I used to try to encourage people to say, “we don’t understand the failure mode yet” instead of “we don’t know the root cause yet”. But I’m going to stop encouraging them, because I have come around to believing that “we don’t know the root cause” is effective jargon when coordinating during anomaly response.

The post-incident investigation context is another story entirely, and I’m still going to fight the battles against root cause during the post-incident work. But, just as I wouldn’t try to do a one-on-one interview with an engineer while they were engaged in an incident, I’m no longer going to try to get engineers to stop saying root cause while they are engaged in an incident. If the experts at anomaly response find it a useful phrase while they are doing their work, we should recognize this as a part of their expertise.

Yes, it will probably make it a little harder to get an organization to shake off the notion of “root cause” if people still freely use the term during anomaly response. And I won’t use the term myself. But it’s a battle I no longer think is worth fighting.

The hard parts about making it look easy

Bill Clinton was known for projecting warmth in a way that Hillary Clinton didn’t. Yet, when journalists would speak to people who knew them both personally, the story they’d get back was the opposite: one-on-one, it was Hillary Clinton who was the warm one.

We use terms like warmth and authenticity as if they were character attributes of people. But imagine if you were asked to give a speech in front of a large audience. Do you think that if you came off as wooden or stilted, that would be an indicator of how authentic you are as a person?

The ability to project authenticity or warmth is a skill. Experts exhibiting skilled behavior often appear to do it effortlessly. When we watch a virtuoso perform in a domain we know something about, we exclaim “they make it look so easy!”, because we know how much harder it is than it looks.

The resilience engineering researcher David Woods calls this phenomenon the law of fluency, which he defines as:

“Well”-adapted work occurs with a facility that belies the difficulty of the demands resolved and the dilemmas balanced.

Joint Cognitive Systems: Patterns in Cognitive Systems Engineering (p20)

This law is the source of two problems.

First of all, novices tend to mistake skilled performance that seems effortless for innate talent, rather than recognizing it as a skill developed with practice. They don’t see the work, so they don’t know how to get there.

Second of all, skilled practitioners are at increased risk of undetected burnout because they make it look easy even when they are working too hard. This is something that’s easy to miss unless we actively probe for it.

On the writing styles of some resilience engineering researchers

This post is a brief meditation on the writing styles of four luminaries in the field of resilience engineering: Drs. Erik Hollnagel, David Woods, Sidney Dekker, and Richard Cook.

This post was inspired by a conversation I had with my colleague J. Paul Reed. You can find links to papers by these authors at resiliencepapers.club.

Erik Hollnagel – the framework builder

Hollnagel often writes about frameworks or models. A framework is the sort of thing that you would illustrate with a box and arrow diagram, or a table with two or three columns. Here are some examples of Hollnagellian frameworks:

  • Safety-I vs. Safety-II
  • Functional Resonance Analysis Method (FRAM)
  • Resilience Analysis Grid (RAG)
  • Contextual Control Model (COCOM)
  • Cognitive Reliability and Error Analysis Method (CREAM)

Of the four researchers, Hollnagel writes the most like a traditional academic. Even his book Joint Cognitive Systems: Foundations of Cognitive Systems Engineering feels like something out of an academic journal. He is also the one whose work I struggle the most to gain insight from. Ironically, one of my favorite concepts that I learned from him, the ETTO principle, is presented more as a pattern in the style of Woods, as described below.

David Woods – the pattern oracle

I believe that a primary goal of academic research is to identify patterns in the world that had not been recognized before. By this measure, David Woods is the most productive researcher I have encountered, in any field! Again and again, Woods identifies patterns inherent in the nature of how humans work and interact with technology, by looking across an extremely broad range of human activity, from power plant controllers to astronauts to medical doctors. Gaining insight from his work is like discovering the white arrow in the FedEx logo: you never noticed it before it was pointed out, and now that you know it’s there, it’s impossible not to see.

These patterns are necessarily high-level, and Woods invents new vocabulary out of whole cloth to capture these new concepts. His writing contains terms like anomaly response, joint cognitive systems, graceful extensibility, units of adaptive behavior, net adaptive value, crunches, competence envelopes, dynamic fault management, adaptive stalls, and veils of fluency.

Woods often introduces or references many new concepts in a single piece of writing, and discusses how they interact with each other. This style tends to be very abstract. I’ve found that if I can map the concepts back into my own experiences in the software world, then I’m able to internalize them and they become powerful tools in my conceptual toolbox. But if I can’t make the connection, then I find it hard to get a handhold on his writing. It wasn’t until I watched his video lectures, where he discusses many concrete examples, that I was really able to understand many of his concepts.

Sidney Dekker – the public intellectual

Of the four researchers, Dekker produces the most work written for a lay audience. My entrance into the world of resilience engineering was through his book Drift into Failure. Dekker’s writing for lay audiences tends toward the philosophical, but it doesn’t read like an academic philosophy paper. Rather, it’s more of the “big idea” kind of writing, similar in spirit (although not in tone) to the kinds of books that Nassim Taleb writes. In that sense, Dekker’s writing can range even more broadly than Woods’s, as Dekker muses on the perception of reality. He is the only one of the four I can imagine writing books with titles such as Just Culture, The Safety Anarchist, or The End of Heaven.

Dekker often writes about how different worldviews shape our understanding of safety. For example, one of his more well-known papers contrasts “new” and “old” views on the nature of human error. In Drift Into Failure, he writes about the Newtonian-Cartesian worldview and contrasts it with a systems perspective. But he doesn’t present these worldviews as frameworks in the way that Hollnagel would. They are less structured, more qualitatively elaborated.

I’m a fan of the “big idea” style of non-fiction writing, and I was enormously influenced by Drift into Failure, which I found extremely readable. However, I’m particularly receptive to this style of writing, and most of my colleagues tend to prefer his Field Guide to Understanding ‘Human Error’, which is more practical.

Richard Cook – the raconteur

Cook’s most famous paper is likely How Complex Systems Fail, but that style of writing isn’t what comes to mind when I think of Cook (that paper is more of a Woods-ian identification of patterns).

Cook is the anti-Hollnagel: where Hollnagel constructs general frameworks, Cook elaborates the details of specific cases. He’s a storyteller, who is able to use stories to teach the reader about larger truths.

Many of Cook’s papers examine work in the domain of medicine. Because Cook has a medical background (he was a practicing anesthesiologist before he was a researcher), he has deep knowledge of that domain and is able to use it to great effect in his analysis of the interactions between humans, technology, and work. A great example of this is his paper on the allocation of ICU beds, Being Bumpable. His Re-Deploy talk entitled The Resilience of Bone and Resilience Engineering is another example of leveraging the details of a specific case to illustrate broader concepts.

Of the four authors, I think Cook is the most effective at using specific cases to explain complex concepts. He functions almost as an interpreter, grounding Woods-ian concepts in concrete practice. It’s a style of writing that I aspire to. After all, there’s no more effective way to communicate than to tell a good story.

Chasing down the blipperdoodles

To a first approximation, there are two classes of automated alerts:

  1. A human needs to look at this as soon as possible (page the on-call!)
  2. A human should eventually investigate, but it isn’t urgent (email-only alert)

This post is about the second category: events like an error spike that happened at 2am and can wait until business hours to investigate.

When I was on the CORE1 team, one of the responsibilities of team members was to investigate these non-urgent alert emails. The team colorfully referred to them as blipperdoodles2, presumably because they look like blips on the dashboard.

I didn’t enjoy this part of the work. Blipperdoodles can be a pain to track down, are often not actionable (e.g., networking transient), and, in the tougher cases, are downright impossible to make sense of. This means that the work feels largely unsatisfying. As a software engineer, I’ve felt a powerful instinct to dismiss transient errors, often with a joke about cosmic rays.

But I’ve really come around on the value of chasing down blipperdoodles. Looking back, they gave me an opportunity to practice doing diagnostic work, in a low-stakes environment. There’s little pressure on you when you’re doing this work, and if something more urgent comes up, the norms of the team allow you to abandon your investigation. After all, it’s just a blipperdoodle.

Blipperdoodles also tend to be a good mix of simple and difficult. Some of them are common enough that experienced engineers can diagnose them by the shape of the graphs. Others are so hard that an engineer has to admit defeat once they reach their self-imposed timebox for the investigation. Most are in between.

Chasing blipperdoodles is a form of operational training. And while it may be frustrating to spend your time tracking down anomalies, you’ll appreciate the skills you’ve developed when the heat is on and everything is on fire.

1 CORE stands for Critical Operations & Reliability Engineering. They’re the centralized incident management team at Netflix.

2 I believe Brian Trump coined the term.

The inevitable double bind

Here are three recent COVID-19 news stories:

The first two stories are about large organizations (the FDA, large banks) moving too slowly in order to comply with regulations. The third story is about the risks of the FDA moving too quickly.

Whenever an agent is under pressure to simultaneously act quickly and carefully, they are faced with a double bind. If they proceed quickly and something goes wrong, they will be faulted for not being careful enough. If they proceed carefully and something goes wrong, they will be faulted for not moving quickly enough.

In hindsight, it’s easy to identify who wasn’t quick enough and who wasn’t careful enough. But if you want to understand how agents make these decisions, you need to understand the multiple pressures that agents experience, because they are trading these pressures off against each other. You also need to understand what information they had available at the time, as well as their previous experiences. I thought this observation about the behavior of the banks was particularly insightful:

But it does tell a more general story about the big banks, that they have invested so much in at least the formalities of compliance that they have become worse than small banks at making loans to new customers.

Matt Levine

Reactions to previous incidents have unintended consequences for the future. The conclusion to draw here isn’t that “the banks are now overregulated”. Rather, it’s that double binds are unavoidable: we can’t eliminate them by adding or removing regulations. There’s no perfect knob setting where they don’t happen anymore.

Once we accept that double binds are inevitable, we can shift our focus away from just adjusting the knob and towards work that will prepare agents to make more effective decisions when they inevitably encounter the next double bind.