All we can do is find problems

I’m in the second week of the three-week virtual MIT STAMP workshop. Today, Prof. Nancy Leveson gave a talk titled Safety Assurance (Safety Case): Is it Possible? Feasible? Safety assurance refers to the act of assuring that a system is safe after the design has been completed.

Leveson is skeptical of evaluating the safety of a completed system. Instead, she argues for generating safety requirements at the design stage, so that safety can be designed in rather than evaluated after the fact. (You can read her white paper for more details on her perspective.) Here are the last three bullets from her final slide:

  • If you are using hazard analysis to prove your system is safe, then you are using it wrong and your goal is futile
  • Hazard analysis (using any method) can only help you find problems, it cannot prove that no problems exist
  • The general problem is in setting the right psychological goal. It should not be “confirmation,” but exploration

This perspective resonated with me, because it matches how I think about availability metrics. Availability metrics can’t tell you whether your system is reliable enough; they can only tell you that you have a problem. If your availability metrics look good, that doesn’t tell you anything about how to spend your engineering resources on reliability.

As Leveson remarked about safety, I think the best we can do in our non-safety-critical domains is study our systems to identify where the potential problems are, so that we can address them. Since we can’t actually quantify risk, the best we can do is to get better at identifying systemic issues. We need to always be looking for problems in the system, regardless of how many nines of availability we achieved last quarter. After all, that next major outage is always just around the corner.

The power of functionalism

Most software engineers are likely familiar with functional programming. The idea of functionalism, focusing on the “what” rather than the “how”, doesn’t just apply to programming. I was reminded of how powerful a functionalist approach can be this week while attending the STAMP workshop. STAMP is an approach to systems safety developed by Nancy Leveson.

The primary metaphor in STAMP is the control system: STAMP employs a control system model to help reason about the safety of a system. This is very much a functionalist approach, as it models agents in the system based only on what control actions they can take and what feedback they can receive. You can use this same model to reason about a physical component, a software system, a human, a team, an organization, even a regulatory body. As long as you can identify the inputs your component receives, and the control actions that it can perform, you can model it as a control system.
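As a toy illustration (my own sketch in Python, not anything from the STAMP materials), here’s what that functional framing looks like: a component is described purely by the feedback it receives and the control actions it can issue, with nothing said about what kind of thing the component actually is.

    from dataclasses import dataclass
    from typing import Callable, List

    # A controller is modeled only by its function: feedback in,
    # control actions out. Nothing here says whether the controller
    # is software, a person, a team, or a regulator.
    @dataclass
    class Controller:
        name: str
        decide: Callable[[str], List[str]]  # feedback -> control actions

    # Hypothetical example: an on-call engineer modeled as a controller.
    on_call = Controller(
        name="on-call engineer",
        decide=lambda feedback: ["roll back deploy"]
                                if "error rate high" in feedback else [],
    )

    print(on_call.decide("error rate high"))  # ['roll back deploy']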

Cognitive systems engineering (CSE) uses a different metaphor: that of a cognitive system. But CSE also takes a functional approach, observing how people actually work and trying to identify what functions their actions serve in the system. It’s a bottom-up functionalism, whereas STAMP’s is top-down, so it yields different insights into the system.

What’s appealing to me about these functionalist approaches is that they change the way I look at a problem. They get me to think about the problem or system at hand in a different way than I would have if I hadn’t deliberately taken a functional approach. And “it helped me look at the world in a different way” is the highest compliment I can pay to a technology.

“How could they be so stupid?”

From the New York Times story on the recent Twitter hack:

Mr. O’Connor said other hackers had informed him that Kirk got access to the Twitter credentials when he found a way into Twitter’s internal Slack messaging channel and saw them posted there, along with a service that gave him access to the company’s servers. 

It’s too soon after this incident to put too much faith in the reporting, but let’s assume it’s accurate. A collective cry of “Posting credentials to a Slack channel? How could engineers at Twitter be so stupid?” rose up from the internet. It’s a natural reaction, but it’s not a constructive one.

I don’t personally know any engineers at Twitter, but I have confidence that they have excellent engineers over there, including excellent security folks. So, how do we explain this seemingly obvious security lapse?

The problem is that we on the outside can’t, because we don’t have enough information. This type of lapse is a classic example of a workaround. People in a system use workarounds (they do things the “wrong” way) when there are obstacles to doing things the “right” way.

There are countless possibilities for why people employ workarounds. Maybe some system that’s required for doing it the “right” way is down for some reason, or maybe it simply takes too long or is too hard to do things the “right” way. Combine that with production pressures, and a workaround is born.

I’m willing to bet that there are people in your organization who use workarounds. You probably use some yourself. Identifying those workarounds teaches us something about how the system works, and about how people have to do things the “wrong” way to actually get their work done.

Some workarounds, like the Twitter example, are dangerous. But simply observing “they shouldn’t have done that” does nothing to address the problems in the system that motivated the workaround in the first place.

When you see a workaround, don’t ask “how could they be so stupid to do things the obviously wrong way?” Instead, ask “what are the properties of our system that contributed to the development of this workaround?” Because, unless you gain a deeper understanding of your system, the problems that motivated the workaround aren’t going to go away.

A reasonable system

Reasonable is an adjective we typically apply to humans, or something we implore of them (“Be reasonable!”). And, while I do want reasonable colleagues, what I really want is a reasonable system.

By reasonable system, I mean a system whose behavior I can reason about, both backwards and forwards in time. Given my understanding of how the system works, and the signals that are emitted by the system, I want to be able to understand its past behavior, and predict what its behavior is going to be in the future.

Who’s afraid of serializability?

Kyle Kingsbury’s Jepsen recently published an analysis of PostgreSQL 12.3 and found that, under certain conditions, it violated the guarantees it makes about transactions, including those of the serializable isolation level.

I thought it would be fun to use one of his counterexamples to illustrate what serializable means.

Here’s one of the counterexamples that Jepsen’s tool, Elle, found. It involves two list objects, named 1799 and 1798 in Elle’s output, which I’m going to call x and y. The example uses two list operations, append (denoted “a”) and read (denoted “r”).

Here’s my redrawing of the example, with the operations written out explicitly. Note that I’m using the empty list ([]) instead of nil.

There are two transactions, which I’ve denoted T1 and T2, and each one involves operations on two list objects, denoted x and y. The lists are initially empty.
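Here are the two transactions’ operations written out (my reconstruction from the orderings analyzed below; the relative order of operations within each transaction doesn’t affect the argument):

T1:   x.append(2)
      y.read() → []

T2:   x.read() → []
      y.append(4)
      y.append(5)
      y.read() → [4, 5]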

For transactions that use the serializable isolation level, all of the operations in all of the transactions have to be consistent with some serial ordering of the transactions. In this particular example, that means that all of the operations have to make sense assuming either:

  • all of the operations in T1 happened before all of the operations in T2
  • all of the operations in T2 happened before all of the operations in T1
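To make that check concrete, here’s a minimal Python sketch (my own illustration, not part of Jepsen or Elle) that replays each candidate serial order against real lists and reports whether every read sees what it actually saw in the history:

    # Each transaction is a list of (object, operation, value) tuples,
    # taken from the counterexample above. For reads, the value is the
    # result that the transaction actually observed.
    T1 = [("x", "append", 2), ("y", "read", [])]
    T2 = [("x", "read", []), ("y", "append", 4), ("y", "append", 5),
          ("y", "read", [4, 5])]

    def consistent(serial_order):
        """Replay transactions serially; True if every read matches."""
        state = {"x": [], "y": []}  # both lists start out empty
        for txn in serial_order:
            for obj, op, value in txn:
                if op == "append":
                    state[obj].append(value)
                elif state[obj] != value:  # read disagrees with history
                    return False
        return True

    print(consistent([T1, T2]))  # False: T2 reads x as [] after T1's append
    print(consistent([T2, T1]))  # False: T1 reads y as [] after T2's appends

Neither serial order is consistent with what the transactions observed, which is exactly what the next two subsections walk through by hand.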

Assume order: T1, T2

If we assume T1 happened before T2, then the operations for x are:

      x = [] 
T1:   x.append(2)
T2:   x.read() → []

This history violates the contract of a list: we’ve appended an element to a list but then read an empty list. It’s as if the append didn’t happen!

Assume order: T2, T1

If we assume T2 happened before T1, then the operations for y are:

      y = []
T2:   y.append(4)
      y.append(5)
      y.read() → [4, 5]
T1:   y.read() → []

This history violates the contract of a list as well: we read [4, 5] and then []: it’s as if the values disappeared!

Kingsbury indicates that this pair of transactions is illegal by annotating the operations with arrows that show required orderings. An “rw” arrow means that the read operation at the arrow’s tail must be ordered before the write operation at the arrow’s head. In this example, T1’s read of y (which saw []) must precede T2’s appends to y, and T2’s read of x (which saw []) must precede T1’s append to x. Those arrows form a cycle, and if the arrows form a cycle, the example violates serializability: there’s no possible ordering that can satisfy all of the arrows.

Serializability, linearizability, locality

This example is a good illustration of how serializability differs from linearizability. Linearizability is a consistency model that also requires operations to be consistent with some sequential ordering. However, linearizability applies only to individual objects, whereas transactions involve collections of objects.

(Linearizability also requires that if operation A happens before operation B in time, then operation A must take effect before operation B, and serializability doesn’t require that, but let’s put that aside for now).

The counterexample above is a linearizable history: we can order the operations such that they are consistent with the contracts of x and y. Here’s an example of a valid ordering, which is called a linearization:

x = []
y = []
x.read() → []
x.append(2)
y.read() → []
y.append(4)
y.append(5)
y.read() → [4, 5]

Note how the operations from the two transactions are interleaved. This interleaving is forbidden by transactional isolation, but the definition of linearizability does not take transactions into account.

This example demonstrates how it’s possible to have histories that are linearizable but not serializable.

We say that linearizability is a local property, whereas serializability is not: by the definition of linearizability, we can determine whether a history is linearizable by looking at the histories of the individual objects (x, y) in isolation. We can’t do that for serializability.
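Concretely, here are valid orderings for each object’s history taken on its own (the same operations from the counterexample):

x.read() → []
x.append(2)

y.read() → []
y.append(4)
y.append(5)
y.read() → [4, 5]

Each per-object history respects the list contract, so each object is individually linearizable, even though no serial order of the transactions as wholes exists.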

This mess we’re in

Most real software systems feel “messy” to the engineers who work on them. I’ve found that software engineers tend to fall into one of two camps on the nature of this messiness.

Camp 1: Problems with the design

One camp believes that the messiness is primarily the result of sub-optimal design decisions. These design decisions might simply be poor ones, or they might be the consequence of not being able to spend enough time getting the design right.

My favorite example of this school of thought can be found in the text of Leslie Lamport’s talk entitled The Future of Computing: Logic or Biology:

The best way to cope with complexity is to avoid it. Programs that do a lot for us are inherently complex. But instead of surrendering to complexity, we must control it. We must keep our systems as simple as possible. Having two different kinds of drag-and-drop behavior is more complicated than having just one. And that is needless complexity.

We must keep our systems simple enough so we can understand them.
And the fundamental tool for doing this is the logic of mathematics.

Leslie Lamport, The Future of Computing: Logic or Biology

Camp 2: The world is messy

Engineers in the second camp believe that reality is just inherently messy, and that mess ends up being reflected in software systems that have to model the world. Rich Hickey describes this in what he calls “situated programs” (emphasis mine):

And they [situated programs] deal with real-world irregularity. This is the other thing I think that’s super-critical, you know, in this situated programming world. It’s never as elegant as you think, the real-world.

And I talked about that scheduling problem of, you know, those linear times, somebody who listens all day, and the somebody who just listens while they’re driving in the morning and the afternoon. Eight hours apart there’s one set of people and, then an hour later there’s another set of people, another set. You know, you have to think about all that time. You come up with this elegant notion of multi-dimensional time and you’d be like, “oh, I’m totally good…except on Tuesday”. Why? Well, in the U.S. on certain kinds of genres of radio, there’s a thing called “two for Tuesday”. Right? So you built this scheduling system, and the main purpose of the system is to never play the same song twice in a row, or even pretty near when you played it last. And not even play the same artist near when you played the artist, or else somebody’s going to say, “all you do is play Elton John, I hate this station”.

But on Tuesday, it’s a gimmick. “Two for Tuesday” means, every spot where we play a song, we’re going to play two songs by that artist. Violating every precious elegant rule you put in the system. And I’ve never had a real-world system that didn’t have these kinds of irregularities. And where they weren’t important.

Rich Hickey, Effective Programs: 10 Years of Clojure

It will come as no surprise to readers of this blog that I fall into the second camp. I do think that sub-optimal design decisions also contribute to messiness in software systems, but I think those are inevitable because unexpected changes and time pressures are inescapable. This is the mess we’re in.

Asking the right “why” questions

In the [Cognitive Systems Engineering] terminology, it is more important to understand what a joint cognitive system (JCS) does and why it does it, than to explain how it does it. [emphasis in the original]

Erik Hollnagel & David D. Woods, Joint Cognitive Systems: Foundations of Cognitive Systems Engineering, p22

In my previous post, I linked to a famous essay by John Allspaw: The Infinite Hows (or, the Dangers Of The Five Whys). The main thrust of Allspaw’s essay can be summed up in this five-word excerpt:

“Why?” is the wrong question.

As illustrated by the quote from Hollnagel & Woods at the top of this post, it turns out that cognitive systems engineering (CSE) is very big on answering “why” questions. Allspaw’s perspective on incident analysis is deeply influenced by research from cognitive systems engineering. So what’s going on here?

It turns out that the CSE folks are asking different kinds of “why” questions than the root cause analysis (RCA) folks. The RCA folks ask: why did this incident happen? The CSE folks ask: why did the system develop the sorts of behaviors that contributed to the incident?

Those questions may sound similar, but they start from opposite assumptions. The RCA folks start with the assumption that there’s some sort of flaw in the system, a vulnerability that was previously unknown, and then base their analysis on identifying what that vulnerability was.

The CSE folks, on the other hand, start with the assumption that behaviors exhibited by the system developed through adaptation to existing constraints. The “why” question here is “why is this behavior adaptive? What purpose does it serve in the system?” Then they base the analysis on identifying attributes of the system such as constraints and goal conflicts that would explain why this behavior is adaptive.

This is one of the reasons why the CSE folks are so interested in incidents in the first place: incidents can expose the kinds of constraints and conflicts that are part of the context of a system. It’s similar to how psychologists use optical illusions to study the heuristics that the human visual system employs: you look at the circumstances under which a system fails to gain insight into how it normally functions as well as it does.

“Why” questions can be useful! But you’ve got to ask the right ones.

The inevitable double bind

I recently read three COVID-19 news stories. The first two were about large organizations (the FDA, large banks) moving too slowly in order to comply with regulations. The third was about the risks of the FDA moving too quickly.

Whenever an agent is under pressure to act quickly and carefully at the same time, they are faced with a double bind. If they proceed quickly and something goes wrong, they will be faulted for not being careful enough. If they proceed carefully and something goes wrong, they will be faulted for not moving quickly enough.

In hindsight, it’s easy to identify who wasn’t quick enough and who wasn’t careful enough. But if you want to understand how agents make these decisions, you need to understand the multiple pressures they experience, because they are constantly trading these pressures off against one another. You also need to understand what information they had available at the time, as well as their previous experiences. I thought this observation about the behavior of the banks was particularly insightful:

But it does tell a more general story about the big banks, that they have invested so much in at least the formalities of compliance that they have become worse than small banks at making loans to new customers.

Matt Levine

Reactions to previous incidents have unintended consequences for the future. The conclusion to draw here isn’t that “the banks are now overregulated”. Rather, it’s that double binds are unavoidable: we can’t eliminate them by adding or removing regulations. There’s no perfect knob setting where they don’t happen anymore.

Once we accept that double binds are inevitable, we can shift our focus away from just adjusting the knob and towards work that will prepare agents to make more effective decisions when they inevitably encounter the next double bind.

Rebrand: Surfing Complexity

You can’t stop the waves, but you can learn to surf.

Jon Kabat-Zinn

When I started this blog, my primary interests were around software engineering and software engineering research, and that’s what I mostly wrote about. Over time, I became more interested in complex systems that include software, sometimes referred to as socio-technical systems. That attracted me initially to chaos engineering, and, more recently, to learning from incidents and resilience engineering.

To reflect the more recent focus on complex systems, I decided to rebrand this blog Surfing Complexity. The term has two inspirations: the quote from Jon Kabat-Zinn at the top of this post, and the book title Surfing Uncertainty by Andy Clark. I also gave the blog a new domain name: surfingcomplexity.blog.

In my experience, software engineers recognize the challenge of complexity, but their primary strategy for addressing complexity is to try to reduce it (and, when they don’t have the resources to do so, to complain about it). By contrast, the resilience engineering community recognizes that complexity is inevitable in the adaptive universe, and seeks to understand what we can do to navigate complexity more effectively.

While I think that we should strive to reduce complexity where possible, I also believe that most strategies for increasing the robustness or safety of a system will ultimately lead to an increase in complexity. As an example, consider the anti-lock braking system in a modern car. It’s a safety feature, but it clearly increases the complexity of the automobile.

I really like Kabat-Zinn’s surfing metaphor, because it captures the idea that complexity is inevitable: getting rid of it isn’t an option. However, we can get better at dealing with it.

Rehabilitating “you can’t manage what you can’t measure”

There’s a management aphorism that goes “you can’t manage what you can’t measure”. It is … controversial. W. Edwards Deming, for example, famously derided it. But I think there are two ways to interpret this quote, and they have very different takeaways.

One way to read this is to treat the word measure as a synonym for quantify. When John Allspaw rails against aggregate metrics like mean time to resolve (MTTR), he is siding with Deming in criticizing the idea of relying solely on aggregate, quantitative metrics for gaining insight into your system.

But there’s another way to interpret this aphorism, and it depends on an alternate interpretation of the word measure. I think that observing any kind of signal is a type of measurement. For example, if you’re having a conversation with someone, and you notice something in their tone of voice or their facial expression, then you’re engaged in the process of measurement. It’s not quantitative, but it represents information you’ve collected that you didn’t have before.

By generalizing the concept of measurement, I would recast this aphorism as: what you aren’t aware of, you can’t take into account.

This may sound like a banal observation, but the subtext here is “… and there’s a lot you aren’t taking into account.” A lot of what happens in your organization and your system is largely invisible. And that invisible work is what’s keeping things up and running.

The idea that there’s invisible work happening that creates your availability is at the heart of the learning from incidents in software movement. And it isn’t obvious, even though we all experience it directly.

This invisible work is valuable in the sense that it’s contributing to keeping your system healthy. But the fact that it’s invisible is dangerous because it can’t be taken into account when decisions are made that change the system. For example, I’ve seen technological changes that have made it more difficult for the incident management team to diagnose what’s happening in the system. The teams who introduced those changes were not aware of how the folks on the incident management team were doing diagnostic work.

In particular, one of the dangers of an action-item-oriented approach to incident reviews is that you may end up introducing a change to the system that disrupts this invisible work.

Take the time to learn about the work that’s happening that nobody else sees. Because if you don’t see it, you may end up breaking it.