Negotiability is non-negotiable

I’ve just started reading Trust in Numbers: The Pursuit of Objectivity in Science and Public Life by the historian of science Theodore Porter, and so far it’s fantastic. The first chapter discusses how, in the days before the metric system, even units of measure were negotiable. Porter gives two examples. The first comes from Poland, where the size of a unit of land would sometimes vary with the quality of the soil, so that the unit equalized the amount of productive land rather than simply its area.

The other example Porter gives is about the reference vessel that towns would use as their local definition of a bushel. You might think that defining a specific vessel as the bushel would give a fixed definition, but there was still flexibility. The amount of, say, grain or oats that could be poured into the vessel could vary depending on how it was poured (e.g., whether the material was “flattened” or “heaped”, or the height it was poured from). This allowed people to adjust the actual volume that constituted a bushel based on factors such as quality.

We humans have to build systems in order to scale up certain kinds of work: we couldn’t have large-scale organizations like governments and corporations without the technologies of bureaucracies and other forms of standardization. This is the sort of thing that James Scott calls legibility. But these formal systems demand fixed rules, which can never fully accommodate the messiness that comes with operating in the real world. And so, the people at the sharp end, the folks on the ground who are doing the actual work of making the system go, have to deal with this messiness that the system’s designers did not account for.

For these people to be able to do their work, there needs to be some give in the system. People need to be able to exercise judgment, to have some ability to negotiate the boundaries of the system’s rules. Every human system is like this: the actors must have some amount of flexibility, otherwise the rules will prevent people from achieving the goals of the system. In other words, negotiability in a system is non-negotiable.

You should’ve known how to build a non-causal system

Reporting an outcome’s occurrence consistently increases its perceived likelihood and alters the judged relevance of data describing the situation preceding the event.

Baruch Fischhoff, “Hindsight ≠ foresight: the effect of outcome knowledge on judgment under uncertainty”, Journal of Experimental Psychology: Human Perception and Performance, 1975, Volume 1, pages 288–299

In my last blog post, I wrote about how computer scientists use execution histories to reason about consistency properties of distributed data structures. One class of consistency properties is known as causal consistency. In my post, I used an example that shows a violation of causal consistency, specifically of a property called writes follow reads.

Here’s the example I used, with timestamps added (note: this is a single-process example, there’s no multi-process concurrency here).

t=0: q.get() -> []
t=1: q.get() -> ["A: Hello"]
t=2: q.add("A: Hello")

Now, imagine this conversation between two engineers who are discussing this queue execution history.


A: “There’s something wrong with the queue behavior.”

B: “What do you mean?”

A: “Well, the queue was clearly empty at t=0, and then it had a value at t=1, even though there was no write.”

B: “Yes, there was, at t=2. That write is the reason why the queue read ["A: Hello"] at t=1.”


We would not accept the answer given by B, that the read seen at t=1 was due to the write that happened at t=2. The reason we would reject it is that it violates our notion of causality: the current output of a system cannot depend on its future inputs!

It’s not that we are opposed to the idea of non-causal systems in principle. We’d love to be able to build systems that can see into the future! It’s that such systems are not physically realizable, even though we can build mathematical models of their behavior. If you build a system whose execution histories violate causal consistency, you will be admonished by distributed systems engineers: something has gone wrong somewhere, because that behavior should not be possible. (In practice, what’s happened is that events have gotten reordered, rather than an engineer having accidentally built a system that can see into the future.)
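To make this concrete, here is a minimal sketch of how you might mechanically flag this specific anomaly in a recorded history: a read returning a value that no earlier write produced. The (timestamp, op, value) event format and the check_causality helper below are purely illustrative assumptions on my part, not taken from my earlier post or from any particular verification tool; real consistency checkers are considerably more involved.

# Sketch: flag reads that return values no prior write has produced.
def check_causality(history):
    written = set()    # values written so far
    violations = []
    for t, op, value in history:
        if op == "add":
            written.add(value)
        elif op == "get":
            for v in value:
                if v not in written:
                    violations.append(f"t={t}: read {v!r} before any write produced it")
    return violations

# The execution history from the example above.
history = [
    (0, "get", []),
    (1, "get", ["A: Hello"]),
    (2, "add", "A: Hello"),
]

for violation in check_causality(history):
    print(violation)
# -> t=1: read 'A: Hello' before any write produced it

Running this against the history above flags the read at t=1, which is exactly the behavior engineer A is objecting to.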

In the wake of an incident, we often experience the exact opposite problem: being admonished for failing to be part of a non-causal system. What happens is that someone will observe that the failure mode was actually foreseeable, and that the engineers erred by failing to anticipate it. Invariably, the phrase “should have known” will be used to describe this lack of foresight.

The problem is, this type of observation is only possible with knowledge of how things actually turned out. The observer believes that the outcome was foreseeable because they know that it happened. When you hear someone say “they should have known that…”, what that person is in fact saying is “the system’s behavior in the past failed to take into account future events”.

This sort of observation, while absurd, is seductive. And it happens often enough that researchers have a name for it: hindsight bias, or, alternatively, creeping determinism. The paper by the psychologist Baruch Fischhoff quoted at the top of this post documents a controlled experiment that demonstrates the phenomenon. However, you don’t need to look at the research literature to see this effect. Sadly, it’s all around us.

So, whenever you hear “X should have”, that should raise a red flag, because it’s an implicit claim that it’s possible to build non-causal systems. The distributed systems folks are right to insist on causal consistency. To berate someone for not building an impossible system is pure folly.