
“Flipping the bozo bit” is an expression from the software world. Think about a time when you reached a point where you simply stopped respecting the opinion of a particular person, most likely a co-worker. From that point on, you disregarded what they said. That’s flipping the bozo bit: this person isn’t worth listening to; they’re a bozo.
There’s a related phenomenon, where we hear an anecdote about some bad outcome that happened to someone else, and our conclusion is that this outcome occurred because, well, that person is a bozo. I’m writing, of course, about incidents. You’ve seen this happen, right? An incident happens, the details of the incident get passed around, and somebody makes a comment like, “how could they have [not] done X?” The subtext is “what a bunch of bozos!”
This is on my mind because of the latest AI-related incident that befell PocketOS. You can read about it in the Twitter post written by the PocketOS founder, Jer Crane. The post is titled An AI Agent Just Destroyed Our Production Data. It Confessed in Writing. Unsurprisingly, this post got a lot of online attention. I saw a lot of “wow, was this guy ever a bozo” reactions to this story. I want to talk about why this reaction is counter-productive. I also want to call out the technical term for this phenomenon, which is a cousin of flipping the bozo bit. It’s called distancing through differencing.
The term distancing through differencing was introduced by the American resilience engineering researchers Richard Cook and David Woods in their 2006 paper: Distancing Through Differencing: An Obstacle to Organizational Learning Following Accidents. Technically, it’s a book chapter, from Resilience Engineering: Concepts and Precepts. It’s very readable, and I recommend it. All of the quoted text below is from that paper.
By focusing on the differences, they see no lessons for their own operation and practices.
When people hear about an incident and respond by concluding “an incident like that would never happen to us; that happened to those workers over there because they are clearly not as careful as we are,” that’s distancing through differencing in action.
Overall they decided the incident “couldn’t happen here”.
The Cook and Woods paper illustrates the phenomenon with a case study of a chemical fire that broke out at an American manufacturing plant. There had been a similar fire that had occurred previously at the same company, at an overseas plant. The American employees knew about the previous fire, but they had concluded that there was nothing to learn from that other fire, as that sort of accident couldn’t happen to them in the U.S. After all, those overseas workers were less skilled, less motivated, and less careful. In short, those overseas workers were perceived as different.
Ironically, after the chemical fire at the American plant, other workers at that very same plant also exhibited distancing through differencing.
Workers in the same plant, working in the same area in which the fire occurred but on a different shift, attributed the fire to lower skills of the workers on the other shift.
Cook and Woods note that when an incident happens to them, our tendency to focus on the differences between us and them leads us to miss aspects of the system that we actually have in common. Focusing on the differences seduces us into believing there’s nothing for us to learn, and so we miss the opportunity to learn from their experiences.
do not discard other events because they appear on the surface to be dissimilar. At some level of analysis, all events are unique; while at other levels of analysis, they reveal common patterns.
Now let’s circle back to the PocketOS AI-related incident. If we come to the conclusion that PocketOS employees were simply using AI irresponsibly, and that we are more responsible than that, we learn nothing from the experience. I was heartened to see that Railway, the vendor used by PocketOS that exposed the delete API, has made changes to the overall system to improve safety; see their post: Your AI wants to nuke your database. Guardrails fix that.
Stepping back, this isn’t the last AI-related incident we’re going to see in our industry, not by a long shot. The next time you read one of those, if your reaction is “they should have known not to do X”, then you’ve fallen into the distancing through differencing trap.
(As an aside, “they should have known…” is an incoherent sentence. It’s one thing if somebody deliberately took on excessive risk. But it’s another thing if they unknowingly took on excessive risk. How can you blame a person for not knowing something?)
When this process of learning moved past the obstacle of distancing through differencing in this case, the organizational response changed.
After all, there but for the grace of God go we all.