Imagine there’s no human error…

When an incident happens, one of the causes is invariably identified as human error: somebody along the way made a mistake, did something they shouldn’t have done. For example: that engineer shouldn’t have done that clearly risky deployment and then walked away without babysitting it. Labeling an action as human error is an unfortunately effective way of ending an investigation (root cause: human error).

Some folks try to improve on the status quo by arguing that, since human error is inevitable (people make mistakes!), it should be the beginning of the investigation rather than the end. I respect this approach, but I’m going to take a more extreme view here: we can gain insight into how incidents happen, even those that involve operator actions as contributing factors, without any reference to human error at all.

Since we human beings are physical beings, you can think of us as machines. Specifically, we are machines that make decisions and take action based on those decisions. Now, imagine that every decision we make involves our brain trying to maximize a function: when provided with a set of options, it picks the one that has the largest value. Let’s call this function g, for goodness.

Pushing button A has a higher value of g than pushing button B.

(The neuroscientist Karl Friston has actually proposed something similar as a theory: organisms make decisions to minimize model surprise, a construct that Friston calls free energy).

In this (admittedly simplistic) model of human behavior, all decision making is based on an evaluation of g. Each person’s g will vary based on their personal history and based on their current context: what they currently see and hear, as well as other factors such as time pressure and conflicting goals. “History” here is very broad, as g will vary based not only on what you’ve learned in the past, but also on physiological factors like how much sleep you had last night and what you ate for breakfast.
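
To make this concrete, here’s a minimal Python sketch of the model. Everything in it is invented for illustration: the Context fields, the weights inside goodness, and the decide helper are assumptions, not anything from this post or from Friston’s theory.

```python
# A toy version of the "goodness function" model of decision making.
# All names and weights here are made up for illustration.
from dataclasses import dataclass


@dataclass
class Context:
    """What the person brings to the decision: current context plus history."""
    time_pressure: float            # 0.0 (none) to 1.0 (extreme)
    deploys_always_went_fine: bool  # what past experience has taught them
    hours_of_sleep: float


def goodness(option: str, ctx: Context) -> float:
    """A stand-in for g: how attractive an option looks given this context."""
    score = 0.0
    if option == "deploy and walk away":
        score += 2.0 * ctx.time_pressure          # finishing quickly looks valuable under pressure
        if ctx.deploys_always_went_fine:
            score += 1.0                          # history says this kind of deploy is routine
    elif option == "deploy and babysit it":
        score += 1.5 * (1.0 - ctx.time_pressure)  # watching looks valuable when unhurried
        score += 0.1 * ctx.hours_of_sleep         # and when well rested
    return score


def decide(options: list[str], ctx: Context) -> str:
    """Pick whichever option scores highest -- there is no 'error' branch."""
    return max(options, key=lambda option: goodness(option, ctx))


ctx = Context(time_pressure=0.9, deploys_always_went_fine=True, hours_of_sleep=5.0)
print(decide(["deploy and walk away", "deploy and babysit it"], ctx))
# -> "deploy and walk away"
```

The point of the sketch isn’t the particular weights; it’s that the question “why did they do the wrong thing?” turns into “what context and history made this option score highest?”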

Under this paradigm, if one of the contributing factors in an incident was the operator pushing “A” instead of “B”, we ask: how did the operator’s g function score a higher value for pushing A than for pushing B? There’s no concept of “error” in this model. Instead, we can explore the individual’s context and history to get a better understanding of how their g function valued A over B. We accomplish this by talking to them.

I think the model above is much more fruitful than the one where we identify errors or mistakes. In this model, we have to confront the context and the history that a person was exposed to, because those are the factors that determine how decisions get made.

The idea of human error is a hard thing to shake. But I think we’d be better off if we abandoned it entirely.


Comments

  1. I recall a time tracking software that a company required all its employees to use to book time to projects. After a few months of getting used to it, performance topped out at an accuracy of 93%. There were three key problems in my opinion. The first was that I couldn’t just copy the week before: while my projects might change every now and then, for the most part this week was the same as last week until something changed, so I wanted to just replicate my week. The second was that finding projects was really bad; there wasn’t enough information presented about them to know for sure you had the right one. The final problem was that it wasn’t clear what constituted a week’s worth of hours under the contract you were under. I think it differed from the contracted hours, so we had to “lie” to the system to complete a full week.

    Human resources sent increasingly agitated emails and threats to employees about all the errors people were making, even starting various sanctions against those who made mistakes. Yet alongside this, numerous people had made the same observations about why the software made the right decisions difficult. After about a year an update arrived from the manufacturer taking into account the top feedback, and accuracy shot up to 99%. A few simple UI changes and features for storing projects to make the process easier dramatically improved accuracy.

    I also used to work in aerospace, and we took errors as a failure of the system. By taking on that mindset, and with each error finding all other errors of its “type”, we would outperform typical software bug rates to about 1/100th. Then the testing process removed the rest, also using a much more extensive system and multiple independent teams. When you adopt this mindset of errors being about how a human machine made the wrong decision, you can make dramatic progress on low-hanging fruit to improve accuracy, in my experience.

  2. I ended up providing preschool for my twins at home (in those pre-COVID years), and of the approaches I scanned, something like this ended up being what I used. I provided a rich environment (materials, time) for the kids to use + I observed how they used it = I made adjustments accordingly, NOT so much to the kids but to the environment.

    Maybe it was because I’m lazy (or was outnumbered by creatures I couldn’t reliably reason with), but changing stuff is WAY faster and easier than changing people! And I could repeat the Observation and Adjustment step as necessary as they grew and developed. I’m biased, but I think they’re turning out well despite my lack of early childhood education training. 🙂

  3. James Reason wrote extensively about human error, and I tend to agree that it is a thing and that there are justifications for why it happened that way.

    But there is a complementary view, in my opinion, that the system could prevent errors from happening, and yeah, we should focus on that, but sometimes a quick win would be some evolution in the process or in the rules to prevent that from happening.

  4. Yes, “Human Error” by Reason is a great little book, as is Rasmussen et al.’s “Cognitive Systems Engineering” (a worthwhile niche to investigate).
    We are loath to accept that we aren’t as infallible as we think we are, nor as all-seeing.

    Software is really bad for encouraging a simplistic boolean view of the world, one bit at a time. “Hello world” shouldn’t be a statement but an engagement 😉

    I blame the moron in the mirror.
