Making peace with the imperfect nature of mental models

We all carry models in our heads about how the world works; we colloquially refer to these as mental models. These models are always incomplete, often stale, and sometimes just plain wrong.

For those of us doing operations work, our mental models include our understanding of how the different parts of the system work. Incorrect mental models are always a factor in incidents: incidents are always surprises, and surprises are always discrepancies between our mental models and reality.

There are two things that are important to remember. First, our mental models are usually good enough for us to do our operations work effectively. Our human brains are actually surprisingly good at enabling us to do this stuff. Second, while a stale mental model is a serious risk, none of us have the time to constantly verify that all of our mental models are up to date. This is the equivalent of popping up an “are you sure?” modal dialog box before taking any action. (“Are you sure that pipeline that always deploys to the test environment still deploys to test first?”)

Instead, because our time and attention are limited, we have to get good at identifying cues that indicate our models have gone stale or are incorrect. Since we won’t always get these cues, our mental models will inevitably go out of date. That’s just part of the job when you work in a dynamic environment. And we all work in dynamic environments.

5 thoughts on “Making peace with the imperfect nature of mental models”

  1. Nice post. I think the case of a stale mental model is the easiest to recover from, provided you have a chronological record of changes to the system (git commit logs, release notes, work logs, ticket history, etc.). The effort needed to “fast forward” your mental model will be proportional to the scale of changes since the last snapshot in your mental model, and you could even do it piecemeal (or via bisect). The potential for surprises is low when you’re starting with an outdated, but correct, mental model.

    The cases where your mental model is incomplete or wrong, though, are much more unpredictable to deal with. It can be hard to understand what’s missing or incorrect in your current mental model. And you might feel the need to recursively explore and/or revise your understanding of the system, which could go on for a long time…

    1. This reminds me of another source of discrepancies: what we believe/say vs what we actually do. While logs and detailed artifacts might help us sort things out, our interpretation of these artifacts is biased by pre-conceived notions and tuned perceptions. We may miss salient clues because we don’t expect them. That is why having some way of getting outside of our pre-conceived notions is important… so that we know how to look at things in a fresh light. Wondering if incident response folks have some heuristics for “jarring” their preconceived models.

      1. The safety science folks call this problem “fixation”, where operators get stuck in an incorrect theory about what’s going on. I’m not aware of heuristics that work in practice for dealing with fixation, but I would love to hear about them!
