The fog of war in Uvalde

The interim report on the shooting in Uvalde, Texas faults the responders for treating the incident as a “barricaded subject” scenario when they should have treated it as an “active shooter” scenario.

Here are some excerpts from the report (emphasis mine):

Instead of continuing to act as if they were addressing a barricaded subject scenario in which responders had time on their side, they should have reassessed the scenario as one involving an active shooter. Correcting this error should have sparked greater urgency to immediately breach the classroom by any possible means, to subdue the attacker, and to deliver immediate aid to surviving victims. Recognition of an active shooter scenario also should have prompted responders to prioritize the rescue of innocent victims over the precious time wasted in a search for door keys and shields to enhance the safety of law enforcement responders.

Interim report, p8

An offsite overall incident commander who properly categorized the crisis as an active shooter scenario should have urged using other secondary means to breach the classroom, such as using a sledgehammer as suggested in active shooter training or entering through the exterior windows.

Interim report, p8

Although the encounter had begun as an “active shooter” scenario, Chief Arredondo testified that he immediately began to think of the attacker as being “cornered” and the situation as being one of a “barricaded subject” where his priority was to protect people in the other classrooms from being victimized by the attacker

Interim report, p52

Here’s how Chief Pete Arredondo described his mental model of the situation in the moment:

We have this guy cornered. We have a group of officers on … the north side, a group of officers on the south side, and we have children now that we know in these other rooms. My thought was: We’re a barrier; get these kids out — not the hallway, because the bullets are flying through the walls, but get them out the wall – out the windows, because I know, on the outside, it’s brick.

***
[T]o me … once he’s … in a room, you know, to me, he’s barricaded in a room. Our thought was: “If he comes out, you know, you eliminate the threat,” correct? And just the thought of other children being in other classrooms, my thought was: “We can’t let him come back out. If he comes back out, we take him out, or we eliminate the threat. Let’s get these children out.”

It goes back to the categorizing. … I couldn’t tell you when — if there was any different kind of categorizing. I just knew that he was cornered. And my thought was: “ … We’re a wall for these kids.” That’s the way I looked at it. “We’re a wall for these kids. We’re not going to let him get to these kids in these classrooms” where … we saw the children.

[W]hen there’s a threat … you have to visibly be able to see the threat. You have to have a target before you engage your firearm. That was just something that’s gone through my head a million times … .[G]etting fired at through the wall … coming from a blind wall, I had no idea what was on the other side of that wall. But … you eliminate the threat when you could see it. … I never saw a threat. I never got to … physically see the threat or the shooter.

Interim report, pp 52–53

The report goes on to say:

Chief Arredondo’s testimony about his immediate perception of the circumstances is consistent with that of the other responders to the extent they uniformly testified that they were unaware of what was taking place behind the doors of Rooms 111 and 112. They obviously were in a school building, during school hours, and the attacker had fired a large number of rounds from inside those rooms. But the responders testified that they heard no screams or cries from within the rooms, and they did not know whether anyone was trapped inside needing rescue or medical attention. Not seeing any injured students during their initial foray into the hallway, Sgt. Coronado testified that he thought that it was probably a “bailout” situation.

Chief Arredondo and other officers contended they were justified in treating the attacker as a “barricaded subject” rather than an “active shooter” because of lack of visual confirmation of injuries or other information.

Interim report, p53

(Aside: A “bailout” situation refers to human traffickers who try to outrun the police. They commonly crash their vehicles and then flee. These bailout situations were so common in Uvalde that they led to alert fatigue(!). See p6 of the report for more details.)


Of course, it’s impossible to know the true state of mind of the officers at the time. And, as the report notes, video camera evidence suggests that officers eventually believed there were people who had been injured by the shooter:

For example, later in the incident, Sgt. Coronado’s body-worn camera footage recorded that somebody asked, at 12:34 p.m., “we don’t know if he has anyone in the room with him, do we?” Chief Arredondo responded, “I think he does. There’s probably some casualties.” Sgt. Coronado agreed, saying “yeah, he does … casualties.” Then at 12:41 p.m.: “Just so you understand, we think there are some injuries in there.”

Interim report, p54, footnote 164

But even the report suggests that the issue was one of fixation, rather than the officer lying about what he believed in the moment.

This “barricaded subject” approach never changed over the course of the incident despite evidence that Chief Arredondo’s perspective evolved to a later understanding that fatalities and injuries within the classrooms were a very strong probability.

Interim report, pp 53–54

My claim here is that we should assume the officer is telling the truth and was acting reasonably if we want to understand how these types of failure modes can happen.

Instead of assuming that Chief Arredondo made a mistake, if we assume he came to a reasonable conclusion in judging the shooter to be a “barricaded subject”, then we can better appreciate how ambiguous incidents are in the moment. To understand the challenges that people like Chief Arredondo faced, we need to put ourselves in his place and imagine what our understanding would be if we only saw the signals that he did.

This isn’t the last time a responder is going to reach the wrong conclusion based on partial information, and then get fixated on it. If we simply label Chief Arredondo as “acting unreasonably” or “being a coward”, then we might feel better when he gets fired, but we won’t get better at these sorts of failure modes. We must assume that a person can act reasonably and still come to the wrong conclusion in order to make progress.

What’s allowed to count as a cause: ALERRT edition

The Advanced Law Enforcement Rapid Response Training (ALERRT) Center, based at Texas State University, trains law enforcement officers on how to deal with active shooter incidents. After the shooting at Uvalde, ALERRT produced an after-action report titled Robb Elementary School Attack Response Assessment and Recommendations.

The “Tactical Assessment” section of the report criticizes the actions of the responding officers. It’s too long to excerpt in this post, but here are some examples:

A reasonable officer would have considered this an active situation and devised a plan to address the suspect. Even if the suspect was no longer firing his weapon, his presence and prior actions were preventing officers from accessing victims in the classroom to render medical aid (ALERRT & FBI, 2020, p. 2-17).

ALERRT report, p17

In a hostage/barricade, officers are taught to utilize the 5 Cs (Contain, Control, Communicate, Call SWAT, Create a Plan; ALERRT & FBI, 2020, pp. 2-17 to 2-19). In this instance, the suspect was contained in rooms 111 and 112. The officers established control in that they slowed down the assault. However, the officers did not establish communication with the suspect. The UCISD PD Chief did request SWAT/tactical teams. SWAT was called, but it takes time for the operators to arrive on scene. In the meantime, it is imperative that an immediate action plan is created. This plan is used if active violence occurs. It appears that the officers did not create an immediate action plan.

ALERRT report, p17

(Note: per the interim report, the officers did try to establish communication with the suspect, but the ALERRT authors weren’t aware of this at the time).

At 11:40:58, the suspect fired one shot. At 11:44:00, the suspect fired another shot, and finally, at 12:21:08, the suspect fired 4 more shots. During each of these instances, the situation had gone active, and the immediate action plan should have been triggered because it was reasonable to believe that people were being killed.

ALERRT report, p18

Additionally, we have noted in this report that it does not appear that effective incident command was established during this event. The lack of effective command likely impaired both the Stop the Killing and Stop the Dying parts of the response.

ALERRT report, p19

The interim report also covers some of this territory in the subsection titled “ALERRT Standard for Active Shooter Training”, which starts on p17.

What struck me after reading the ALERRT report is that there is no mention of the fact that several of the responding police officers had received ALERRT training, including the chief of the Uvalde school district police, Pete Arredondo. From the interim report:

Before joining the Uvalde CISD Police Department, Chief Arredondo received active shooter training from the ALERRT Center, which the FBI has recognized as “the National Standard in Active Shooter Response Training.” Every school district peace officer in Texas must be trained on how to respond in active shooter scenarios. Not all of them get ALERRT training, but Chief Arredondo and other responders at Robb Elementary did.

Interim report, pp 17–18

The ALERRT report discusses how the actions of the officers were contrary to ALERRT training, and that is one potential explanation for why things went badly. But another potential explanation is that the ALERRT training wasn’t good enough to prepare the officers for this situation. For example, perhaps the training doesn’t go into enough detail about the danger of fixation: Chief Arredondo focused on trying to get a key for the door when it wasn’t even clear whether the door was locked. (Does ALERRT train peace officers to diagnose fixation in other responders?)

The interim report gestures in the direction of ALERRT training being inadequate when it comes to checking the locks, although not about the more general problem of fixation.

ALERRT has noted the failure to check the lock in its criticisms. See ALERRT, Robb Elementary School Attack Response Assessment and Recommendations at 18-19 (July 6, 2022). A representative of ALERRT testified before the Committee that the “first rule of breaching” is to check the lock. See Testimony of John Curnutt, ALERRT (July 11, 2022). Unfortunately, ALERRT apparently has neglected to include that “first rule of breaching” in its active-shooter training materials, which includes modules entitled “Closed and Locked Interior Doors” and “Entering Locked Buildings Quickly, Discreetly, and Safely.” See Federal Bureau of Investigation & ALERRT, Active Shooter Response – Level 1, at STU 3-8 – 3-10, 4-20 – 4-25.

Interim report, p64, footnote 206

Now, these criticisms are hindsight-laden, and my goal here isn’t to criticize ALERRT’s training: this isn’t my domain, and I don’t pretend to know how to train officers to deal with active shooter scenarios. Rather, my point is that the folks writing the ALERRT report were never going to consider the possibility that their own training was inadequate. After all, they’re the experts!

ALERRT was recognized as the national standard in active shooter response training by the FBI in 2013. ALERRT’s excellence in training was recognized in 2016 with a Congressional Achievement Award.

More than 200,000 state, local, and tribal first responders (over 140,000 law enforcement) from all 50 states, the District of Columbia, and U.S. territories have received ALERRT training over the last 20 years.

ALERRT training is research based. The ALERRT research team not only evaluates the efficacy of specific response tactics (Blair & Martaindale, 2014; Blair & Martaindale, 2017; Blair, Martaindale, & Nichols, 2014; Blair, Martaindale, & Sandel, 2019; Blair, Nichols, Burns, & Curnutt, 2013) but also has a long, established history of evaluating the outcomes of active shooter events to inform training (Martaindale, 2015; Martaindale & Blair, 2017; Martaindale, Sandel, & Blair, 2017). Specifically, ALERRT has utilized case studies of active shooter events to develop improved curriculum to better prepare first responders to respond to similar situations (Martaindale & Blair, 2019).

For these reasons, ALERRT staff will draw on 20 years of experience training first responders and researching best practices to fulfill the Texas DPS request and objectively evaluate the law enforcement response to the May 24, 2022, attack at Robb Elementary School.

ALERRT report, p1

I think it’s literally inconceivable for the ALERRT staff to consider the inadequacy of their own training curriculum as being a contributor to the incident. It’s a great example of something that isn’t allowed to count as a cause.

I’ll end this blog post with some shade that the interim report threw on the ALERRT report.

The recent ALERRT report states that “[o]nce the officers retreated, they should have quickly made a plan to stop the attacker and gain access to the wounded,” noting “[t]here were several possible plans that could have been implemented.” “Perhaps the simplest plan,” according to ALERRT, “would have been to push the team back down the hallway and attempt to control the classrooms from the windows in the doors.” The report explains the purported simplicity of the plan by noting: “Any officer wearing rifle-rated body armor (e.g., plates) would have assumed the lead as they had an additional level of protection.” ALERRT, Robb Elementary School Attack Response Assessment and Recommendations (July 6, 2022). A problem with ALERRT’s depiction of its “simplest plan” is that no officer present was wearing “rifle-rated body armor (e.g., plates).” The Committee agrees the officers should have attempted to breach the classrooms even without armor, but it is inflammatory and misleading to release to the public a report describing “plans that could have been implemented” that assume the presence of protective equipment that the officers did not have.

Interim report, pp 51–52, footnote 158

Uvalde

Last week, the Investigative Committee on the Robb Elementary shooting (in Uvalde, Texas) released an interim report with their findings. I recommend reading it if you’re interested in incidents, especially section 5, “May 24 Incident & Law Enforcement Response“, which goes into detail on how the police responded.

I was pleasantly surprised to see terms like contributing factors and systemic failures in the report, and not a single reference to root cause. On the other hand, there’s way too much counterfactual reasoning in the report for my taste: there’s an entire subsection with the title “What Didn’t Happen in Those 73 Minutes?” It doesn’t get more counterfactual-y than that. There’s also normative language like “egregious poor decision making.” It’s disappointing, but not surprising, to see this type of language, given the nature of the incident.

Reading the report, I found there was too much I wanted to comment on to fit into one post, and so I’m going to try and write a series of posts instead. I’ve also created a GitHub repo with pointers to various artifacts related to the shooting (reports, images, videos).

Incident response is a team sport

Once upon a time, whenever I was involved in responding to an incident, and a teammate ended up diagnosing the failure mode, I would kick myself afterwards. How come I couldn’t figure out what was wrong? Why hadn’t I thought to do what they had done?

However, after enough exposure to the cognitive systems engineering literature, something finally clicked in my mind. When a group of people respond to an incident, it’s never the responsibility of a single individual to remediate. It can’t be, because we each know our own corners of the system better than our teammates. Instead, it is the responsibility of the group of incident responders as a whole to resolve the incident.

The group of incident responders, that ad-hoc team that forms in the moment, is what’s referred to as a joint cognitive system. It’s the responsibility of the individual responders to coordinate effectively so that the cognitive system can solve the problem. Often that involves dynamically distributing the workload so that individuals can focus on specific tasks.

Resolving incidents is a team effort. Go team!

You’re just going to sit there???

Here’s a little story about something that happened last year.

A paging alert fires for a service that a sibling team manages. I’m the support on-call, meaning that I answer support questions about the delivery engineering tooling. My only role here is to communicate with internal users about the ongoing issue. Since I don’t know this service at all, there isn’t much else for me to do: I’m just a bystander, watching the Slack messages from the sidelines.

The operations on-call acknowledges the page and starts digging to figure out what’s gone wrong. As he investigates, he provides updates about his progress by posting Slack messages in the on-call channel. At one point, he types this message:

Anyway… we’re dead in the water until this figures itself out.

I’m… flabbergasted. He’s just going to sit there and hope that the system becomes healthy again on its own? He’s not even going to try and remediate? Much to my relief, after a few minutes, the service recovered.

Talking to him the next day, I discovered that he had taken a remediation action: he failed over a supporting service from the primary to the secondary. His comment was referring to the fact that the service was going to be down until the failover completed. Once the secondary became the new primary, things went back to normal.

When I looked back at the Slack messages, I noticed that he had written messages to communicate that he was failing over the primary. But he had also mentioned that his initial attempt at failover didn’t work, as the operational UX was misleading. What happened was that I had misinterpreted the Slack message. I thought his attempt to fail over had simply failed entirely, and he was out of ideas.

Communicating effectively over Slack during a high-tempo event like an incident is challenging. It can be especially difficult if you don’t have a prior working relationship with the people in the ad-hoc incident response team, which can happen when an incident spans multiple teams. Getting better at communicating during an incident is a skill, both for individuals and organizations as a whole. It’s one I think we don’t pay enough attention to.

Imagine there’s no human error…

When an incident happens, one of the causes is invariably identified as human error: somebody along the way made a mistake, did something they shouldn’t have done. For example: that engineer shouldn’t have done that clearly risky deployment and then walked away without babysitting it. Labeling an action as human error is an unfortunately effective way of ending an investigation (root cause: human error).

Some folks try to improve on the current status quo by arguing that, since human error is inevitable (people make mistakes!), it should be the beginning of the investigation, rather than the end. I respect this approach, but I’m going to take a more extreme view here: we can gain insight into how incidents happen, even those that involve operator actions as contributing factors, without reference to human error at all.

Since we human beings are physical beings, you can think of us as machines. Specifically, we are machines that make decisions and take action based on those decisions. Now, imagine that every decision we make involves our brain trying to maximize a function: when provided with a set of options, it picks the one that has the largest value. Let’s call this function g, for goodness.

Pushing button A has a higher value of g than pushing button B.

(The neuroscientist Karl Friston has actually proposed something similar as a theory: organisms make decisions to minimize model surprise, a construct that Friston calls free energy.)

In this (admittedly simplistic) model of human behavior, all decision making is based on an evaluation of g. Each person’s g will vary based on their personal history and based on their current context: what they currently see and hear, as well as other factors such as time pressure and conflicting goals. “History” here is very broad, as g will vary based not only on what you’ve learned in the past, but also on physiological factors like how much sleep you had last night and what you ate for breakfast.

Under this paradigm, if one of the contributing factors in an incident was the operator pushing “A” instead of “B”, we ask: “How did the operator’s g function score a higher value for pushing A over B?” There’s no concept of “error” in this model. Instead, we can explore the individual’s context and history to get a better understanding of how their g function valued A over B. We accomplish this by talking to them.
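To make the model concrete, here’s a minimal sketch in Python. Everything in it is made up for illustration (the options, the Context fields, the weights); the structural point is that the decision falls out of a context-dependent scoring function, and there is no “error” branch anywhere in the model.

```python
# A toy sketch of the "goodness function" model of decision making.
# The Context fields and the weights below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Context:
    """A stand-in for history and current context: habits, pressure, fatigue."""
    button_a_looks_like_old_tool: bool  # prior history primes the operator for A
    time_pressure: float                # 0.0 (calm) to 1.0 (paging alert firing)
    slept_well: bool

def g(option: str, ctx: Context) -> float:
    """A made-up goodness function over the available options."""
    score = 0.0
    if option == "push_A":
        if ctx.button_a_looks_like_old_tool:
            score += 2.0            # habit carried over from a previous system
        score += ctx.time_pressure  # under pressure, the familiar option wins
    elif option == "push_B":
        # Careful reading of the unfamiliar option takes energy.
        score += 1.5 if ctx.slept_well else 0.5
    return score

def decide(options: list[str], ctx: Context) -> str:
    """The brain as argmax over g: pick whichever option scores highest."""
    return max(options, key=lambda o: g(o, ctx))

# A tired operator under time pressure, primed by an old tool, pushes A:
ctx = Context(button_a_looks_like_old_tool=True, time_pressure=0.8, slept_well=False)
print(decide(["push_A", "push_B"], ctx))  # push_A
```

Note that nothing in the sketch distinguishes a “correct” decision from a “mistake”: change the context (a well-rested operator, no old habit, no time pressure) and the same machinery picks B instead.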

I think the model above is much more fruitful than the one where we identify errors or mistakes. In this model, we have to confront the context and the history that a person was exposed to, because those are the factors that determine how decisions get made.

The idea of human error is a hard thing to shake. But I think we’d be better off if we abandoned it entirely.

Some additional reading on the idea of human error:

She Blinded Me With Science: A review of Galileo’s Middle Finger

The science we are taught in school is nice and neat. However, the realities of scientific research, like those of all human endeavors, are messy, and have their share of controversies. There are two flavors of scientific controversy. There’s the political type of controversy, where people who are not part of the scientific community feel very strongly about the implications of the scientific theories: think climate change, or the Scopes Trial. Then there are controversies within a scientific community about theories. For example, the theory of plate tectonics was so controversial among geologists when it was proposed that it was considered pseudo-science.

Alice Dreger plants herself firmly in the intersection of political and scientific controversy. The book is a first-hand account of her experiences as an activist among various episodes of controversy. Here she’s defending an anthropologist from false accusations of deliberately harming the native Yanomamö people of South America, there she’s crusading against a medical researcher treating pregnant women with an off-label drug, as part of experimental research, without properly gathering informed consent.

The tragedy is that Dreger, a trained historian, isn’t able to tell a story effectively. Reading the book feels like listening to a teenager recounting interpersonal dramas going on at school. Her style is a strict linear account of the events from her perspective, but that doesn’t help the reader make sense of what’s going on. It’s too much chronology rather than narrative: “this happened, then that happened, then the other thing happened.” She loses the forest for the trees.

The result is that a book about a fascinating topic, scientific controversies that intersect with politics, turns out to be a slog.

Bad Religion: A review of Work Pray Code

When I worked as a professor at the University of Nebraska–Lincoln, a few months after I arrived, the chair of the computer science department asked me during a conversation, “have you found a church community yet?” I had not. I had, however, found a synagogue. The choice wasn’t difficult: there were only two. Nobody asked me a question like that after I moved to San Jose, which describes itself as the heart of Silicon Valley.

Why is Silicon Valley so non-religious? That’s the question sociologist Carolyn Chen seeks to answer here. As a tenured faculty member at UC Berkeley, Chen is a Bay Area resident herself. Like so many of us here, she’s a transplant: she grew up in Pennsylvania and Southern California, and first moved to the area in 2013 to do research on Asian religions in secular spaces.

Chen soon changed the focus of her research from Asian religions to the work culture of tech companies. She observes that people tend to become less religious when they move to the area, and are less engaged in their local communities. Tech work is totalizing, absorbing employees’ entire lives. Tech companies care for many of the physical needs of their employees in a way that companies in other sectors do not. Tech companies provide meditation/mindfulness (the companies use these terms interchangeably) to help their employees stay productive, but it is a version of meditation neutered of its religious, Buddhist roots. Tech companies push up the cost of living, and provide private substitutes for public infrastructure, like shuttle buses.

Chen tries to weave these threads together into a narrative about how work substitutes for religion in the lives of tech workers in Silicon Valley. But the pieces just don’t fit together. Instead, they feel shoehorned in to support her thesis. And that’s a shame, because, as a Silicon Valley tech worker, many of the observations themselves ring true to my personal experience. Unlike Nebraska, Silicon Valley really is a very secular place, so much so that it was a plot point in an episode of HBO’s Silicon Valley. As someone who sends my children to religious school, I’m clearly in the minority at work. My employer provides amenities like free meals and shuttles. They even provide meditation rooms, access to guided meditations provided by the Mental Health Employee Resource Group, and subscriptions to the Headspace meditation app. The sky-high cost of living in Silicon Valley is a real problem for the area.

But Chen isn’t able to make the case that her thesis is the best explanation for this grab bag of observations. And her ultimate conclusion, that tech companies behave more and more like cults, just doesn’t match my own experiences working at a large tech company in Silicon Valley.

Most frustratingly, Chen doesn’t ever seem to ask the question, “are there other domains where some of these observations also hold?” Because so much of the description of the secular and insular nature of Silicon Valley tech workers applies to academics, the culture that Chen herself is immersed in!

Take this excerpt from Chen:

Workplaces are like big and powerful magnets that attract the energy of individuals away from weaker magnets such as families, religious congregations, neighborhoods, and civic associations—institutions that we typically associate with “life” in the “work-life” binary. The magnets don’t “rob” or “extract”—words that we use to describe labor exploitation. Instead they attract the filings, monopolizing human energy by exerting an attractive rather than extractive force. By creating workplaces that meet all of life’s needs, tech companies attract the energy and devotion people would otherwise devote to other social institutions, ones that, traditionally and historically, have been sources of life fulfillment.

Work Pray Code, p197

Compare this to an excerpt from a very different book: Robert Sommer’s sardonic 1963 book Expertland (sadly, now out of print), which describes itself as “an unrestricted inside view of the world of scientists, professors, consultants, journals, and foundations, with particular attention to the quaint customs, distinctive dilemmas, and perilous prospects”.

Experts know very few real people. Except for several childhood friends or close relatives, the expert does not know anybody who drives a truck, runs a grocery store, or is vice-president of the local Chamber of Commerce. His only connection with these people is in some kind of service relationship; they are not his friends, colleagues, or associates. The expert feels completely out of place at a Lions or Fish and Game meeting. If he is compelled to attend such gatherings, he immediately gravitates to any other citizen of Expertland who is present… He has no roots, no firm allegiances, and nothing to gain or lose in local elections… Because he doesn’t vote in local elections, join service clubs, or own the house he lives in, outsiders often feel that the expert is not a good citizen.

Expertland pp 2-3

Chen acknowledges that work is taking over the lives of all high-skilled professionals, not just tech workers. But I found work-life balance to be much worse in academia than at a Silicon Valley tech company! To borrow a phrase from the New Testament: “And why beholdest thou the mote that is in thy brother’s eye, but considerest not the beam that is in thine own eye?”

We value possession of experience, but not its acquisition

Imagine you’re being interviewed for a software engineering position, and the interviewer asks you: “Can you provide me with a list of the work items that you would do if you were hired here?” This is how the action item approach to incident retrospectives feels to me.

We don’t hire people based on their ability to come up with a set of work items. We’re hiring them for their judgment, their ability to make good engineering decisions and tradeoffs based on the problems that they will encounter at the company. In the interview process, we try to assess their expertise, which we assume they have developed based on their previous work experience.

Incidents provide us with excellent learning opportunities because they confront us with surprises. If we examine an incident in detail, we can learn something about our system behavior that we didn’t know before.

Yet, while we recognize the value of experienced candidates when we do hiring, we don’t seem to recognize the value of increasing the experience of our current employees. Incidents are a visceral type of experience, and reflecting on these sorts of experiences is what increases our expertise. But you have to reflect on them to maximize the value, and you have to share this information out to the organization so that it isn’t just the incident responders that can benefit from the experience.

To me, learning from incidents is about increasing the expertise of an organization by reflecting on and sharing out the experiences of surprising operational events. Action items are a dime a dozen. What I care about is improving the organization’s ability to engineer software.

Software engineering in-the-large: the coordination challenge

Back when I was an engineering student, I wanted to know “How do the big companies develop software? How does it happen in the real world?”

Now that I work at a company that has to do large-scale software development, I understand better why it’s not something you can really teach effectively in a university setting. It’s not that companies doing large-scale software development are somehow better at writing software than companies that work on smaller-scale software projects. It’s that large-scale projects face challenges that small-scale projects don’t.

The biggest challenge at large-scale is coordination. My employer provides a single service, which means that, in theory, any project that anyone is working on inside of the company could potentially impact what anybody else is working on. In my specific case, I work on delivery tools, so we might be called upon to support some new delivery workflow.

You can take a top-down, command-and-control approach to the problem, where the people at the top filter all of the incoming information down to just what they need, and coordinate everyone hierarchically. However, this structure isn’t effective in dynamic environments: as the facts on the ground change, it takes too long for information to work its way up the hierarchy, for the leadership to adapt, and for new orders to flow back down.

You can take a bottom-up approach to the problem, where you have a collection of teams that work autonomously. But the challenge there is getting them aligned. In theory, you hire people with good judgment and provide them with the right context. The problem is that there’s too much context! You can’t just firehose all of the available information at everyone; that doesn’t scale, because everyone would spend all of their time reading docs. “How do you get the information into the heads of the people who need it?” becomes the grand challenge in this setting.

It’s hard to convey the nature of this problem in a university classroom to someone who has never worked in a setting like this. The flurry of memos and planning documents, the misunderstandings, the sync meetings, the work towards alignment, the “One X” initiatives: these are all things that I had to experience viscerally, first-hand, to really get a sense of the nature of the problem.