For want of a dollar

Back in August, The New York Times ran a profile of Morris Chang, the founder of TSMC.

It’s hard to overstate the role that this Taiwan-based semiconductor company plays in the industry. If you search for articles about it, you’ll see headlines like “TSMC: The Most Important Tech Company You Never Heard Of” and “TSMC: how a Taiwanese chipmaker became a linchpin of the global economy.”

What struck me in the NY Times article was this anecdote about Chang’s search for a job after he failed out of a Ph.D. program at MIT in 1955 (emphasis mine):

Two of the best offers arrived from Ford Motor Company and Sylvania, a lesser-known electronics firm. Ford offered Mr. Chang $479 a month for a job at its research and development center in Detroit. Though charmed by the company’s recruiters, Mr. Chang was surprised to find the offer was $1 less than the $480 a month that Sylvania offered.

When he called Ford to ask for a matching offer, the recruiter, who had previously been kind, turned hostile and told him he would not get a cent more. Mr. Chang took the engineering job with Sylvania. There, he learned about transistors, the microchip’s most basic component.

“That was the start of my semiconductor career,” he said. “In retrospect, it was a damn good thing.”

The course of history changed because an internal recruiter at Ford refused to offer him an additional dollar a month ($11.46 in 2023 dollars) to match a competing offer!

This is the sort of thing that historians call contingency.

Missing the forest for the trees: the component substitution fallacy

Here’s a brief excerpt from a talk by David Woods on what he calls the component substitution fallacy (emphasis mine):

Everybody is continuing to commit the component substitution fallacy.

Now, remember, everything has finite resources, and you have to make trade-offs. You’re under resource pressure, you’re under profitability pressure, you’re under schedule pressure. Those are real, they never go to zero.

So, as you develop things, you make trade offs, you prioritize some things over other things. What that means is that when a problem happens, it will reveal component or subsystem weaknesses. The trade offs and assumptions and resource decisions you made guarantee there are component weaknesses. We can’t afford to perfect all components.

Yes, improving them is great and that can be a lesson afterwards, but if you substitute component weaknesses for the systems-level understanding of what was driving the event … at a more fundamental level of understanding, you’re missing the real lessons.

Seeing component weaknesses is a nice way to block seeing the system properties, especially because this justifies a minimal response and avoids any struggle that systemic changes require.

Woods on Shock and Resilience (25:04 mark)

Whenever an incident happens, we’re always able to point to different components in our system and say “there was the problem!” There was a microservice that didn’t handle a certain type of error gracefully, or there was bad data that had somehow gotten past our validation checks, or a particular cluster was under-resourced because it hadn’t been configured properly, and so on.

These are real issues that manifested as an outage, and they are worth spending the time to identify and follow up on. But these problems in isolation never tell the whole story of how the incident actually happened. As Woods explains in the excerpt from his talk above, because of the constraints we work under, we simply don’t have the time to harden the software we work on to the point where these problems don’t happen anymore. It’s just too expensive. And so, we make tradeoffs, we make judgments about where to best spend our time as we build, test, and roll out our stuff. The riskier we perceive a change to be, the more effort we’ll spend on validating and rolling it out.

And so, if we focus only on issues with individual components, there’s so much we miss about the nature of failure in our systems. We miss looking at the unexpected interactions between the components that enabled the failure to happen. We miss how the organization’s prioritization decisions enabled the incident in the first place. We also don’t ask questions like “if we are going to do follow-up work to fix the component problems revealed by this incident, what are the things that we won’t be doing because we’re prioritizing this instead?” or “what new types of unexpected interactions might we be creating by making these changes?” Not to mention incident-handling questions like “how did we figure out something was wrong here?”

In the wake of an incident, if we focus only on the weaknesses of individual components, then we won’t see the systemic issues. And it’s the systemic issues that will continue to bite us long after we’ve implemented all of those follow-up action items. We’ll never see the forest for the trees.

The downside of writing things down

Starting after World War II, the idea was culture is accelerating. Like the idea of an accelerated culture was just central to everything. I feel like I wrote about this in the nineties as a journalist constantly. And the internet seemed like, this is gonna be the ultimate accelerant of this. Like, nothing is going to accelerate the acceleration of culture like this mode of communication. Then when it became ubiquitous, it sort of stopped everything, or made it so difficult to get beyond the present moment in a creative way.

Chuck Klosterman, interviewed on the Longform Podcast

We software developers are infamous for our documentation deficiencies: the eternal lament is that we never write enough stuff down. If you join a new team, you will inevitably discover that, even if some important information is written down, there’s also a lot of important information that is tacit knowledge of the team, passed down as what’s sometimes referred to as tribal lore.

But writing things down has a cost beyond the time and effort required to do the writing: written documents are durable, which means that they’re harder to change. This durability is a strength of documentation, but it’s also a weakness. Writing things down has a tendency to ossify the content, because a written document is much more expensive to update than tacit knowledge is. Tacit knowledge is much more fluid: it adapts to changing circumstances far more quickly and easily than written documentation does, as anybody who has dealt with out-of-date written procedures can attest.

Software engineering in-the-large: the coordination challenge

Back when I was an engineering student, I wanted to know “How do the big companies develop software? How does it happen in the real world?”

Now that I work at a company that has to do large-scale software development, I understand better why it’s not something you can really teach effectively in a university setting. It’s not that companies doing large-scale software development are somehow better at writing software than companies that work on smaller-scale software projects. It’s that large-scale projects face challenges that small-scale projects don’t.

The biggest challenge at large scale is coordination. My employer provides a single service, which means that, in theory, any project that anyone is working on inside the company could potentially impact what anybody else is working on. In my specific case, I work on delivery tools, so we might be called upon to support some new delivery workflow.

You can take a top-down, command-and-control approach to the problem, where the people at the top filter all of the incoming information down to just what they need and coordinate everyone hierarchically. However, this structure isn’t effective in dynamic environments: as the facts on the ground change, it takes too long for information to work its way up the hierarchy, for plans to adapt, and for new orders to flow back down.

You can take a bottom-up approach to the problem, where you have a collection of teams that work autonomously. But the challenge there is getting them aligned. In theory, you hire people with good judgment and provide them with the right context. But the problem is that there’s too much context! You can’t just firehose all of the available information at everyone; that doesn’t scale, because everyone will spend all of their time reading docs. “How do you get the information into the heads of the people who need it?” becomes the grand challenge in this context.

It’s hard to convey the nature of this problem in a university classroom to students who have never worked in a setting like this. The flurry of memos and planning documents, the misunderstandings, the sync meetings, the work towards alignment, the “One X” initiatives: these are all things that I had to experience viscerally, first-hand, to really get a sense of the nature of the problem.

Software Misadventures Podcast

I was recently a guest on the Software Misadventures Podcast.


StaffEng podcast

I had fun being a guest on the StaffEng podcast.
