A few days ago, David Heinemeier Hansson (who generally goes by DHH) wrote a blog post titled Programmers should stop celebrating incompetence.
I disagreed with the post, but for different reasons than most of the other responses I saw on Twitter.
Here are a couple of lines from the post:
You can’t become the I HAVE NO IDEA WHAT I’M DOING dog as a professional identity. Don’t embrace being a copy-pasta programmer whose chief skill is looking up shit on the internet.
From the Twitter reactions, it seems like people thought DHH was saying, “you shouldn’t be looking things up on the internet and copy-pasting code”. But I think that gets the thrust of his argument wrong. This wasn’t a diatribe against Stack Overflow; it was about how programmers see themselves and their work.
DHH was criticizing a sort of anti-intellectual mode of expression. The attitude he was criticizing reminds me of an essay I once read (I can’t remember the source or author; it might have been Paul Lockhart) in which a mathematics(?) professor was talking to some colleagues from the humanities department. When the math professor mentioned their field, a humanities professor said, “Oh, I was never any good at math”, and it came off almost as a point of pride.
Where I disagree with DHH is that I don’t see this type of anti-intellectualism in our field at all. I don’t see “LOL, I don’t know what I’m doing” on people’s LinkedIn profiles or in their resumes; I don’t hear it in interviews; I don’t see it in pull request comments; I don’t hear it in technical meetings. I don’t think it exists in our field.
You can see our field’s professionalism in the criticisms of technical interviews that involve live coding. You don’t hear programmers criticizing them by saying, “LOL, actually, nobody knows how to do this.” What you hear instead is, “these interviews don’t effectively evaluate my actual skills as a software developer”.
So, what’s going on here? What led DHH astray? Where does the dog meme come from?
To explain my theory, I’m going to use this recent blog post by Diomidis Spinellis, called Rather than alchemy, methodical troubleshooting:
Spinellis is a software engineering professor who has written numerous books for practitioners and has contributed to numerous open source projects (including the FreeBSD kernel). He is as professional as they come.
His blog post is about his struggles getting a React Native project to build in Xcode, including trying (in vain) various bits of advice he found through Googling. Spinellis actually feels bad about his initial approach:
Although advice from the web can often help us solve tough problems in seconds, as the author of the book Effective Debugging, I felt ashamed of wasting time by following increasingly nonsensical advice.
I bring this up not to pile onto Spinellis, but to point out that the surface area of the software world is vast, so vast that even the most professional software engineer will encounter struggles, will hit issues outside of their expertise.
(As an aside: note that Spinellis does not solve the problem by developing a deep understanding of the failure mode, but instead by systematically eliminating the differences between a succeeding build and a failing one.)
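That difference-elimination strategy amounts to a bisection over the settings that differ between the working build and the failing one. Here is a minimal sketch of the general idea, not Spinellis’s actual procedure: the dictionary-of-settings representation and the build_succeeds callback are stand-ins I am assuming, and the sketch presumes a single difference is to blame.

```python
# Toy sketch: bisect over the settings that differ between a good and a bad
# build configuration to find the one difference that breaks the build.
# Assumes exactly one culprit and a caller-supplied build_succeeds() check.
def find_culprit(good: dict, bad: dict, build_succeeds) -> str:
    """Binary-search for the single setting whose 'bad' value breaks the build."""
    diffs = sorted(k for k in set(good) | set(bad) if good.get(k) != bad.get(k))
    while len(diffs) > 1:
        half = diffs[: len(diffs) // 2]
        trial = {**good, **{k: bad.get(k) for k in half}}  # apply half the differences
        if build_succeeds(trial):
            diffs = diffs[len(half):]  # culprit is among the untouched differences
        else:
            diffs = half               # culprit is among the applied differences
    return diffs[0]
```

Delta debugging generalizes this to interacting differences, but even this one-culprit version captures what makes the approach methodical: each trial build rules half of the remaining differences in or out.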
In the book Designing Engineers, Louis Bucciarelli notes that Murphy’s Law and the horror stories told by engineers are symptoms of the dissonance between the certainty of engineering models and the uncertainty of reality. I think the dog meme is another such symptom. It uses humor to help us deal with the fact that, no matter how skilled we become in our profession as software engineers, we will always encounter problems that lie beyond our expertise.
To put it another way: the dog meme is a coping mechanism for professionals in dealing with a domain that will always throw problems at them that push them beyond their local knowledge. It doesn’t indicate a lack of professionalism. Instead, it calls attention to the ironies of professionalism in software engineering. Even the best software engineers still get relegated to Googling incomprehensible error messages.
A potential source of the frustration that led to the original post (“Programmers should stop celebrating incompetence”) – or at least of how it resonated with me – is junior/senior or mentee/mentor communication in which the junior/mentee expresses uncertainty about their expertise and knowledge and, instead of acknowledging the uncertainty and suggesting ways to improve it, the senior responds with “oh, I’m just copy-pasting from Stack Overflow myself all the time”, which puts the other side at ease but does not promote learning. I’ve observed similar behavior several times, but I still disagree that it leads to incompetence – perhaps only in the extreme case where that is the only advice the mentor ever gives and the mentee is comfortable with that, but in my opinion that is never really the case.
Several thoughts.
I’m less interested in the purely programming aspect than in the theory that can be applied to all expert occupations. My interest in the broader programming situation is mainly in knowing that I am not yet good at programming, and that the specialties of programming are very wide and very difficult to navigate. I know roughly what specialty I am going to attempt, why it interests me, and that I could stand to fill in a lot of academic theory and hands-on practice that I am currently missing.
The broader set of efforts to address gatekeeping and implement inclusion is clearly a bit misguided. Human processes are treated as if they were the same as manufacturing processes, and as if they could be addressed by the same tools, or by worse versions of the same tools.

If your process is several steps of forging a steel billet, with some form of inspection or measurement at each step, pulling the defective pieces is the least valuable way of using those measurements. You can estimate the state of the billet from the measurements, and use those state estimates to understand the steps of your process better. If you understand the steps, you can adjust them so that if a non-defective billet goes in, it has a low chance of being ruined at each step.

The fundamental problem with applying these methods to humans is that the human ‘state space’ is bigger than most widget ‘state spaces’, and that easily scaled measurements tell us relatively little about it. Many of these human ‘process improvements’ assume the results of earlier steps, and are a way for administrators to appear to be addressing problems, but blindly and by rote, such that they may make the situation worse. Their supervisors will punish them today for doing nothing, and this is something, so even if they are aware they may be making things worse, they may go forward anyway, hoping to retire before it becomes clear that things have gotten worse.
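To make the billet analogy concrete, here is a toy simulation contrasting the two uses of the same measurements: counting rejects at the end versus estimating where in the process the damage is introduced. Every detail here is invented for illustration: the step names, noise levels, and thresholds are assumptions, not data from any real process.

```python
# Toy model of a multi-step forging process with a noisy in-process
# measurement after each step. One step secretly introduces a systematic
# flaw. All numbers are made up for illustration.
import random

STEPS = ["rough_forge", "draw_out", "finish_forge"]
TRUE_BIAS = {"rough_forge": 0.0, "draw_out": 0.8, "finish_forge": 0.0}  # hidden flaw

def run_billet():
    """Return the measured cumulative deviation after each step for one billet."""
    deviation, readings = 0.0, {}
    for step in STEPS:
        deviation += TRUE_BIAS[step] + random.gauss(0, 0.3)  # real effect of the step
        readings[step] = deviation + random.gauss(0, 0.1)    # noisy measurement
    return readings

billets = [run_billet() for _ in range(1000)]

# Use 1: pull the defective pieces -- tells you how many you lost, not why.
scrapped = sum(1 for b in billets if abs(b["finish_forge"]) > 1.0)
print(f"scrapped {scrapped} of {len(billets)} billets")

# Use 2: estimate the deviation each step adds -- points at the step to adjust.
previous = [None] + STEPS[:-1]
for step, prev in zip(STEPS, previous):
    mean_delta = sum(b[step] - (b[prev] if prev else 0.0) for b in billets) / len(billets)
    print(f"{step}: mean deviation added {mean_delta:+.2f}")
```

The second use only works because the toy measurements actually pin down the billet’s state; the commenter’s objection is that cheap measurements of humans do not, so the analogous ‘process improvements’ amount to the first use dressed up as the second.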
Long-term education to train people to perform certain tasks as adults has a very long lead time. Furthermore, the economic conditions, and hence the available activities, are not really foreseeable very far in advance. Bureaucratically measuring and adjusting each stage of development may be entirely mistaken. At the very least, it is a much noisier and messier process than the tolerances we assume we can impose.
If we narrow our attention to the future of a profession, we probably do not have the information for making very fine changes in an informed way. We do not know the results of the ‘current process’, which is really an aggregation of very many individual ‘processes’, possibly aggregated in invalid ways. We have past results of past processes. We do not remember every bit of the ‘past process’, and could not duplicate it if we wanted to. We have information about professional disasters from previous training cohorts, and some guesses about future professional economic activity, so we can make some guesses about the value of staying very near the status quo.
Occupations pass down mental tools that help address problems that frequently occur. We can look at a profession’s toolset and compare it to the problems it is usually expected to solve. It is fair to judge the development of a profession by how well its tools matched its problems, and it is also fair to look at a particular problem and consider whether the tools are good or bad at solving it. Recently, I have enjoyed using the example of aerospace engineering. It is a relatively young discipline, and the early Cold War jet fighters were examples of designs produced with tools and experience that were not quite good enough for what they were intended to do. Good tools for supersonic flight dynamics were not initially ready, and once they were developed, more forgiving jet fighters were designed.

I can firmly believe that it may be correct for a profession to say internally, ‘we have no idea what we are doing’. Externally? On the one hand, if you have few professional disasters, talking about ‘how little we know’ is bad advertising, and may unnecessarily frighten the laymen. On the other hand, it is good for your customers to have realistic expectations. For example, of cases brought to trial, half the lawyer teams bringing them win and half lose. On the gripping hand, expectations for future professionals matter; if you rely on them learning ‘the real way things work’ through tribal knowledge and experience, some of them won’t, and that may be a problem.
Lastly, anti-intellectualism may be a correct answer. When you look at an occupation and consider how it can fail as an institution, it becomes obvious that there is an incorrect model of an expert’s function; it is related to ‘who guards the guards?’ The incorrect model holds that only specialists can judge experts in their specialty, and that such recognized experts are the guide to the worth of the work done. Experts and professionals, like many others, are doing specialized work in an economy. The value of the expert’s work is only created in conjunction with the economic activity of others, and those others may have a lot of information about the value of that work. Whether it is easier to work with the expert, or to work without the expert and eat the cost of not hiring one, is one of the key criteria. One can have a great deal of experience with a problem and the best mental tools for solving it, but if there is no economic activity that requires solving that problem, one is not an expert.

Why is that important? Well, we have professional organizations trying to manage certain occupations and to preserve the future of making money with those skills. Some of the people in these organizations may assume that everything is credentials, that the public has no choice but to trust them, and that they could never profoundly alienate their customers. People completely oriented toward credentials often overlap with wannabe intellectuals; anti-intellectualism is an antidote to overvaluing credentials, technocratic ambitions, and other such things.