With apologies to Ken Iverson.
Architects draw detailed plans before a brick is laid or a nail is hammered. Programmers and software engineers don’t. Can this be why houses seldom collapse and programs often crash?
Blueprints help architects ensure that what they are planning to build will work. “Working” means more than not collapsing; it means serving the required purpose. Architects and their clients use blueprints to understand what they are going to build before they start building it.
But few programmers write even a rough sketch of what their programs will do before they start coding.

Leslie Lamport, Why We Should Build Software Like We Build Houses
“My instinct is to go right to the board. I’m very graphic oriented. I can’t talk more than ten minutes without I [sic] start drawing pictures when we’re talking about the things I do. Even if I’m talking sports, I invariably start diagramming what’s going on. I feel comfortable with it or find it very effective.”
This designer, like the newly promoted engineer at Selco who fought to get her drafting board back, is pointing out his dependence on the visual process of drawing both to communicate and to think out the initial design. He also states that the visual and manual thought process of drawing precedes the formulation of the written specifications for the project. Like the Selco designers and those at other sites, he emphasized the importance of drawing processes to work out ideas. [emphasis added]

Kathryn Henderson, On Line and On Paper: Visual Representations, Visual Culture, and Computer Graphics in Design Engineering
The quote above from Kathryn Henderson illustrates how mechanical engineers use the act of drawing to help them work on a design problem. By generating sketches and drawings, they develop a better understanding of the problem they are trying to solve. They use drawing as a tool to help them think, to work through the problem.
As software engineers, we don’t work in a visual medium in the way that mechanical engineers do. And yet, we also use tools to help us think through the problem. It just so happens that the tool we use is code. I can’t speak for other developers, but I certainly use the process of writing code to develop a deeper understanding of the problem I’m trying to solve. As I solve parts of the problem with code, my mental model of the problem space and solution space develops throughout the process.
I think we use coding this way (I certainly do) because it feels like the fastest way to evolve this knowledge. If I had some equivalent mechanism for sketching that was faster than coding for developing my understanding, I’d use it. But I know of no mechanism faster than coding that will let me develop my understanding of the solution I’m working on. It just so happens that the quickest medium, code, is the same medium as the artifact that will ultimately end up in production. A mechanical engineer can never ship their sketches, but we can ship our code.
And this is a point that I think Leslie Lamport misses. I’m personally familiar with a number of different techniques for modeling in software, including TLA+, Alloy, statecharts, and decision tables. I’ve used them all; they are excellent tools for reasoning about the complexity of a system. But none of these tools really fulfills the role that sketching does for mechanical engineers (although Alloy’s fast feedback for incremental model building is a nice step in this direction).
TLA+ in particular is a wonderful tool. I’ve used it for things like understanding the linearizability paper, beating the CAP theorem, finding cycles in linked lists, solving the river crossing problem, and proving leftpad. But if you’re looking for an analogy in mechanical engineering, TLA+ is much closer to finite element analysis than it is to sketches.
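The linked-list cycle problem is also a good illustration of the post’s larger point: it is exactly the kind of thing many of us would sketch directly in ordinary code before (or instead of) writing a formal spec. A minimal sketch in Python, using Floyd’s tortoise-and-hare algorithm (the `Node` class here is hypothetical scaffolding for the example, not something from the post):

```python
class Node:
    """A hypothetical singly linked list node, just for this sketch."""
    def __init__(self, value):
        self.value = value
        self.next = None

def has_cycle(head):
    """Floyd's tortoise-and-hare: the fast pointer advances two steps
    per iteration, the slow pointer one; they meet iff there's a cycle."""
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False
```

A sketch like this takes a few minutes to write and run, which is the kind of fast feedback loop the post argues code gives us; a TLA+ model of the same problem buys something different, a machine-checked argument about all reachable states.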
Developers jump to coding not because they are sloppy, but because they have found it to be the most effective tool for sketching, for thinking about the problem and getting quick feedback as they construct their solution. And constructing a representation to develop a better understanding, using the best tools available to get quick feedback, is exactly what engineers do.
3 thoughts on “Coding as a tool of thought”
The problem is at both ends, I find.
#1. Real-world architecture does fail: bridges do fall down, buildings do collapse, and exceptional circumstances do bring great towers and skyscrapers to the ground.
#2. There needs to be some sort of certification, but not a stifling one. Over time, software has moved from automating a 20-minute business process to driving cars. What used to cost 20 minutes may now cost a life.
It’s much harder to use the prototype bridge in production than it is for software, and I find it’s this easy line to cross in software that accounts for most technical issues.
One of Petroski’s books makes an argument about bridge designs. When a new, more ambitious bridge built according to an established design fails, the flaw was present in previous bridges of that design, simply not exposed by their lighter loading. The drive for longer spans eventually exposes the latent flaws in a bridge design scheme. He also argued that failures in certain types of bridge design were periodic, about every twenty years, due to turnover among civil engineers: subtle tribal knowledge from the last failure doesn’t get passed down, and is relearned with the next failure.
It looks to me like we have at least three modes of safety issue, or potential issue, showing up with the automation of heavy machinery.
We have software companies, used to doing many different software projects and maybe not deeply familiar with the pitfalls of any one domain, pushing into fields with a very different history of engineering. With them they may bring funding sources that are not familiar enough with the new field to have a deep understanding of which risks one should be very wary of.
We have regulators, sold on the improvement, requiring existing companies to implement automation, perhaps so rapidly that the companies bake in problems like architecture that is fundamentally flawed from a security perspective, or from a perspective of accounting for accidents. (If you have an automated vehicle, you need a black box system for the accident investigations that cannot be compromised with a wireless software update.)
We also have some vehicle companies that in theory should know better, that are producing automation that is not good enough. Trent Telenko alleges that certain recent aviation incidents may have been a result of military contracting suspending the requirement for the ‘Systems Engineering Management’ specification. Modern contractor engineers are apparently not getting that experience on the defense side of the business, and hence not bringing it with them to civilian designs. Others, more frequently, allege that the merger with McDonnell Douglas killed the old effective management culture, and now it operates according to McDonnell’s culture.
Airplane crashes and bridge collapses are easy to explain to investors, easy to show them why such failures are worth the cost of avoiding, or the delays. Flaws in software, you have to actively deduce or test for. Mechanical flaws? Well, sometimes flaws present during construction are exposed by loading very early in the life of the structure. Others? Every piece of metal larger than tiny has flaws at the time of manufacture, and under periodic loading they grow, first slowly, then quickly. You want to inspect often enough to catch them when they are large enough to find, but while growth is still slow. But inspection is expensive, and structural and maintenance engineers are working hard to find a cheaper way that is still safe enough. Some bridge failures are failed attempts at that, or cases of discovering a flaw and lacking the courage or leverage to stop use before lives are lost. Of course, with some newer materials, we have less information about mechanical failure modes, and more chances for genuine surprises of the unpleasant variety.
Still, compared to a mechanical system, it is much easier to build a very flawed software system and fail to expose the flaws in operational tests. To my knowledge this is partly because software domain knowledge is so specialized that it is harder to transfer competence than it is for a bridge designer to learn dam design.
Another apparent issue: in software, engineering, the skilled trades, and unskilled labor all seem to be lumped under the umbrella of “programming.” Learning, and verification of learning, may be more of an issue there. You can definitely find mechanical engineers who are wildly overconfident and do not respect the challenges and expertise of welders and machinists. But your welders may understand that they are not machinists, your mechanical managers may understand not to substitute skillsets, and your investors may understand that skimping on certified welders or machinists has costs. It may even be written into your contracts. That seems to be a lot less true in software.
I, personally, have been wildly overconfident in my training in the past. I have learned otherwise, at some personal cost, fortunately with no fatalities or outstanding workplace risk exposure. I am trying to learn programming, partly through self-study. I have been burned once, and I certainly hope I’m being cautious enough now. I definitely know I do not have all of the programming skills that I would need. I’m guessing that my opinion about the level of caution expressed in “how to learn programming” documentation may be a bit informed. Okay, I may personally be erring on the wildly overcautious side, but I trust that instinct.
Might be because I am a terrible programmer, but I do at times find sketching productive before trying to make a program.