What do you work on, anyway?

I often struggle to describe the project I work on at my day job, even though it’s an open-source project with its own domain name: managed.delivery. I’ll often mumble something like, “it’s a declarative deployment system”, but that explanation doesn’t yield much insight.

I’m going to use Kubernetes as an analogy to explain my understanding of Managed Delivery. This is dangerous, because I’m not a Kubernetes user(!). But if I didn’t want to live dangerously, I wouldn’t blog.

With Kubernetes, you describe the desired state of your resources declaratively, and then the system takes action to bring the current state of the system to the desired state. In particular, when you use Kubernetes to launch a pod of containers, you need to specify the container image name and version to be deployed as part of the desired state.

When a developer pushes new code out, they need to change the desired state of a resource, specifically, the container image version. This means that a deployment system needs some mechanism for changing the desired state.
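
To make the analogy concrete, here is a minimal sketch of that reconciliation idea in Python. Everything in it is made up for illustration (the field names, the image tags, the reconcile function); real Kubernetes objects and controllers look different, but the shape is the same: declare the desired state, and let the system converge the current state toward it.

```python
# Illustrative sketch only: a toy reconciler in the spirit of Kubernetes.
# The fields and image tags are hypothetical, not real Kubernetes API objects.

# Desired state, as declared by the service owner. Note that the container
# image name and version are part of the desired state.
desired_state = {
    "name": "my-service",
    "replicas": 3,
    "image": "example.com/my-service:v23",
}

# Current state, as observed in the running system.
current_state = {
    "name": "my-service",
    "replicas": 3,
    "image": "example.com/my-service:v22",
}

def reconcile(desired, current):
    """Take whatever actions are needed to bring current in line with desired."""
    if current["image"] != desired["image"]:
        print(f"rolling out {desired['image']} to replace {current['image']}")
        current["image"] = desired["image"]
    if current["replicas"] != desired["replicas"]:
        print(f"scaling from {current['replicas']} to {desired['replicas']} replicas")
        current["replicas"] = desired["replicas"]

# Deploying new code means changing the desired state (the image version)...
desired_state["image"] = "example.com/my-service:v24"
# ...and the system converges the current state toward it.
reconcile(desired_state, current_state)
```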

A common pattern we see is that service owners have a notion of environments (e.g., test, staging, prod). For example, they might deploy the code to test and run some automated tests against it; if it looks good, they promote it to staging and maybe run some manual tests there; if they’re happy, they promote it out to prod.

[Diagram: Example of deployment environments]

Imagine test, staging, and prod are all running version v23 of the code. After version v24 is cut, it will be deployed first to test, then to staging, then to prod. That’s how each version propagates through these environments, assuming it meets the promotion constraints for each environment (e.g., tests pass, a human signs off).

You can think of this kind of promoting-code-versions-through-environments as a pattern for describing how the desired state of each environment changes over time. And you can describe this pattern declaratively, rather than imperatively as you would with traditional pipelines.
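
To make that contrast concrete, here is a hypothetical sketch of what a declarative description of environments might look like, written as plain Python data. (The real Managed Delivery configuration format is different; the point is only the shape of the idea: environments and constraints as data, rather than pipeline steps as instructions.)

```python
# Illustrative only: environments and their promotion constraints expressed as
# data. The environment names and constraint names are made up for this example.
environments = [
    {"name": "test",    "constraints": []},
    {"name": "staging", "constraints": ["deployed-to:test", "e2e-tests-pass"]},
    {"name": "prod",    "constraints": ["deployed-to:staging", "manual-approval"]},
]
```

An imperative pipeline would instead spell out the sequence of actions to run (deploy to test, run the tests, deploy to staging, wait for approval, deploy to prod). The declarative form says where each version is allowed to go and under what conditions, and leaves it to the system to figure out which actions to take, and when.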

And that’s what Managed Delivery is. It’s a way of declaratively describing how the desired state of the resources should evolve over time. To use a calculus analogy, you can think of Managed Delivery as representing the time-derivative of the desired state function.

If you think of Kubernetes as a system for specifying desired state, Managed Delivery is a system for specifying how desired state evolves over time.

With Managed Delivery, you can express concepts like:

  • for a code version to be promoted to the staging environment, it must
    • be successfully deployed to the test environment
    • pass a suite of end-to-end automated tests specified by the app owner

and then Managed Delivery uses these environment promotion specifications to shepherd the code through the environments.
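
As a rough sketch of that shepherding idea (again hypothetical; this is not the real implementation or configuration schema), the promotion logic might look something like this, continuing the environment description above:

```python
# Illustrative sketch of "shepherding" a version through environments: promote
# a version into each environment once that environment's constraints are
# satisfied. Constraint names and checks are hypothetical stand-ins.

def constraints_satisfied(version, env, state, checks):
    """True if every constraint declared for this environment holds for this version."""
    return all(checks[name](version, state) for name in env["constraints"])

def shepherd(version, environments, state, checks):
    """Walk the environments in order, promoting the version where allowed."""
    for env in environments:
        if constraints_satisfied(version, env, state, checks):
            state[env["name"]] = version  # update that environment's desired state
            print(f"promoted {version} to {env['name']}")
        else:
            print(f"{version} is waiting on constraints for {env['name']}")
            break  # don't skip ahead to later environments

environments = [
    {"name": "test",    "constraints": []},
    {"name": "staging", "constraints": ["deployed-to:test", "e2e-tests-pass"]},
    {"name": "prod",    "constraints": ["deployed-to:staging", "manual-approval"]},
]

# Example constraint checks (stand-ins for real mechanisms):
checks = {
    "deployed-to:test":    lambda version, state: state.get("test") == version,
    "deployed-to:staging": lambda version, state: state.get("staging") == version,
    "e2e-tests-pass":      lambda version, state: True,   # pretend the test suite passed
    "manual-approval":     lambda version, state: False,  # pretend no human has signed off yet
}

state = {}  # desired version currently assigned to each environment
shepherd("v24", environments, state, checks)
# v24 is promoted to test, then to staging; prod waits on manual approval.
```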

And that’s it. Managed Delivery is a system that lets users describe how the desired state changes over time, by letting them specify environments and the rules for promoting change from one environment to the next.

Chasing down the blipperdoodles

To a first approximation, there are two classes of automated alerts:

  1. A human needs to look at this as soon as possible (page the on-call!)
  2. A human should eventually investigate, but it isn’t urgent (email-only alert)

This post is about the second category. These are events like the error spike that happened at 2am and can wait until business hours to look into.

When I was on the CORE[1] team, one of the responsibilities of team members was to investigate these non-urgent alert emails. The team colorfully referred to them as blipperdoodles[2], presumably because they look like blips on the dashboard.

I didn’t enjoy this part of the work. Blipperdoodles can be a pain to track down, are often not actionable (e.g., a transient networking issue), and, in the tougher cases, are downright impossible to make sense of. This means that the work feels largely unsatisfying. As a software engineer, I’ve felt a powerful instinct to dismiss transient errors, often with a joke about cosmic rays.

But I’ve really come around on the value of chasing down blipperdoodles. Looking back, they gave me an opportunity to practice doing diagnostic work, in a low-stakes environment. There’s little pressure on you when you’re doing this work, and if something more urgent comes up, the norms of the team allow you to abandon your investigation. After all, it’s just a blipperdoodle.

Blipperdoodles also tend to be a good mix of simple and difficult. Some of them are common enough that experienced engineers can diagnose them by the shape of the graphs. Others are so hard that an engineer has to admit defeat once they reach their self-imposed timebox for the investigation. Most are in between.

Chasing blipperdoodles is a form of operational training. And while it may be frustrating to spend your time tracking down anomalies, you’ll appreciate the skills you’ve developed when the heat is on, which is what happens when everything is on fire.

[1] CORE stands for Critical Operations & Reliability Engineering. They’re the centralized incident management team at Netflix.

[2] I believe Brian Trump coined the term.

Operations engineering

Operations Engineering is the application of software engineering practices and principles to achieve and sustain operational excellence.

The quote above is from a re:Invent talk by Josh Evans of Netflix. The phrasing appeals to me because it explicitly links operations and software engineering. I also recommend the talk if you’re interested in the topic of operations engineering at Netflix. (For context: Josh is my manager’s manager’s manager.)