
Reaction: Standardization versus Innovation

Should the Docker container image format be completely standardized? Or should Docker not be held back from evolving the format ahead of the open specification? This was the topic of a heated Twitter tussle last week between Google evangelist Kelsey Hightower and the creator of Docker itself, Solomon Hykes. —New Stack

What is at stake here is standardization versus innovation. Should Docker standardize its container technology or not?

On the one side is the belief that standardizing squashes innovation. Once you’ve standardized something, and other people start building on it, you can’t change the standard without a lot of agreement and effort—after all, other people are now depending on your product remaining the same across many cycles of development. This certainly slows innovation: standardizing exposes your ideas to public view before you can implement them, and it slows the pace at which new ideas can be deployed in the real world.

On the other side is the belief that standardizing is necessary for the market to mature, and for a healthy ecosystem to develop that’s better for the entire community. How can customers and other vendors build products around a particular product if the product is always changing?

It seems like this is a straight-up tradeoff between innovation and standardization—standardization hurts innovation, but helps usefulness. Interestingly, though, both of these arguments can be turned around.

On the one side is the charge that constant change stifles innovation. An ecosystem of interacting players with a stable base is more likely to produce innovative ideas than a single company—no matter how smart, and no matter how good—working alone. Each company, including the initial innovator, is better off with standards in place, as they provide a base on which new things can be built, and new ideas are often aired and shared, providing fodder for ongoing idea creation.

On the other side is the belief that standardization will stifle the development of the product, cutting it far short of potential use cases and higher maturity levels. Standards introduce community process, and community process just means squabbling over things, rather than working on things. All the wasted energy diverted into “who gets to write what, who gets to lead what, and who gets to approve what,” could be put to better use building things.

This raises an interesting question—which one of these two pairs is correct? Which one should we listen to, and why? To make the problem worse, the standardization issue doesn’t just apply to Docker, it applies to just about every technology in the world, and particularly so in networking and computing.

How do we solve this? Are we doomed to forever choose between innovation with no standards, or stability with standards? Or is it the other way around—standards with innovation, or community- and ego-laden processes?

Let me throw an idea out for your consideration—this is a false dichotomy. Every technology has faced this problem throughout history. Should we standardize on tire sizes, or are we stopping innovation?

Which brings us to the solution that’s always been used in technology: standardizing interfaces, rather than standardizing things. To return to the tire example for a moment, we don’t have standard tires, we have standard tire sizes. Manufacturers are free to innovate within the parameters of the standard tire sizes, and consumers are free to buy whatever tire fits on their car.

The IETF has, at least until recently, taken the same tack. The IETF standardizes how a protocol acts on the wire—what inputs should produce which outputs—and leaves the rest to implementors. This has left the field open for new optimizations in implementations, while (theoretically) making certain that implementations from different vendors will interoperate.

This seems to be changing, though, in recent years. The IETF is now bogged down in thousands of pages of standards, and protocol operation specifications that detail every last bit of each protocol. Part of this is because we’re trying to write standards that can’t be misread now, or because we want to make them wordy enough so they can mean anything (so every vendor can claim to follow the standard). These are social issues we aren’t going to solve easily—there is constant pressure to make standards more “useful” by overspecifying, and vendors’ lives easier (and competition less fierce) by underspecifying. There is one more thing in play here, though, that we don’t often think about.

We’ve decided we’re done with teacups, and it’s time to move on to oceans. No small lakes will do; we must solve the biggest problems the engineering world has in one fell swoop. The latest standard must not only solve stretched layer 2 over layer 3, for instance (an idea of questionable merit to begin with), it must do so in a way that solves every potential corner case—forever. We shall, once our new standard is done, never need another layer 2 over layer 3 specification again, in the history of the entire universe going forward. On top of this, we’ve no time to allow such a specification to develop—we must solve all these problems today.

In short, we’ve stopped building large monolithic systems. Instead, we’re building large monolithic standards. As if this is actually any healthier.

Maybe it’s time to return to the “old days,” and start solving any problem by breaking the problem itself into the smallest logical units. Then, instead of standardizing the solution to each of these problems, we can standardize the interfaces between the bits, and let the solutions develop over time. If we’re really smart, we’d make the interfaces between the bits of the problem extensible—maybe we could use something like TLVs?—so solving previously unconsidered problems doesn’t mean writing things from the ground up again. Maybe we could even apply these sorts of principles to network design, as well.
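To make the TLV idea concrete, here is a minimal sketch of type-length-value encoding in Python. The field widths (1-byte type, 2-byte length) are illustrative, not drawn from any particular protocol; the point is that a parser can walk past types it doesn’t understand, so new types can be added later without breaking old implementations.

```python
import struct

def encode_tlv(tlv_type, value):
    # Type (1 byte), Length (2 bytes, network byte order), then the Value itself.
    return struct.pack("!BH", tlv_type, len(value)) + value

def decode_tlvs(data):
    """Yield (type, value) pairs from a buffer of concatenated TLVs.

    Because every element carries its own length, a parser that only
    recognizes some types can still skip cleanly over the rest.
    """
    offset = 0
    while offset < len(data):
        tlv_type, length = struct.unpack_from("!BH", data, offset)
        offset += 3  # past the fixed type + length header
        yield tlv_type, data[offset:offset + length]
        offset += length

# A message mixing a known type (1) with a hypothetical future extension (99):
message = encode_tlv(1, b"hello") + encode_tlv(99, b"future extension")
for t, v in decode_tlvs(message):
    print(t, v)
```

An old parser simply ignores type 99 rather than failing on it—exactly the kind of extensible interface that lets solutions evolve behind a stable standard.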

Yeah, sure—there’s an idea.

But it’s an idea that involves letting go of our egos and false dichotomies, and starting to think like engineers. When should we start?
