Docker Forks the Open Source Bubble

The magic of open source.

If I’ve heard this once, I’ve heard it a thousand times.

Put the software “out there,” and someone, somewhere, will add features because they need or want them, fix bugs because they’ve run into them, and generally just add value to the software you’ve created for free.

This is why, I’m told, open source is so much better than open standards—aren’t open standards just another name for a bogged-down, broken process where vendors try to run in fourteen different directions at once? Where customers really aren’t heard over the din of careers being made, and technical solutions far too often take a back seat to political considerations? Open source is going to ride in and save the day, I’m told, making all complex software free and better.

Unicorns. No, seriously. Or maybe you prefer frogs on stilts. It doesn’t work this way in the real world. If any project, whether it be an open source project or an open standard, gains enough community buy-in, it will succeed. If any project, whether it be an open source project or an open standard, doesn’t gain community buy-in, it is dead—no matter which company supports it, no matter which standards body writes it, etc.

To put it more starkly: community overrides open anything.

Building community requires having common goals. While it might be nice to say, “our common goal is to build something together,” this sort of goal almost never works for very long. Instead, there must be some purpose, or telos (as philosophers might say), involved. In the specific case of Docker, the community itself is splitting over how best to build for enterprise versus provider deployments (a divide I still consider artificial, by the way). What this comes down to is a matter of supporting different sorts of deployments.

Some might say, “let a thousand flowers bloom! more projects are better projects! this is the magic of open source!” And then they say, “can’t those IETF folks get their acts together, and write one standard to tunnel packets, rather than 14?” Pot, meet kettle. The problem is the same in both cases—a smaller number of hands working on any particular project, some features going here and not there (features that might really be desirable in both places, yet are too hard to build in both), and confusion for those who are trying to build something new out of the many scattered pieces lying all over the floor.

There are two ways to solve this sort of problem. The first is to have multiple vertically integrated systems/products/protocols. The second is to have multiple components that can be fit together to make any solution. Both of these approaches will work to one degree or another, and both fail in their extreme versions—regardless of whether the product is commercially supported, the standard is open, or the project is open source.

There are limits on all of these things—vendor-driven, open source, and open standard. The sooner we stop looking for unicorns in any one place, and start being engineers who know the tradeoffs between different options, and the limits of any community (commercial or not), the sooner we can stop arguing over which is better, and start working on actually solving problems.

For Docker, specifically? I don’t know the answer, but in the long run (it seems to me) some sort of fork is inevitable. The real question is—will the fork be done right, or wrong? Only time will tell.