Light/No Blogging this Week

I’m trying to get through the final bits of this new book (which should publish at the end of December, from what I understand), and the work required for a pair of PhD seminars (a bit over 50 pages of writing). I probably won’t post anything this week so I can get caught up a little, and I might not be posting heavily next week.

I’ll be at SDxE in Austin Tuesday and Wednesday, if anyone wants to find me there.

On the ‘net: Fragmentation and IPv6

Does this mean we ban all filtering of traffic on the public Internet, imposing the end-to-end rule in earnest, leaving all security to the end hosts? This does seem to be the flavor of the original IPv6 discussions around stateful packet filters. This does not, however, seem like the most realistic option available; the stronger defense is not a single perfect wall, but rather a series of less than perfect walls. Defense in depth will beat a single firewall every time. Another alternative is to accept another bit of reality we often forget in the network engineering world: abstractions leak. The end-to-end principle describes a perfectly abstracted system capable of carrying traffic from one host to another, and a perfectly abstracted set of hosts between which traffic is being carried.

The full post can be read over at the ECI blog.
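As a rough back-of-the-envelope illustration of the defense in depth point (my sketch, not part of the original post): if each imperfect filter independently catches some fraction of hostile traffic, stacking several of them drives the combined miss rate down quickly. The catch rates below are made-up numbers for illustration, and the independence assumption is doing real work here.

```python
# A minimal sketch of the defense-in-depth arithmetic.
# Assumption: each layer catches hostile traffic independently;
# the 0.9 catch rate is an illustrative number, not a measurement.

def combined_catch_rate(catch_rates):
    """Probability that at least one layer catches the traffic."""
    miss = 1.0
    for rate in catch_rates:
        miss *= 1.0 - rate  # traffic gets through only if every layer misses it
    return 1.0 - miss

print(combined_catch_rate([0.9]))            # one wall:    0.9
print(combined_catch_rate([0.9, 0.9, 0.9]))  # three walls: ~0.999
```

Three "less than perfect" walls, each missing 10% of what comes at them, together miss only one packet in a thousand.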

What Kind of Design?

In this short video I work through two kinds of design, or two different ways of designing a network. Which kind of designer are you? Do you see one as better than the other? Which would you prefer to do, and which are you doing right now?

Reaction: Networking Vendors are Only Good for the Free Lunch

I ran into an article over at the Register this week which painted the entire networking industry, from vendors to standards bodies, with a rather broad brush. While there are bits of truth in the piece, some balance seems to be in order. The article recaps a presentation by Peyton Koran at Electronic Arts (I suspect the Register spiced things up a little for effect); the line of argument seems to run something like this—

  • Vendors are only paying attention to larger customers, and/or a large group of customers asking for the same thing; if you are not in either group, then you get no service from any vendor
  • Vendors further bake secret sauce into their hardware, making it impossible to get what you want from your network without buying from them
  • Standards bodies are too slow, and hence useless
  • People are working around this, and getting to the inter-operable networks they really want, by moving to the cloud
  • There is another way: just treat your networking gear like servers, and write your own protocols—after all you probably already have programmers on staff who know how to do this

Let’s think about these a little more deeply.

This article was cross-posted to CircleID.

Vendors only pay attention to big customers and/or big markets. Ummm… Yes. I do not know of any company that does anything different here, including the Register itself. If you can find a company that actually seeks the smallest market, please tell me about them, so I can avoid their products, as they are very likely to go out of business in the near future. So this is true, but it is just a part of the real world.

Vendors bake secret sauce into their hardware to increase their profits. Well, again… Yes. And how is any game vendor any different, for instance? Or what about an online shop that sells content? Okay, next.

Standards bodies are too slow, and hence useless. Whenever I hear this complaint, I wonder if the person making it has ever actually built a real, live, running system, or a real, live, deployed standard that provides interoperability across a lot of different vendors, open source projects, etc. Yes, it often seems silly how long it takes for the IETF to ratify something as a standard. But have you ever considered how many times things are widely implemented and deployed before there is a standard? Have you ever really looked at the way standards bodies work, to understand that there are many different kinds of standards, each with a different meaning, and that not everything needs to be on the absolute tip-top rung of the standards ladder to be useful? Have you ever asked how long it takes to build anything large and complicated? I guess we could say the entire open source community is slow and useless, because it took many years for even the Linux operating system to be widely deployed and to solve a lot of problems.

Look, I know the IETF is slow. And I know the IETF has a lot more politics than it should. I live both of those things. But I also know the fastest answer is not always the right answer, and throwing away decades of experience in designing protocols that actually work is a pretty dumb idea—unless you really just want to reinvent the wheel every time you need to build a car.

In the next couple of sentences, we suddenly find that someone needs to call out the contradiction police, resplendent in their bright yellow suits and funny hats. Because now it seems people want inter-operable networks without standards bodies! Let me make a simple point here that many people just do not seem to realize:

You cannot have interoperability across multiple vendors and multiple open source projects without some forum where they can all discuss the best way to do something, and find enough common ground to make their various products inter-operate.

I hate to break the news to you, but that forum is called a standards body.

In the end, if you truly want every network to be a unique snowflake, groaning under the technical debt of poor decisions made by a bunch of folks who know how to code up a UI, but do not understand the intimate details of how a network actually converges in the real world, feel free to abandon the standards, and just throw the problem to any old group of coders you have handy.

Let me know how it turns out—but remember, I am not the one who has to answer the phone at 2AM when your network falls over, killing your entire business.

People are working around this by moving to the cloud. Yep—this is what every company I’ve talked to that is moving to the cloud has said to me: “We’re doing it to get to inter-operable networks.” ’nuff said.

There is a better way. On this I can agree entirely. But the better way is not to build each network into a unique snowflake, nor to abandon standards. There is a real path forward, but as always, it will not be the apparently easy path of getting mad at vendors and the IETF and making the bald statement that you can build it all on your own. The real path forward looks something like this—

  • Learn to be, and build, real engineers, rather than CLI slingers
  • Rationally assess the problems that need to be solved to build the network your organization needs
  • Choose a set of solutions that seem right to solve that set of problems (and I don’t mean appliances here!)
  • Look around for implementations of those things (open source and commercial), take in lessons others have learned, and refine the solution set; in other words, don’t abandon years of experience, but rather leverage it
  • If the solution set doesn’t exist, decide how you can break the solution set into reasonable pieces
  • Figure out which pieces you should outsource, which you should not, and what the API between the two looks like (see the sketch after this list)
  • Build it
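To make the outsourcing point a little more concrete, here is a hypothetical sketch of what such an API boundary might look like. Every name here (ConfigRenderer, DeploymentPipeline, and so on) is my own illustration under an assumed split—rendering configuration is bought, deployment policy is kept in house—not a real product or project.

```python
# Hypothetical sketch of an in-house/outsourced API boundary.
# All names are illustrative; none refer to a real product.
from abc import ABC, abstractmethod


class ConfigRenderer(ABC):
    """The piece you might outsource: turning abstract intent
    (e.g., 'these two sites need an L3 path') into vendor- or
    platform-specific configuration."""

    @abstractmethod
    def render(self, intent: dict) -> str:
        ...


class DeploymentPipeline:
    """The piece you keep in house: deciding what to deploy,
    when to deploy it, and how to verify it afterward."""

    def __init__(self, renderer: ConfigRenderer):
        self.renderer = renderer

    def deploy(self, intent: dict) -> str:
        config = self.renderer.render(intent)
        # push to devices, verify, roll back on failure ...
        return config
```

The point of drawing the boundary explicitly is that either side can be replaced—a commercial renderer, an open source one, or your own—without rewriting the other.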

Oh, and along the way—rather than complaining about standards bodies, get involved in them. There are far too few people who even make an attempt at changing what is there, and far too many who just whine about it. You don’t need to be involved in every IETF or W3C mailing list to be “involved”; you can pick a narrow realm where you can be useful and make a real difference. Far too many people see these bodies as large monoliths: either you must be involved in everything, or in nothing. This is simply not true.