Reaction: OpenFlow and Software-Based Switching

Over at The Networking Nerd, Tom has an interesting post up about OpenFlow. This pair of sentences, in particular, caught my eye:

The side effect of OpenFlow is that it proved that networking could be done in software just as easily as it could be done in hardware. Things that we thought we historically needed ASICs and FPGAs to do could be done by a software construct.

I don’t think this is quite right, actually. When I first started working in network engineering (wheels were square then, and dirt hadn’t yet been invented, but we did have solar flares that caused bit flips in memory), switching was done entirely in software. The Cisco 7200 was, I think, the ultimate software-based switching box, although the little 2RU 4500 (get your head out of the modern router line, think really old stuff here!) had a really fast processor, and hence could process packets really quickly. These were our two favorite lab boxes, in fact. But in the early 1990s, the SSE was introduced, soldered onto an SSP blade that slid into a 7500 chassis.

The rest, as they say, is history. The networking world moved to chips designed to switch packets (whether custom ASICs or merchant silicon) and hasn’t even begun to think about looking back. I think Tom’s two sentences mash several different things into one large ball of thread that needs to be disentangled just a little.

A good place to start is here: most commercially available packet switching silicon has some form of OpenFlow support. The specific example Tom gives, implementing a filter across multiple switches in a network, relates less to software-based switching than to pulling policy out of the distributed control plane and into a controller. So if OpenFlow hasn’t really changed the nature of switching, what has it changed?
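
To make that point a little more concrete, here is a minimal sketch of what “one filter, many switches” looks like when the policy lives in a controller rather than in per-box configuration. The `OpenFlowClient` class and its methods are hypothetical placeholders, not any real controller’s API; a real controller would encode the same intent as OpenFlow flow-mod messages sent over its southbound channel.

```python
# Hypothetical sketch: push the same drop filter to every switch from a
# central controller, rather than configuring an ACL box by box.
# OpenFlowClient and its methods are placeholders, not a real API.

class OpenFlowClient:
    """Stand-in for a controller's southbound OpenFlow channel."""

    def __init__(self, controller_url):
        self.controller_url = controller_url

    def switches(self):
        # A real controller would return the datapath IDs of every
        # switch currently connected to it.
        return ["dpid:0000000000000001", "dpid:0000000000000002"]

    def add_flow(self, dpid, match, actions, priority=100):
        # A real implementation would encode an OFPT_FLOW_MOD message
        # (match fields, instructions, priority) and send it to the switch.
        print(f"flow_mod -> {dpid}: match={match} actions={actions} priority={priority}")


def block_host_everywhere(client, host_ip):
    """One policy, many devices: install the same filter on every switch."""
    match = {"eth_type": 0x0800, "ipv4_src": host_ip}  # IPv4 traffic from host_ip
    for dpid in client.switches():
        # An empty action list in OpenFlow means matching packets are dropped.
        client.add_flow(dpid, match, actions=[], priority=200)


if __name__ == "__main__":
    block_host_everywhere(OpenFlowClient("http://controller.example:8080"), "192.0.2.10")
```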

I think what Tom is really getting at here is a shift in our perception of the nature of a router. In times past, we treated the entire router as something like an appliance. There was an operating system, which contained an implementation of the distributed control plane, and there was the hardware, which contained the switching silicon, LEDs, fans, connectors, and other stuff. Today, we think of the router as at least two components, and we are moving toward more.

There is the control plane, which is essentially an application riding on top of the operating system. There is the hardware “carrier”: the sheet metal, LEDs, fans, and so on. There is the switching hardware. And there is an interface between the switching hardware and the control plane.
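
As a rough illustration of that separation, here is a short sketch in Python; the class and method names are hypothetical, not any vendor’s SDK. The control plane is just an application that computes routes and hands the resulting FIB entries to whatever forwarding hardware sits underneath it, through a narrow interface, and that interface is exactly the seam the rest of this post is about.

```python
# Hypothetical sketch of the router-as-parts model: a control-plane
# application programming forwarding hardware through a narrow interface.
# None of these names correspond to a real SDK.

from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class FibEntry:
    prefix: str       # e.g. "10.1.0.0/16"
    next_hop: str     # e.g. "192.0.2.1"
    out_port: int


class ForwardingHardware(ABC):
    """The interface between the control plane and the switching hardware."""

    @abstractmethod
    def program_route(self, entry: FibEntry) -> None: ...

    @abstractmethod
    def remove_route(self, prefix: str) -> None: ...


class MerchantSiliconDriver(ForwardingHardware):
    """Stand-in for an ASIC SDK; a pure software forwarder could implement
    the same interface, which is the point of the separation."""

    def program_route(self, entry: FibEntry) -> None:
        print(f"asic: install {entry.prefix} via {entry.next_hop} on port {entry.out_port}")

    def remove_route(self, prefix: str) -> None:
        print(f"asic: remove {prefix}")


class ControlPlane:
    """An application riding on the operating system: it runs the routing
    protocols and pushes the resulting best paths down to the hardware."""

    def __init__(self, hardware: ForwardingHardware):
        self.hardware = hardware

    def install_best_paths(self, best_paths: list[FibEntry]) -> None:
        for entry in best_paths:
            self.hardware.program_route(entry)


if __name__ == "__main__":
    cp = ControlPlane(MerchantSiliconDriver())
    cp.install_best_paths([FibEntry("10.1.0.0/16", "192.0.2.1", 7)])
```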

OpenFlow, I think, primarily made us start thinking about the relationship between these parts; it made us stop thinking of the router as an appliance and start thinking of it as a collection of parts. This led to the disaggregation movement we have today and, tangentially, to the white box movement.

So, in a sense, Tom is correct: what we once thought had to be done by a monolithic “thing,” or “appliance,” we now think of as being done by a collection of parts we can wrap together ourselves. This isn’t a matter of “what used to be done in hardware is now done in software,” but it is a rethinking of the relationship between hardware and software.

Whether OpenFlow ultimately succeeds or fails in its original form, its later hyped form, or some other form, we should all at least look back on it as helping to bring about a major shift in the way we see networks. In this way, at least, OpenFlow has not failed.

2 Comments

  1. fstevenchalmers on 6 December 2016 at 5:19 pm

    Three observations:

    First, discussion of “software based routing” isn’t complete without at least a mention of DPDK and the use of x86 (albeit as a SIMD machine with the amazing tuning Intel has done over the last decade). This opens the NFV topic, within which basic routing is certainly a viable “function”.

    Second, while x86 SIMD programming is certainly not for novices, x86 programming is far more accessible to far more developers than calling the SDK APIs for a merchant ASIC. So particularly when the function to be performed is far more complex (or simply uses much larger tables) than a merchant ASIC can handle, it makes sense to develop that function as software on an NPU or x86 (or compiled from P4 for one of the new ASICs, but that’s a discussion for another day) rather than spend >US$10M developing a low-volume custom ASIC. This is, of course, a choice not to be made lightly, given that a Broadcom Tomahawk can do basic L3 forwarding on about 3 billion packets per second (3 * 10^9) while a Xeon running DPDK on all cores would have to be well tuned and doing only the most basic forwarding to exceed 100 million (1 * 10^8), for about the same cost (see the rough cycle-budget sketch after the comments).

    Third, traditional networking has established a partitioning of work between what the host is responsible for (network stack, driver, NIC) and what the switches/routers are responsible for (collective control plane, RIB, FIB, forwarding itself). Looking at the data center in isolation, there is considerable inefficiency here: the host application has a clear understanding of the traffic and QoS expectations when it opens a socket, and the network stack has a clear picture of end-to-end connections, while the L2/L3 switches/spines/routers are largely either reverse engineering this information from the flows or being explicitly configured for QoS and ACLs. Paradigms like Calico, as well as using application-aware source-based routing to override ECMP hashes, shift work which was strictly in the switch/router domain back to the server stack.

    OpenFlow is quite useful. The assumptions underlying the basic approach to OpenFlow do not necessarily hold in these new software defined paradigms.

    @FStevenChalmers



  2. Russ on 11 December 2016 at 9:44 pm

    Steve, thanks for stopping by and commenting. I think DPDK is interesting, but I would also point out that using DPDK on an Intel processor to switch packets is still, after all, “switching packets in hardware.” It will be interesting to watch what happens with large-die processors that try to pull all of this functionality under one roof over the coming years…

    🙂

    Russ
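
Steve’s throughput comparison in the first comment can be put in perspective with a quick back-of-the-envelope cycle budget. Only the packet rates come from his comment; the core count and clock rate below are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope cycle budget for software forwarding, using the packet
# rates from the comment above. Core count and clock rate are assumptions.

TOMAHAWK_PPS = 3e9        # ~3 * 10^9 packets/sec of basic L3 forwarding (from the comment)
XEON_TARGET_PPS = 1e8     # ~1 * 10^8 packets/sec target for a well-tuned DPDK box
CORES = 20                # assumed core count
CLOCK_HZ = 2.5e9          # assumed 2.5 GHz per core

total_cycles_per_sec = CORES * CLOCK_HZ
cycles_per_packet = total_cycles_per_sec / XEON_TARGET_PPS

print(f"ASIC vs CPU throughput ratio: {TOMAHAWK_PPS / XEON_TARGET_PPS:.0f}x")
print(f"cycle budget per packet at {XEON_TARGET_PPS:.0e} pps: {cycles_per_packet:.0f} cycles")
# At roughly 500 cycles per packet, a handful of cache misses consumes the
# whole budget, which is why the software forwarding path has to stay so simple.
```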