Rethinking BGP on the DC Fabric

Everyone uses BGP for DC underlays now because … well, just because everyone does. After all, there’s an RFC explaining the idea, every tool in the world supports BGP for the underlay, and every vendor out there recommends some form of BGP in their design documents.

I’m going to swim against the current for the moment and spend a couple of weeks here discussing the case against BGP as a DC underlay protocol. I’m not the only one swimming against this particular current, of course—there are at least three proposals in the IETF (more, if you count things that will probably never be deployed) for link-state alternatives to BGP. If BGP is so ideal for DC fabric underlays, then why are so many smart people (at least they seem to be smart) working on finding another solution?

But before I get into my reasoning, it’s probably best to define a few things.

In a properly designed data center, there are at least three control planes. The first of these I’ll call the application overlay. This control plane generally runs host-to-host, providing routing between applications, containers, or virtual machines. Kubernetes networking would be an example of an application overlay control plane.

The second of these I’ll call the infrastructure overlay. This is generally going to be EVPN running over BGP, most likely with VXLAN encapsulation, and potentially with segment routing to support traffic steering. This control plane will typically run either on workload-supporting hosts, providing routing for the hypervisor or internal bridge, or on the Top of Rack (ToR) routers (switches, but who knows what “router” and “switch” even mean any longer?).
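For concreteness, an infrastructure overlay session on a ToR might look something like the minimal sketch below. This is purely illustrative and assumes FRRouting-style syntax; the AS numbers and the neighbor address are placeholders, not taken from any particular design document.

```
! Illustrative FRRouting-style sketch: a BGP EVPN (infrastructure overlay)
! session on a ToR, with VXLAN VNIs advertised into EVPN.
! AS numbers and the neighbor address are placeholders.
router bgp 65101
 neighbor 10.0.0.1 remote-as 65100
 !
 address-family l2vpn evpn
  neighbor 10.0.0.1 activate
  advertise-all-vni
 exit-address-family
```

The same kind of session could just as easily terminate on a workload-supporting host rather than on the ToR.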

Now notice that not all networks will have both application and infrastructure overlays—many data center fabrics will have one or the other, and that’s fine; whether one or both are needed is really a matter of local application and business requirements. I also expect both of these overlays to use either BGP or some form of controller-based control plane. BGP was originally designed to be an overlay control plane; it only makes sense to use it where an overlay is required.

I’ll call the third control plane the infrastructure underlay. This control plane provides reachability for the tunnel head- and tail-ends over plain IPv4 or IPv6 transport; some designs might inject MPLS here as well.
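To make the distinction concrete, a BGP-based infrastructure underlay on a single leaf often looks something like the sketch below: eBGP toward the spines, advertising little more than the loopback used as the tunnel endpoint. This is a minimal illustration assuming FRRouting-style syntax and BGP unnumbered peering; the interface names, addresses, and AS number are placeholders.

```
! Illustrative FRRouting-style sketch: a BGP infrastructure underlay on one
! leaf, with eBGP toward two spines and only the loopback (the tunnel
! endpoint) advertised. Names and numbers are placeholders.
router bgp 65101
 bgp router-id 192.0.2.11
 neighbor swp1 interface remote-as external
 neighbor swp2 interface remote-as external
 !
 address-family ipv4 unicast
  network 192.0.2.11/32
 exit-address-family
```

Whether BGP is really the best protocol for this particular job is the question the rest of this series takes up.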

My argument, over the next couple of weeks, is that BGP is not the best possible choice for the infrastructure underlay. What I’m not arguing is that every network running BGP as the infrastructure underlay needs to be ripped out and replaced, or that BGP is an awful, horrible, no-good choice. I’m arguing there are very good reasons not to use BGP for the infrastructure underlay—that we need to start reconsidering our monolithic assumption that BGP is the “only” or “best” choice.

I’m out of words for this week; I’ll begin the argument proper in my next post… stay tuned.