Nate Sales

The software defined data plane and my ideal edge routing platform

July 8, 2020

My requirements in a router are a bit unusual. I run a tiny network compared to the incumbent “cloud” service providers, but at the same time I run multihomed anycast and unicast services and need a routing platform that can handle that kind of setup. I also strive to maintain a flexible and expandable network topology that lets me make the most efficient routing decisions, with the ability to quickly reroute traffic if needed.

A few years ago, microservices and containerization became very popular, allowing developers to segment their applications along logical boundaries, whether for security, simplicity, or scalability. Network architecture can follow a similar model.

In the traditional sense, we have the separation of the control plane and data plane. It’s common to see vendors like Arista and Juniper use dedicated switching ASICs for the data plane, paired with a normal CPU running Linux for the control plane. There are some exceptions, however, with certain models supporting hardware offloading for protocols more traditionally thought of as living in the control plane.

This is cool and all, and I always enjoy reading about the huge packet throughput and bandwidth these boxes have to offer. But there really aren’t enough options out there for a router that can handle millions of routes at line rate at 10G or higher. Admittedly this is a niche market, one popular use case being route reflectors that aggregate many transit providers and customers.

For me, this has more to do with flexibility than anything else. I tinker, upgrade, and rebuild my network frequently, so I like to have a routing platform that supports that kind of workflow. At the same time, I can’t have the network go down while I’m working on it, since I host things like DNS and email that would cause issues if the servers were down for even a short time.

All in all, my list of router requirements is as follows:

  • Ability to handle >3M total routes in the RIB
  • Line rate and non-blocking
  • 10GbE with the possibility of 40GbE down the line
  • Minimal annoyances; things should work the way one would hope/expect (not being picky about optic EEPROMs, supporting both live and file-based configuration, filters failing closed instead of causing a route leak)
  • Easy and integrated automation (running commands with an SSH client library and parsing the output with regex doesn’t count!)
  • No vendor lock-in
  • Reasonable pricing
  • Open source would be nice, but not strictly required

I’m a developer; I live in software. So logically, the first thing I did was look into software routing solutions. I already use Debian and BIRD for a few BGP edge routers, and it works really well. I have a router with over 2 million routes in the kernel tables that, even while doing RPKI validation, can route 10G in real-world speedtests with reasonable reconvergence times. Great, right? But issues start to appear once you add certain rulesets. BIRD drops RPKI-invalid routes before they ever reach the kernel forwarding table, so even with millions of routes being validated, the cost is incurred per route imported, not per packet. ACLs, on the other hand, have to be evaluated against every incoming packet, which can put enormous strain on the CPU depending on traffic volume. At that point the control plane and data plane have essentially lost any distinction, because with software routing they are combined into one slow process.
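To make the per-route cost concrete, here’s a rough sketch of what that looks like in a BIRD 2 configuration: an RPKI-to-Router (RTR) session feeding a ROA table, plus a filter that rejects invalid routes at import time. The validator hostname and port here are placeholders, not my actual setup:

    # ROA table populated by the RTR session below
    roa4 table r4;

    protocol rpki validator {
        roa4 { table r4; };
        remote "rtr.example.net" port 8282;  # placeholder RTR server
        retry keep 90;
    }

    # Reject RPKI-invalid routes; accept everything else
    filter rpki_check {
        if (roa_check(r4, net, bgp_path.last) = ROA_INVALID) then reject;
        accept;
    }

Attach that as an import filter on a BGP session and an invalid route gets rejected exactly once, at import, and never occupies a kernel forwarding entry. An ACL gets no such luxury; it runs for every packet on the wire.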

One solution to this is XDP, the eXpress Data Path. XDP is a hook for the extended Berkeley Packet Filter (eBPF) which allows expressive packet processing code to be loaded into the driver, the kernel, or, most importantly, the NIC itself. eBPF XDP programs are written in C, and as such offer the flexibility of a full programming language for filter expression. XDP has had some interesting overlay projects, including libkefir and bpfilter (which has ceased development), but it’s also entirely possible to write your own XDP programs guided by examples from the XDP project.

XDP not only speeds up packet processing on standard NICs, but can also be executed pre-kernel on “SmartNICs” and offloaded in hardware (Netronome being the primary vendor of these cards). Currently only a very small number of NICs support xdpoffload, but those that do allow for true line rate routing with a Linux control plane. The end result of an XDP program running offloaded on a supported NIC is that filters are evaluated on the NIC itself, and dropped packets never even cross the PCIe boundary. All in all, this makes for a very flexible (and therefore scalable) routing platform that checks all my boxes. It’s still a huge work in progress, but I’m very excited to see how XDP progresses as a project.
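To illustrate what an XDP program actually looks like, here’s a minimal ACL sketch of my own (not a complete router): it parses the Ethernet and IPv4 headers, drops anything sourced from one hardcoded placeholder prefix (192.0.2.0/24, a documentation range), and passes the rest up the stack. A real deployment would consult a BPF map of prefixes rather than hardcoding one:

    // Minimal XDP ACL sketch: drop IPv4 packets sourced from
    // 192.0.2.0/24, pass everything else to the kernel stack.
    // Build with: clang -O2 -g -target bpf -c acl.c -o acl.o
    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/ip.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    SEC("xdp")
    int xdp_acl(struct xdp_md *ctx)
    {
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;

        // Bounds-check the Ethernet header before touching it;
        // the BPF verifier rejects the program otherwise.
        struct ethhdr *eth = data;
        if ((void *)(eth + 1) > data_end)
            return XDP_PASS;

        if (eth->h_proto != bpf_htons(ETH_P_IP))
            return XDP_PASS;  // only filtering IPv4 in this sketch

        struct iphdr *ip = (void *)(eth + 1);
        if ((void *)(ip + 1) > data_end)
            return XDP_PASS;

        // Placeholder ACL entry: source in 192.0.2.0/24 gets dropped
        if ((bpf_ntohl(ip->saddr) & 0xFFFFFF00) == 0xC0000200)
            return XDP_DROP;

        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";

Attached in xdpoffload mode on a NIC that supports it (loaded with something along the lines of ip link set dev eth0 xdpoffload obj acl.o), this check runs entirely on the card, which is exactly how a dropped packet avoids the PCIe boundary.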