
How the U.S. National Science Foundation enabled Software-Defined Networking


This article summarizes the story of how SDN arose. So many research projects, papers, companies, and products grew out of SDN that it is impossible to include all of them here. The foresight of NSF in the early 2000s, funding a generation of researchers at just the right time to work closely with the rapidly growing hyperscalers, led quite literally to a revolution in how networks are built today.

The commercial success of SDN drove further interest among academic researchers. The NSF and other government agencies, especially the Defense Advanced Research Projects Agency (DARPA), sponsored additional research on SDN platforms and use cases that continues to this day. The SDN research community broadened significantly, well beyond computer networking, to include researchers in the neighboring disciplines of programming languages, formal verification, distributed systems, algorithms, security and privacy, and more, all helping lay stronger foundations for future networks.

These two high-profile use cases—multi-tenant virtualization and wide-area traffic engineering—drew significant commercial attention to SDN. Indeed, NSF-funded research led directly to the creation of several successful SDN start-up companies, including Big Switch Networks (open source SDN controllers and management applications, acquired by Arista), Forward Networks (network verification products), Veriflow (network verification products, acquired by VMware), and Barefoot Networks (programmable switches, acquired by Intel), to name a few. SDN influenced the large networking vendors, with Cisco, Juniper, Arista, HP, and NEC all creating SDN products. Today, AMD, Nvidia, Intel, and Cisco all sell P4-programmable products, and in 2019 about a third of the papers appearing at ACM SIGCOMM were based on P4 or programmable forwarding.

The hyperscalers used SDN to realize two especially important use cases. First, within a single datacenter, cloud providers wanted to virtualize their networks to provide a separate virtual network for each enterprise customer (or “tenant”) with its own IP address space and networking policies. The start-up company Nicira, which emerged from the NSF-funded Ethane project, developed the Network Virtualization Platform (NVP) [26] to meet this need. Nicira was later acquired by VMware, and NVP became NSX. Nicira also created Open vSwitch (OVS) [33], an open source virtual switch for Linux with an OpenFlow interface. OVS grew rapidly and became the key to enabling network virtualization in datacenters around the world. Second, the hyperscalers wanted to control traffic flows across their new private wide-area networks and between their datacenters. Google adopted SDN to control how traffic is routed in its B4 backbone [23, 39], using OpenFlow switches controlled by ONIX, the first distributed controller platform [27]. When Google first described B4 at the Open Networking Summit in 2012, it sparked a global surge in research and commercialization of SDN. There were so many papers at ACM SIGCOMM that a separate venue, Hot Topics in Software-Defined Networking (HotSDN, later SOSR), was formed.
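
To make the multi-tenant idea concrete, here is a minimal sketch in plain Python (this is not NVP or OVS code; the table, field names, and port names are all hypothetical) of how match-action rules keyed on a tenant identifier let two tenants reuse the same private IP address without their traffic mixing:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Match:
        tenant_id: int   # overlay tenant tag (for example, a tunnel key carried with the packet)
        dst_ip: str      # tenant-private address; may overlap across tenants

    @dataclass
    class Action:
        output_port: str  # virtual port toward the destination VM

    class FlowTable:
        """A toy match-action table, in the spirit of an OpenFlow flow table."""
        def __init__(self):
            self.rules = {}

        def install(self, match, action):
            self.rules[match] = action            # a controller would push this rule down

        def forward(self, tenant_id, dst_ip):
            rule = self.rules.get(Match(tenant_id, dst_ip))
            return rule.output_port if rule else None  # no match: punt to the controller or drop

    table = FlowTable()
    # Both tenants use 10.0.0.5, but the tenant_id in the match keeps them isolated.
    table.install(Match(tenant_id=1, dst_ip="10.0.0.5"), Action(output_port="vm-a"))
    table.install(Match(tenant_id=2, dst_ip="10.0.0.5"), Action(output_port="vm-b"))
    assert table.forward(1, "10.0.0.5") == "vm-a"
    assert table.forward(2, "10.0.0.5") == "vm-b"

In a real deployment the rules live in a virtual switch such as OVS on each hypervisor, and the controller keeps them consistent across thousands of hosts.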

SDN adoption by cloud hyperscalers. In parallel with the early academic research on SDN, large technology companies such as Microsoft, Google, Amazon, and Facebook began building large datacenters full of servers that hosted their own popular Internet services and, increasingly, the services of enterprise customers. Datacenter owners grew frustrated with the cost and complexity of the commercially available networking equipment; a typical datacenter switch cost more than $20,000, and a hyperscaler needed about 10,000 switches per site. They decided they could build their own switch box for about $2,000 using off-the-shelf switching chips from companies such as Broadcom and Marvell, and then use their own armies of software developers to create optimized, tailored software using modern software practices. Reducing cost was good, but it was control they wanted, and SDN gave them a quick path to get it.

Programmable Open Mobile Internet (POMI) Expedition: In 2008, the NSF POMI Expedition at Stanford expanded funding for SDN, including its use in mobile networks. POMI funded the early development of ONOS, an open source distributed controller [8], and the widely used Mininet network emulator for teaching SDN and for testing ideas before deploying them in real networks. POMI also funded the first explorations of programmable forwarding planes, setting the stage for the first fully programmable switch chip [10] and the widely used P4 language [9].
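
As a small illustration of the workflow Mininet enables, the script below (using Mininet's standard Python API; it requires a Mininet installation and typically root privileges) emulates two hosts attached to one switch and checks connectivity before any idea touches real hardware:

    from mininet.net import Mininet
    from mininet.topo import SingleSwitchTopo
    from mininet.log import setLogLevel

    def run():
        setLogLevel('info')
        # Two hosts connected to a single switch, managed by Mininet's default controller.
        net = Mininet(topo=SingleSwitchTopo(k=2))
        net.start()
        net.pingAll()   # verify the emulated hosts can reach each other
        net.stop()

    if __name__ == '__main__':
        run()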

Future Internet Design (FIND): In 2007, NSF started the FIND program to support new Internet architectures that could be prototyped and evaluated on the GENI test bed. The FIND program and its 2010 successor, Future Internet Architecture (FIA), expanded the community working on clean-slate network architectures and fostered alternative designs. The resulting ideas were bold and exciting, including better support for mobility, content delivery, user privacy, secure cloud computing, and more. Many of these clean-slate designs were prototyped and evaluated in the real world; many leveraged SDN and improved its foundations. As momentum for clean-slate networking research grew in the U.S., the rest of the world followed suit with programs such as the EU Future Internet Research and Experimentation (FIRE) program.

Global Environment for Network Innovation (GENI): NSF and researchers wanted to try out new Internet architectures on a nationwide, or even global, platform. Computer virtualization was already widely used to share a common physical infrastructure, so could we do the same for a network? In 2005, "Overcoming the Internet Impasse through Virtualization" [5] proposed an approach. The next year, NSF created the GENI program, with the goal of creating a shared, programmable national infrastructure on which researchers could experiment with alternative Internet architectures at scale. GENI funded early OpenFlow deployments on college campuses, sliced by FlowVisor [35] to allow multiple experimental networks to run alongside each other on the same production network, each managed by its own experimental controller. This, in turn, led to a proliferation of new open source controllers (Beacon, POX, and Floodlight). GENI also led to a programmable virtualized backbone network platform [6] and an experimental OpenFlow backbone in Internet2 connecting multiple universities, which in turn prompted OpenFlow-enabled switches from Cisco, HP, and NEC. GENI funded the purchase of OpenFlow whitebox switches from ODMs and the open source software to manage them. NSF also funded the NetFPGA project, which enabled experimental OpenFlow switches in Internet2. NSF brought together a community of researchers driven by much more than the desire to create experimental test beds; many researchers came to realize that programmability and virtualization were, in fact, key capabilities needed for future networks [5, 16].
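
To give a flavor of how lightweight those controllers were, the snippet below mirrors POX's classic sample "hub" component (OpenFlow 1.0 API): when a switch connects, the controller installs a single rule that floods every packet out of all ports. Saved as a module under POX's ext/ directory, it can be launched with ./pox.py followed by the module name.

    from pox.core import core
    import pox.openflow.libopenflow_01 as of

    log = core.getLogger()

    def _handle_ConnectionUp(event):
        # Install one flow rule on the newly connected switch:
        # match everything (no match fields set) and flood out of all ports.
        msg = of.ofp_flow_mod()
        msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
        event.connection.send(msg)
        log.info("Hub rule installed on %s", event.connection)

    def launch():
        # POX calls launch() when the component starts.
        core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)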

100×100 project: In 2003, the NSF launched the 100×100 project as part of its Information Technology Research program. The goal was to create communication architectures that could provide 100 Mb/s networking to all 100 million American homes. The project brought together researchers from Carnegie Mellon, Stanford, Berkeley, and AT&T. One key aspect of the project was the design of better ways to manage large networks. This research led to the 4D architecture for logically centralized network control of a distributed data plane [21] (which itself built upon and generalized the routing control platform work at AT&T [15]), Ethane (a system for logically centralized control of access control in enterprise networks) [11], and OpenFlow (an open interface for installing match-action rules in network switches) [28], as well as the creation of the first open source network controller, NOX [22].
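
The core idea of the 4D architecture, logically centralized control over a distributed data plane, can be sketched in a few lines of plain Python (this is an illustration, not the 4D prototype; the topology and port numbers are made up): a central decision element computes paths over a global view of the network and emits the per-switch forwarding entries that would be pushed down to the switches.

    from collections import deque

    # Hypothetical global view held by the decision element: switch -> {neighbor: output port}
    topology = {
        "s1": {"s2": 1, "s3": 2},
        "s2": {"s1": 1, "s3": 2},
        "s3": {"s1": 1, "s2": 2},
    }

    def shortest_path(src, dst):
        """BFS over the global view, computed centrally instead of by a
        distributed protocol running on every switch."""
        prev = {src: None}
        queue = deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                path = []
                while node is not None:
                    path.append(node)
                    node = prev[node]
                return list(reversed(path))
            for neighbor in topology[node]:
                if neighbor not in prev:
                    prev[neighbor] = node
                    queue.append(neighbor)
        return None

    def forwarding_entries(src, dst):
        """Turn the chosen path into (switch, output port) entries to push down."""
        path = shortest_path(src, dst)
        return [(hop, topology[hop][nxt]) for hop, nxt in zip(path, path[1:])]

    print(forwarding_entries("s1", "s3"))   # [('s1', 2)]: s1 forwards directly to s3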

Early NSF-funded SDN research. In 2001, a National Academies report, Looking Over the Fence at Networks: A Neighbor’s View of Networking Research [30], pointed to the perils of Internet ossification: an inability of networks to change to satisfy new needs. The report highlighted three dimensions of ossification: intellectual (backward compatibility limits creative ideas), infrastructure (it is hard to deploy new ideas into the infrastructure), and system (a rigid architecture leads to fragile, shoe-horned solutions). In an unprecedented move, the NSF set out to address Internet ossification by investing heavily over the next decade. These investments laid the groundwork for SDN; we describe them here through the lens of the support we received in our own research groups. Importantly, these and other government-funded research programs fostered a community of researchers that together paved the way for commercial adoption of SDN in the years that followed.
