ANALYSIS OPINION




Achieving carrier-grade software defined networks


Geoff Bennett of Infinera describes the three elements needed to make optical data planes programmable


Writing anything about software defined networking (SDN) is guaranteed to get attention these days. It's a hot topic – not least because it really does offer a way to remove the domination of the Ethernet switch and core router markets by the established players, and to open up new possibilities for network functions virtualisation for network operators.




What is SDN?
The conventional (pre-SDN) approach to building networks is to interconnect switches or routers that forward packets to their destination through the network infrastructure plane. The infrastructure plane pathways are set up using distributed control plane protocols, such as OSPF in the IP world, or the GMPLS family of protocols in the transport network world. Historically, the philosophy behind distributing control plane functions has been to make the network more scalable (by dividing up the work) and more autonomous in the face of network failures, because each node knows enough about the network to route traffic around failed links.

The problem with a distributed approach is that these control planes need processing power to run them, and they are normally implemented inside a proprietary vendor operating system, so that users cannot implement new features or fix bugs by themselves. If the network is made up of multiple technology types (e.g. IP, OTN, DWDM) and multiple vendors, it may be difficult to use a single control plane to provision end-to-end connections through all of the layers.

Enter the idea of the SDN, a simple view of which is shown in Figure 1. In this architecture the control plane is removed from the network elements themselves and located in a ‘controller’. A protocol is implemented between the controller and the network elements, which exposes an API to the controller.






OpenFlow is one example of a ‘controller to data plane interface’ (CDPI) that can be used to create an SDN; more recent examples include the IETF NETCONF protocol used with a YANG data model (often referred to as NETCONF/YANG).
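To make the CDPI idea concrete, here is a minimal sketch of how a controller-side application might push configuration to a network element over NETCONF. It uses the open-source ncclient Python library; the YANG module, namespace and leaf names in the payload are hypothetical stand-ins invented for this example, not taken from the article or any specific vendor.

```python
# Minimal sketch: a controller pushing configuration to a network
# element over NETCONF (a CDPI example). Uses the open-source ncclient
# library; the YANG namespace and leaf names below are hypothetical.
from ncclient import manager

# Hypothetical payload, loosely modelled on a YANG module for an
# OTN cross-connect.
CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <cross-connects xmlns="urn:example:otn-xc">
    <cross-connect>
      <name>xc-london-paris-1</name>
      <source-port>odu4-1/1/1</source-port>
      <dest-port>odu4-2/1/1</dest-port>
    </cross-connect>
  </cross-connects>
</config>
"""

# Connect to the network element's NETCONF agent (port 830 is standard).
with manager.connect(host="198.51.100.10", port=830,
                     username="admin", password="secret",
                     hostkey_verify=False) as m:
    # Stage the change (assumes the device supports a candidate
    # datastore), then activate it.
    m.edit_config(target="candidate", config=CONFIG)
    m.commit()
```

The point is architectural: the intelligence that decides what to configure lives in the controller, and the network element simply exposes a programmatic interface.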


The other key feature of SDN is that it's intended to be ‘open’. The controller itself should implement an open API, this time called the northbound interface (NBI), to allow anybody approved for access to the controller to create and maintain network applications. Examples of such applications could include a multi-layer topology abstraction app, a multi-layer path optimisation app, and a multi-layer provisioning app, also shown in Figure 1. The controller and its associated SDN applications are made scalable using Web 2.0 techniques for network functions virtualisation (NFV). So the controller is no longer just a box, but a function of the network that can itself be virtualised for scalability and resilience. I've shown this in simple terms as multiple boxes, but in theory the controller could be distributed over multiple data centres.
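As an illustration of what an NBI application might look like from the outside, the sketch below requests an end-to-end service from a controller over a REST-style API. The endpoint URL, JSON fields and response format are hypothetical, invented for this example; real controllers each define their own NBI.

```python
# Sketch of an SDN application calling a controller's northbound
# interface (NBI) to provision an end-to-end service. The endpoint
# and payload schema are hypothetical, not from any real controller.
import requests

CONTROLLER_NBI = "https://sdn-controller.example.net/api/v1"  # hypothetical

def provision_service(a_end: str, z_end: str, gbps: int) -> str:
    """Ask the controller for an end-to-end path; return its service ID."""
    payload = {
        "a-end": a_end,
        "z-end": z_end,
        "bandwidth-gbps": gbps,
        "layers": ["ethernet", "otn", "dwdm"],  # multi-layer request
    }
    resp = requests.post(f"{CONTROLLER_NBI}/services",
                         json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["service-id"]

if __name__ == "__main__":
    sid = provision_service("router-london-01:et-0/0/1",
                            "router-paris-02:et-0/0/5", 100)
    print(f"Provisioned service {sid}")
```

Note that the application never talks to the individual network elements; it states its intent to the controller, which works out how to satisfy it across the layers.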


Why use SDN?
Most people who are following the topic are aware that SDN architectures are now commercially viable for university research networks (where SDN was born), for enterprise networks, and for cloud data centres. Each of these application areas has different motivations for using a centralised SDN architecture. For example: university researchers love the open aspect of the switches, which allows them to do research; enterprise users love the fact that they can buy low-cost white-label switches that allow them to break free of the price premiums charged by big-brand vendors; and data centre operators like the deterministic way that they can manage data flows and override network control plane operations to help them achieve NFV.


More recently, service providers have been looking at a ‘carrier SDN’ concept, and one of the main drivers behind this is to allow them to provision services through the multiple technology and vendor layers that may comprise a typical transport network. As I mentioned above, this can be difficult to achieve with the current instances of distributed control planes.

For the packet layers – such as Ethernet, IP and MPLS – there are no substantial technology barriers to an SDN architecture. Likewise, for the OTN digital bandwidth management layer that routers typically plug into, there is no real barrier to dynamic service creation. But in the DWDM world there's a problem.
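To see why multi-layer provisioning is a natural fit for a centralised controller, here is a toy sketch of the kind of computation a multi-layer path optimisation app might perform. It models an IP-over-OTN-over-DWDM network as a single weighted graph using the open-source networkx library; the node names and link costs are invented for illustration.

```python
# Toy sketch of multi-layer path computation - the kind of job a
# 'multi-layer path optimisation app' might do on a controller's
# global view. Node names and link costs are invented for illustration.
import networkx as nx

g = nx.Graph()
# IP-layer link (router to router), with an administrative cost.
g.add_edge("rtr-A", "rtr-B", weight=10, layer="ip")
# OTN-layer links (ODU4 connections between digital switches).
g.add_edge("rtr-A", "otn-1", weight=1, layer="otn")
g.add_edge("otn-2", "rtr-B", weight=1, layer="otn")
# DWDM-layer link (optical line system between the OTN switches).
g.add_edge("otn-1", "otn-2", weight=3, layer="dwdm")

# Because the controller sees every layer in one graph, it can weigh
# the existing IP link against a new OTN/DWDM path.
path = nx.shortest_path(g, "rtr-A", "rtr-B", weight="weight")
print(path)  # ['rtr-A', 'otn-1', 'otn-2', 'rtr-B'] - cost 5 beats the IP hop
```

A distributed control plane confined to a single layer cannot make this comparison, because no one node sees the costs in all three layers at once.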


Figure 2 shows how traffic from the upper two pairs of routers is carried over two 100G transponders on the long-distance fibre (shown by the solid red and green lines). A third router is installed, but the dotted line is where the third 100G transponder will be – once it has been installed by the service engineers. This is the fundamental barrier to making the optical layer ‘programmable’: capacity is not available until an engineer goes out in a truck and installs it.


Super-channels
This problem was highlighted by Neil McRae, BT's chief network architect, at the recent WDM & Next Generation Networks conference in Nice, France. He pointed out that ‘a brick that

