Networking technology is not static. It is constantly evolving, and yet when organizations refresh or 'modernize' their networks, they often purchase shiny new network switches but insist on deploying the same networking protocols that ran on the previous network, many of which were ratified in the late 1980s and early 1990s. There are multiple reasons for this. The first and most obvious is that people generally work with the technologies they already know. Familiarity lends a comfort factor because the protocols and practices are recognized and understood. The second factor is that newer protocol architectures are necessarily obscure at first: every new architecture must have a starting point for implementation, and early deployments number in the single digits. This obscurity results in a lack of awareness of the newer architectures and their details, benefits, and potential shortcomings. This too is understandable; it takes time for new technologies to become mainstream, and often they fail to do so. The third factor, risk, is strongly related to the first two. IT staff are often staking their jobs or even their career stability on the success of the networking projects they drive. As a result, decision-makers take an understandably conservative approach to embracing new, cutting-edge technology.

I am not a young man. I bring this up because I have a perspective that goes back well before the Internet existed. I grew up in a time of wall-mounted dial phones with ridiculously long cords that always knotted up. There were also party lines. I can still remember trying to call one of my friends and finding one of our neighbors on the line: "Oh Eddie honey, we're going to be a little while." I would then get on my bike and ride over to my friend's house because I understood what "a little while" actually meant. I also remember Ethernet in its original form. That is how I began my career: by then, telephone systems were digital, and I was working with both thinnet and thicknet Ethernet topologies. I also had the opportunity to run several testbeds for 10BaseT. The rest is history. I have been a protocol architect ever since, and I have watched Ethernet mature into the genuinely stunning networking technology that it is today.

While Ethernet today is ubiquitous, it was not always so. When I began my career, there were several competing 'data link' protocols, such as Token Ring, FDDI, and ATM. The important point is that despite using multiple data link protocols throughout the 1990s, the industry quickly gravitated to the singular use of Internet Protocol (IP) at the networking layer. Unlike Ethernet, those other data link protocols are now deprecated and no longer in use. As the saying goes, "Ethernet won the protocol war." By the end of the 1990s, almost all networks were deployed as Ethernet and IP.

This is where our story begins. We will start with the dichotomous model of Ethernet and IP and quickly analyze how different interpretations of networking fabric technologies have evolved over time and exactly what they attempt to accomplish. Let's start with Figure 1, which displays the 'traditional' layered protocol architecture.

Figure 1. The 'traditional' layered protocol architecture

This is the tried-and-true model that we have all come to know. It is familiar and well understood; hence it is the model most people reach for in implementation. But there are a few things to note about it. The first is that there is a top-down dependency: upper-level protocols depend on the functionality of the underlying protocols. This also means that lower-level protocol failures can cause wholesale network outages; examples are spanning-tree loops at the Ethernet layer or route flapping at the IP routing layer. The second point is that each of these protocols is discrete in both control and functionality. The OSPF routing protocol is totally unaware of Ethernet. Protocol Independent Multicast (PIM) is unaware of the underlying OSPF. VLANs are unaware of the underlying Ethernet. Therefore, these protocols need to be coordinated and provisioned into a service delivery chain. Due to this high-touch environment, major service or topology changes often require a service outage, which typically occurs in the wee hours of the morning. We've all been there. But the reality is that we require IP-based protocol overlays because traditional Ethernet (IEEE 802.3) has no sense of a path. Traditional Ethernet is a flood-and-learn technology. Its flooding domains are divided by 802.1Q Virtual Local Area Networks (VLANs), and loops are prevented by the 802.1D Spanning Tree Protocol (STP).
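To make 'flood and learn' concrete, here is a minimal Python sketch of the behavior: the switch learns source MAC addresses as frames arrive and floods any frame whose destination it has not yet learned. The class name, MAC strings, and port numbering are illustrative assumptions, not any vendor's implementation; real switches do this in hardware, per VLAN.

```python
# A minimal sketch of Ethernet "flood and learn" forwarding.
# All names and values here are illustrative.

class FloodAndLearnSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # learned MAC address -> port

    def receive(self, src_mac, dst_mac, ingress_port):
        # Learn: associate the source MAC with the port it arrived on.
        self.mac_table[src_mac] = ingress_port
        # Forward: if the destination is known, send out that one port...
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        # ...otherwise flood out every port except the ingress port.
        return [p for p in self.ports if p != ingress_port]

switch = FloodAndLearnSwitch(ports=[1, 2, 3, 4])
print(switch.receive("aa:aa", "bb:bb", ingress_port=1))  # unknown dst -> flood [2, 3, 4]
print(switch.receive("bb:bb", "aa:aa", ingress_port=2))  # learned dst -> [1]
```

Note that nothing in this process conveys a path: each switch makes an isolated, local decision, which is exactly why loops must be pruned externally by STP.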

But this is not the best way to view the model. Multiprotocol Label Switching (MPLS) is a networking fabric, meaning that it has service signaling, control, and data plane functions, as shown in Figure 2.

Figure 2. The MPLS Backbone fabric

This model is a bit more complex, and we need to cover a few important points before moving on. Note that the signaling plane contains a complex set of different protocols, each of which provides for specific service options. Herein lies the complexity of MPLS as a network architecture. Note also the existence of two control planes, one for MPLS and one for the underlying IP networking layer. Together these coordinate the MPLS data plane, shown on the left-hand side of Figure 2, which also depicts elegant, direct labeling onto the Ethernet data plane. As a result, we see two data planes: the MPLS labeling, which results from the service signaling and control planes displayed on the right-hand side of Figure 2, and then the Ethernet forwarding plane. This sets a precedent that we want to take note of and will return to later in this article: the notion of labeling directly onto the Ethernet transport.
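As a rough illustration of what 'labeling directly onto the Ethernet transport' means on the wire, the sketch below packs a single MPLS label stack entry as defined in RFC 3032; the entry sits between the Ethernet header (EtherType 0x8847 for unicast MPLS) and the original payload. The function name and sample values are assumptions for illustration only.

```python
import struct

def mpls_label_entry(label, tc=0, bottom_of_stack=True, ttl=64):
    """Pack one 4-byte MPLS label stack entry (RFC 3032):
    20-bit label | 3-bit traffic class | 1-bit bottom-of-stack | 8-bit TTL."""
    value = (label << 12) | (tc << 9) | (int(bottom_of_stack) << 8) | ttl
    return struct.pack("!I", value)

# An MPLS-labeled Ethernet frame carries EtherType 0x8847 (unicast MPLS),
# then the label stack, then the original payload.
ETHERTYPE_MPLS = 0x8847
entry = mpls_label_entry(label=1027, ttl=255)
print(entry.hex())  # -> 004031ff
```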

The thrill of TRILL - eh, maybe not

TRILL stands for 'Transparent Interconnection of Lots of Links'. Really. Not kidding. TRILL is a strictly layer 2 fabric technology based on meshed VLANs. Layer 3 and above services are obtained by overlaying the required protocols, such as OSPF and PIM, in a layered fashion. TRILL provides minimal improvement over the traditional model, and scaling became an issue in larger topologies. There are still quite a few TRILL fabrics out there, but the protocol model has reached a dead end from an evolutionary perspective.

Similarly, a series of fabric architectures were developed that utilized the IP network as the underlay topology. In this model, the fabric is an overlay on top of the IP network. In most instances, this overlay is accomplished using Virtual Extensible LAN (VXLAN), which was developed by VMware for logical data center interconnect. The original idea was to create tunnels between data centers over the IP network for virtual machine migrations. The concept of IP fabrics builds on this idea to establish entire campus topologies. This model has obvious benefits, and it can be overlaid onto an existing network infrastructure. However, note that there are multiple interpretations of what an IP fabric is and of the components involved. This is clearly shown in Figure 3.
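For context, the VXLAN encapsulation itself is simple: an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI), wrapped in UDP (IANA destination port 4789) over the IP underlay. Below is a minimal sketch of packing that header per RFC 7348; the function names and the sample VNI are illustrative assumptions.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def vxlan_header(vni):
    """Pack the 8-byte VXLAN header (RFC 7348):
    flags byte (0x08 = VNI present), 3 reserved bytes,
    24-bit VNI, 1 reserved byte."""
    return struct.pack("!B3xI", 0x08, vni << 8)

def vxlan_encapsulate(inner_ethernet_frame, vni):
    # The complete encapsulation adds outer Ethernet/IP/UDP headers
    # around this; only the VXLAN header itself is shown here.
    return vxlan_header(vni) + inner_ethernet_frame

print(vxlan_header(vni=5000).hex())  # -> 0800000000138800
```

The key observation is that the inner Ethernet frame rides inside UDP/IP: the IP underlay, not Ethernet, carries the topology.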

Figure 3. A comparison of different IP fabric approaches

The left-hand side of Figure 3 depicts Cisco's IP Fabric technology. Note that the signaling and fabric control plane is provided by the Locator/ID Separation Protocol (LISP), an obscure protocol that has been around for quite some time without finding a real purpose. Note that there is a secondary control plane for the IP network to establish the topology for LISP behavior. The right-hand side of Figure 3 illustrates the 'industry standard' IP fabric based on Border Gateway Protocol Ethernet Virtual Private Network (BGP EVPN). I want to emphasize that 'industry standard' is a misnomer. There is no 'standard' IP fabric in the sense that you cannot expect vendor A and vendor B solutions to work and function together. Each vendor has subtleties in the network signaling layer that prevent interoperability, and it follows that Cisco's IP Fabric is an island unto itself.

So, while there are benefits to IP fabrics overlaid on top of an existing IP network, there are still complexities. These IP fabrics have specific dictates on topology and infrastructure, making them less straightforward than initially assumed. IP fabrics are also 'proprietary' in the general sense of the term; there is no real interoperability from a multi-vendor perspective. In addition, all the network protocol architectures that I have covered thus far can be 'enumerated': the network topology and even discrete service overlays can be discovered and mapped out by determined cyber attackers. The reason for this is the continued use of IP to establish the network topology and the service delivery underlay. Also notice that the concept of layered dependence in the architecture model remains, albeit simplified. The Ethernet data plane is simply a transport for the overlying protocol activity. If anything, we are further away from that very attractive concept of labeling directly onto the Ethernet transport, as was the case with the MPLS data plane. It seems that we have taken an 'oxbow' in the development path. How does the dichotomy get broken? How does a new paradigm evolve? How do we liberate IP?

The liberation of IP

Early in 2004, I was called into a series of meetings about a unique project. I want to note that I was quite busy at the time; I was working on several IPv6 projects and a patent for session border controller (SBC) technology for VoIP. Fortunately, I became intrigued with the project and decided to take it on. My primary task was helping to create a method for broadcast and scoped multicast and, if possible, true IP multicast without the use of IP. We aimed to create a pure Ethernet fabric where IP played no role in establishing the network topology or the service delivery. In 2010, we completed the delivery of true IP multicast topologies without the use of IP or PIM. In 2011, we released the solution as a pre-standard implementation. Importantly, these early implementers are still running the same technology to this day.

In the spring of 2012, IEEE 802.1aq was standardized. (It has since been wrapped into the 802.1Q standard as of 2014.) The protocol architecture model is illustrated in Figure 4. Note the immense streamlining of the protocol model and the atomization of services, with dependencies removed from the service delivery model. The model consists of the Ethernet transport, where labeling is once again applied directly onto the transport using MAC-in-MAC encapsulation (IEEE 802.1ah). This provides the basic tunnel construct for the fabric. Then there are Individual Service Identifiers, or I-SIDs, which provide the service context and path behaviors of the tunnels. These can be layer 2, layer 3, multicast, or even IPv6.

Figure 4. IEEE 802.1aq/Q 2014 Ethernet fabric - Shortest Path Bridging
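To ground the encapsulation just described, the sketch below assembles an 802.1ah backbone header: backbone MAC addresses, a B-TAG (EtherType 0x88A8) carrying the backbone VLAN, and an I-TAG (EtherType 0x88E7) carrying the 24-bit I-SID, placed in front of the untouched customer frame. This is a simplified sketch under stated assumptions (priority and flag bits zeroed; all addresses and identifiers illustrative), not any vendor's implementation.

```python
import struct

ETHERTYPE_BTAG = 0x88A8  # backbone VLAN tag
ETHERTYPE_ITAG = 0x88E7  # service instance tag carrying the I-SID

def mac_in_mac_encapsulate(customer_frame, b_dst_mac, b_src_mac, b_vid, i_sid):
    """Wrap a complete customer Ethernet frame in an IEEE 802.1ah
    backbone header. Simplified: PCP/DEI/flag bits are left at zero."""
    b_tag = struct.pack("!HH", ETHERTYPE_BTAG, b_vid & 0x0FFF)     # 12-bit B-VID
    i_tag = struct.pack("!HI", ETHERTYPE_ITAG, i_sid & 0xFFFFFF)   # 24-bit I-SID
    return b_dst_mac + b_src_mac + b_tag + i_tag + customer_frame

backbone_frame = mac_in_mac_encapsulate(
    customer_frame=b"\xaa" * 64,              # entire original frame, untouched
    b_dst_mac=bytes.fromhex("010203040506"),  # backbone destination (B-DA)
    b_src_mac=bytes.fromhex("0a0b0c0d0e0f"),  # backbone source (B-SA)
    b_vid=100,                                # backbone VLAN (B-VID)
    i_sid=20010,                              # service identifier (I-SID)
)
print(backbone_frame[:22].hex())  # the 22-byte backbone header
```

Note what is absent: no IP header anywhere in the fabric encapsulation. The service context rides entirely in the I-SID on the Ethernet transport.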

The result is a service delivery mechanism that is more than equivalent to the other fabric models but achieves this without the use of IP. Instead, IP becomes completely virtualized as a network service. The network topology and service delivery mechanisms are based purely on Ethernet. The granularity of secure service delivery is based on MAC-in-MAC tunneling and the Individual Service Identifiers. Access policies at the edge provide the trigger for the service chain. As a result, automation is embedded, or implicit, in this operation. 802.1aq represents a huge leap in the evolution of protocol architecture, especially when contrasted with the traditional protocol model, in which explicit provisioning or scripting is required to build out and scale service delivery in an automated fashion.

In the protocol model, Intermediate System to Intermediate System (IS-IS) is invoked as the control plane for the MAC-in-MAC/I-SID tunneling behavior. This implementation is defined in IETF RFC 6329. As a result of this evolution, Ethernet now has an integral sense of end-to-end path behavior. Additionally, service profiles become horizontally independent, moving away from the vertically dependent service profile overlay models.
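What gives Ethernet that sense of path is the link-state computation: every bridge advertises its adjacencies via IS-IS and independently computes shortest path trees over the resulting topology database. The sketch below shows the core of that idea with a plain Dijkstra computation over a toy four-bridge topology. It illustrates the style of computation only, not RFC 6329's exact algorithm; SPB adds deterministic tie-breaking so that all bridges choose congruent, symmetric paths.

```python
import heapq

def shortest_path_tree(graph, root):
    """Compute a shortest path tree from `root` using Dijkstra's algorithm.
    In SPB, every bridge runs this kind of computation over the IS-IS
    link-state database, so all bridges agree on loop-free forwarding
    paths without any spanning-tree blocking. `graph` maps
    node -> {neighbor: link_cost}. Toy illustration only."""
    dist = {root: 0}
    parent = {root: None}
    heap = [(0, root)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue  # stale queue entry
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                parent[neighbor] = node
                heapq.heappush(heap, (nd, neighbor))
    return parent  # each node's upstream hop toward the root

# Four bridges in a square with one higher-cost diagonal link.
topology = {
    "A": {"B": 1, "C": 1},
    "B": {"A": 1, "D": 1, "C": 2},
    "C": {"A": 1, "D": 1, "B": 2},
    "D": {"B": 1, "C": 1},
}
print(shortest_path_tree(topology, "A"))
# -> {'A': None, 'B': 'A', 'C': 'A', 'D': 'B'}
```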

Another significant benefit of this evolution is that, since IP is no longer used for topology or service delivery, the network becomes 'dark' to would-be attackers seeking to enumerate it for attack surface scoping and discovery. Combined with MAC-in-MAC encapsulation, this trait makes the network very difficult to enumerate, and lateral movement becomes almost impossible. The security posture is enhanced because the network topology and discrete service overlays are effectively hidden from determined cyber attackers.

Figure 5. A comparison of the 'traditional' layered model to IEEE 802.1aq/Q 2014

Figure 5 highlights the key points of this protocol evolution. The first is the streamlining and simplification of the data, control, and signaling planes. Second, there is an atomization of services with horizontal independence. In other words, each service is totally independent of any other service within the network and is not reliant on underlying service infrastructure or protocol constructs. The result is the simplification of the network architecture as well as of any provisioning requirements. Automation becomes much more straightforward and scalable as a result.

In summary, this evolution of network protocol architecture has resulted in several innovations and benefits:

  • IP-free network topology and service delivery (stealth networking)
  • Ethernet data plane segmentation and tunneling
  • Streamlining of the protocol model into a single control and data plane
  • Atomization of network services leading to an increased capability for automation
  • Implicit automation within the architecture and the option for explicit automation workflows

An old saying used in the engineering community also applies to network protocol architectures: "Simplicity scales, complexity fails." The overall evolution of network protocol architectures will, of course, continue. 802.1aq is by no means the end of the evolution. As we move further towards smart infrastructure and communities, this evolution will continue to provide the automation and scale that networks require. Newer services and features will continue to emerge for both scale and granularity. Automation will also be enhanced by, or even based upon, machine learning (ML) and artificial intelligence (AI). Indeed, there is a long evolutionary path ahead of us.
