


Managing Tables in Our New Virtual Reality

Networking really comes down to the art of managing tables and rules.

In traditional networks, MAC addresses are inserted into tables using standard learning techniques. When a packet arrives, if its source MAC address is not known, it is added to the MAC forwarding table for that VLAN with the ingress interface as its destination. If the destination is unknown, the packet is flooded throughout the VLAN, with the side effect that each switch along the way inserts the source MAC address into its own forwarding table for that VLAN. Assuming the destination actually exists, one of the flooded copies will reach it. The device with the destination MAC address receives the packet and (hopefully) responds. The response is destined for the device that sent the original packet, and each switch has already learned how to reach that device from the flooded packet. The packet makes its way back to the original source, with the side effect that the source of this response packet is learned and inserted into the forwarding tables along the switch path back to the original sender. It sounds complicated, but it's basic MAC learning, and it is how Ethernet networks have found sources and destinations for a long, long time.
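The learn-then-lookup behavior described above can be sketched in a few lines. This is an illustrative toy, not any vendor's implementation; the names `MacTable` and `FLOOD` are invented for the example.

```python
# Sketch of classic per-VLAN MAC learning and unknown-destination flooding.
FLOOD = object()  # sentinel: send out all ports in the VLAN except ingress


class MacTable:
    """MAC forwarding table built purely by source learning."""

    def __init__(self):
        self._entries = {}  # (vlan, mac) -> egress port

    def forward(self, vlan, src_mac, dst_mac, in_port):
        # Learn: remember which port the source MAC was seen on.
        self._entries[(vlan, src_mac)] = in_port
        # Lookup: known destination -> single port, unknown -> flood.
        return self._entries.get((vlan, dst_mac), FLOOD)


table = MacTable()
# A sends to B; B is unknown, so the frame is flooded...
assert table.forward(10, "aa:aa", "bb:bb", in_port=1) is FLOOD
# ...but A was learned in passing, so B's reply goes straight out port 1.
assert table.forward(10, "bb:bb", "aa:aa", in_port=2) == 1
```

The second assertion is the whole trick: flooding the first packet has the side effect of teaching every switch the return path.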

IP addresses are learned slightly differently. ARP is used to create a mapping between a MAC address and an IP address. When a device wants to send an IP packet to a device on the same subnet, it sends out an ARP request for the destination device, and that device (or someone else on its behalf, in the case of proxy ARP) responds with a reply that provides the mapping between MAC address and IP address. When the IP packet is destined for another subnet, the source passes the packet to the gateway for the destination subnet, using ARP in exactly the same way to get the MAC-to-IP address mapping. That gateway is determined by yet another table, the IP routing table, which contains IP subnets and a pointer to the IP address of the device that can get you there. The latter is built from static entries configured by the administrator, or by routing protocols like OSPF, IS-IS or BGP.

So far so good. We have three tables to maintain: the MAC table (also known as the L2 forwarding table), the ARP table (also known as the IP host table) and the IP routing table.
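The "which MAC do I ARP for?" decision described above comes down to a longest-prefix match against the routing table. A minimal sketch, assuming a host with one directly connected subnet and a default route; `next_hop_ip` and the table contents are hypothetical, not a real API:

```python
import ipaddress

# IP routing table: prefix -> gateway IP (None means directly connected).
ROUTES = {
    ipaddress.ip_network("10.0.1.0/24"): None,
    ipaddress.ip_network("0.0.0.0/0"): ipaddress.ip_address("10.0.1.1"),
}


def next_hop_ip(dst):
    """Longest-prefix match: ARP for dst itself if on-link, else the gateway."""
    dst = ipaddress.ip_address(dst)
    best = max((n for n in ROUTES if dst in n), key=lambda n: n.prefixlen)
    return ROUTES[best] or dst


assert str(next_hop_ip("10.0.1.42")) == "10.0.1.42"  # same subnet: ARP the host
assert str(next_hop_ip("8.8.8.8")) == "10.0.1.1"     # off-subnet: ARP the gateway
```

Either way, ARP then turns that next-hop IP into a MAC address, tying the routing table back to the ARP table.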

When a network is divided into multiple virtual networks, each of these tables may be split into multiple versions, one for each virtual network. As an example, I may have 10 separate L2 forwarding tables, each containing many MAC addresses in many VLANs. This immediately brings us to the first challenge in managing these tables. If I receive an Ethernet packet, which of the multiple tables do I use to look up the destination? Or similarly, into which table do I insert the source MAC address I just learned? It is clear that a switch must know which virtual network a packet belongs to before it attempts to use its L2 forwarding table. Similarly, when learning the source of that packet, the switch needs to know which of the multiple tables to insert its address into.

There are several ways to associate a packet with a forwarding table, or really with a Virtual Network. The most basic, and probably the most used, is a static mapping of the combination of ingress port (on the switch) and VLAN. The administrator has created a table that simply says "any packet coming in on this port on this VLAN belongs to Virtual Network X". Virtual Network X is now associated with one of the forwarding tables, and we have found the table we are dealing with. We can learn sources and put them in the right table, and we can look up destinations. When the destination is not present in that table, we have our next challenge: how do we flood in a Virtualized Network? We would normally send the packet out every port that has this VLAN configured (along an STP or otherwise managed loop-free path), but we want to reach only those switches that have this Virtual Network configured (statically or dynamically).
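The static classification step above can be sketched as one extra lookup in front of the learning logic. The table names and values here are made up for illustration:

```python
# Static (ingress port, VLAN) -> Virtual Network classification table,
# as provisioned by the administrator.
PORT_VLAN_TO_VN = {
    (1, 100): "VN-X",
    (2, 100): "VN-X",
    (1, 200): "VN-Y",
}

# One independent L2 forwarding table per Virtual Network.
FIB = {vn: {} for vn in set(PORT_VLAN_TO_VN.values())}


def classify_and_learn(port, vlan, src_mac):
    vn = PORT_VLAN_TO_VN[(port, vlan)]  # which Virtual Network is this?
    FIB[vn][src_mac] = port             # learn into that VN's table only
    return vn


assert classify_and_learn(1, 100, "aa:aa") == "VN-X"
assert classify_and_learn(1, 200, "aa:aa") == "VN-Y"
# The same MAC now lives in two independent tables, one per Virtual Network:
assert FIB["VN-X"]["aa:aa"] == 1 and FIB["VN-Y"]["aa:aa"] == 1
```

The last assertion shows why classification must happen first: the same MAC address can legitimately appear in several Virtual Networks at once.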

This is where different solutions take different approaches. In Shortest Path Bridging, for instance, the set of switches that have member ports in a specific Virtual Network (I-SID in SPB terms) is discovered using IS-IS. As part of that discovery, an SPF-calculated tree is created covering all these switches, and the packet is flooded along this tree, very similar to normal VLAN flooding. Because SPB traffic is encapsulated, only the edge switches decapsulate the packet and learn the original source.

Overlay networks like VXLAN solve the problem in a very similar way in the pure definition of the protocol. When a packet is destined for an unknown destination, it is "flooded" to all other VXLAN endpoints that have members in that Virtual Network (VNI in the case of VXLAN). Because VXLAN runs on top of IP, its version of flooding needs an IP-based mechanism, and the mechanism of choice is IP Multicast. Each VNI is represented by an IP multicast group, and all VXLAN endpoints (VTEPs) join this group. When a packet needs to be flooded, it is multicast to that specific group; the receiving VTEPs decapsulate the packet, learn the source, and all is good.

There have been many articles and opinions on the use of IP Multicast for flooding (which is essentially the same as multicasting or broadcasting) in VXLAN. One of VXLAN's strengths is that it can travel across any IP infrastructure, including the largest of them all, the Internet. However, ubiquitous IP connectivity is nowhere near the same as ubiquitous IP Multicast connectivity, and this is why most controller-based (distributed or central) overlay solutions have attacked that problem. And this is also where it gets complicated.

A first benefit of having a controller that manages the overlay network is simple: you have a complete inventory of all overlay endpoints that exist in the network. You probably even have an inventory of which Virtual Networks each serves, because all of this is provisioned data. This means I don't have to discover all the endpoints a packet needs to be flooded to; I know them all, and I can simply replicate the packet to each and every endpoint as a unicast packet. Current implementations of controller-based virtualization solutions use this. The advantage is that it is really simple. The disadvantage: it's a lot of overhead when you have many endpoints.
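The two flooding strategies described above make a neat contrast in pseudocode terms. The VTEP inventory and multicast group addresses below are illustrative values, not real configuration:

```python
VNI_TO_GROUP = {5000: "239.1.1.1"}  # multicast flooding: one group per VNI
VNI_TO_VTEPS = {5000: ["10.0.0.1", "10.0.0.2", "10.0.0.3"]}  # controller inventory


def flood_multicast(vni):
    # One encapsulated copy; the IP fabric replicates it to all group members.
    return [("multicast", VNI_TO_GROUP[vni])]


def flood_head_end(vni, my_vtep):
    # N-1 unicast copies, one per remote VTEP; simple, but sender-side
    # overhead grows linearly with the number of endpoints.
    return [("unicast", v) for v in VNI_TO_VTEPS[vni] if v != my_vtep]


assert len(flood_multicast(5000)) == 1
assert len(flood_head_end(5000, "10.0.0.1")) == 2
```

Head-end replication trades the multicast-everywhere requirement for that linear replication cost, which is exactly the overhead the paragraph above points at.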

When you think through the creation of overlay networks and how VMs are created, attached to Virtual Switches and attached to Virtual Networks, you quickly realize that all of this is information provisioned through the overlay and VM orchestration systems. Which raises the question: why attempt to dynamically learn at all? If I know exactly where a VM is (using a VM as the equivalent of a MAC and IP address here), which VTEP it is hiding behind, and which Virtual Network it is part of, why can I not simply tell all the other VTEPs about this from the controller? All provisioned information could be exchanged outside of the normal inline learning mechanisms, so mechanisms like flooding and even ARP are greatly reduced or even completely removed from such networks. All information is known, and the controller proactively pushes it to those that need to know.
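That proactive push can be sketched as a controller writing into every VTEP's tables at VM-attach time. The class and method names here are invented for illustration; this is not how any particular controller's API looks:

```python
class Vtep:
    def __init__(self):
        self.mac_to_vtep = {}  # (vni, mac) -> remote VTEP IP
        self.arp_cache = {}    # (vni, ip)  -> mac


class Controller:
    def __init__(self, vteps):
        self.vteps = vteps

    def vm_attached(self, vni, mac, ip, behind_vtep):
        # Orchestration already knows all of this, so push it everywhere
        # instead of waiting for flooding and ARP to discover it.
        for vtep in self.vteps:
            vtep.mac_to_vtep[(vni, mac)] = behind_vtep
            vtep.arp_cache[(vni, ip)] = mac


a, b = Vtep(), Vtep()
Controller([a, b]).vm_attached(5000, "aa:aa", "10.0.1.5", "10.0.0.2")
# Every VTEP can now answer ARP locally and forward without ever flooding:
assert b.arp_cache[(5000, "10.0.1.5")] == "aa:aa"
assert a.mac_to_vtep[(5000, "aa:aa")] == "10.0.0.2"
```

With the tables pre-populated like this, unknown-destination flooding and ARP broadcasts become the exception rather than the rule.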

It is a different way of solving some of the more challenging (but basic and fundamental) network behaviors, but one that makes complete sense. It does raise many scaling questions: we have taken methods that have traditionally been distributed and turned them into centralized table management. And whether the controller runs distributed, clustered or as a single entity, it is still a centrally managed entity. The next little while will tell us whether the scale and performance are sufficient for the networks we intend to build.

This does not mean, however, that there is no need for dynamic learning in an overlay network. Any network will have devices that are outside the control of the overlay controller. These devices need to be discovered and learned somehow. That is the work of VXLAN gateways and Service Nodes in NSX. And those create a completely new challenge, one less of functionality and far more of control. The ultimate challenge is not so much how the tables are managed; that is "just" engineering work. The real challenge is who manages them.

[Today's fun fact: The plastic things on the end of shoelaces are called "aglets". And I guarantee you won't remember that by tomorrow]

The post Managing Tables in our new Virtual Reality appeared first on Plexxi.



More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.
