Top Five Considerations When Evaluating I/O Virtualization

Delivering on the promise of virtualization

Most commonly, the term "virtualization" refers to the creation of virtual instances of an operating system or virtual machines (VMs) on physical server hardware, known as server virtualization. However, the development of server virtualization has led to the creation of other virtual resource types, including storage, network and I/O resources. Whether a server is physical or virtual, it should be "balanced" with the right amount of resources to achieve its intended operation without waste. Logically, it makes sense to include all the server resources (CPU, memory, storage, I/O, network, etc.) in this bundle of virtual devices. Furthermore, since the virtual server can be placed anywhere, anytime within the available physical infrastructure - improving flexibility, utilization, availability and operations - it stands to reason that the resources it draws from should also be mobile.

As with server virtualization, I/O virtualization can be defined as the logical abstraction of server I/O (including network and SAN connections, direct-attached storage, coprocessor offload, video graphics, etc.) into many virtual resources, allowing for the balanced matching of I/O to physical and virtual servers. The technologies developed to make this happen are being integrated into server hardware and software today, with contributions from numerous vendors and standards bodies throughout the ecosystem. For an overview of the underlying technologies and their standards, refer to the PCI-SIG I/O Virtualization (IOV) specifications.

To evaluate I/O virtualization effectively, a number of factors should be taken into consideration, including the choice of I/O and associated drivers, the scalability of the solution, compatibility with existing infrastructure and, most importantly, how the I/O is managed relative to other virtual resources. These considerations are summarized in the diagram shown in Figure 1.

Any Card
I/O virtualization (IOV) systems, by definition, provide virtual instances of I/O (as defined by type, bandwidth, policy and ID or address) to virtual or physical servers. The most common method of assigning I/O to a server, up to this point, has been to place an I/O card in the server, typically via a PCI Express (PCIe) I/O slot. Ideally, an IOV solution should leverage the established ecosystem of cards for servers using the PCIe interface. Anything less severely limits end-user I/O choice and application flexibility, and increases the overall cost of a given application solution.
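
As a purely illustrative sketch (not any vendor's actual data model), a virtual I/O instance can be thought of as a small record combining those four attributes: the I/O type, a bandwidth allocation, a policy, and an identity such as a MAC address or WWN. All names below are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class VirtualIOInstance:
        """Hypothetical record describing one virtual I/O instance
        presented to a physical or virtual server."""
        io_type: str          # e.g., "ethernet", "fibre_channel", "sas"
        bandwidth_gbps: float # share of the physical card's bandwidth
        policy: str           # e.g., QoS class or isolation rule
        identity: str         # e.g., MAC address, WWN, or PCI address
        assigned_to: str      # server or VM that currently owns it

    # Example: two virtual instances drawn from a shared I/O pool.
    vnic = VirtualIOInstance("ethernet", 4.0, "gold", "52:54:00:12:34:56", "vm-web-01")
    vhba = VirtualIOInstance("fibre_channel", 4.0, "silver", "50:06:01:60:3b:20:19:a1", "vm-db-02")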

The goal of IOV, whether it is a single card in a single server or a group of cards addressing many servers, is to provide a pool of resources from which a dynamic, flexible infrastructure can draw at any time. Diverse applications, differing performance requirements, the use of specialized systems, and even the maintenance of legacy infrastructure dictate a wide range of I/O, certainly more than simple Ethernet and Fibre Channel. An IOV system should provide the means to present any I/O type to any server, physical or virtual.

Data center architects and managers also understand the benefit of careful qualification of solution components (cards, drivers, cables, switches, management software) and the value that choice provides. Proprietary IOV systems and fixed-port switches remove choice and ask IT professionals to change the way they select and qualify a solution, particularly when it comes to I/O. Open, standards-based IOV systems provide the widest choice of I/O and maintain the flexibility that has allowed data center decision makers to control quality, maintain performance, and reduce operational cost.

Native Drivers
Hand in hand with maintaining choice of I/O card, an IOV system should maintain and leverage the investment in host drivers across all operating systems, hypervisors, and cards. This is perhaps the most critical and least appreciated aspect of system performance, reliability, and ease of use in the modern data center. While it's unlikely that all drivers work in all system configurations (with or without IOV), it is reasonable to expect that drivers delivered by the card vendors are capable of working within an IOV configuration and have the support of the I/O vendor in that configuration.
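
One practical way to confirm that native drivers are in play, assuming a Linux host where the virtualized I/O appears as standard PCI functions, is to read which kernel driver sysfs reports as bound to each device; the same vendor driver should show up whether the card sits in the server or behind an IOV system. A minimal sketch (the PCI addresses are placeholders):

    import os

    def bound_driver(pci_address):
        """Return the kernel driver bound to a PCI function, or None.
        Uses the standard Linux sysfs layout."""
        link = f"/sys/bus/pci/devices/{pci_address}/driver"
        return os.path.basename(os.readlink(link)) if os.path.islink(link) else None

    # Example: check that the expected vendor driver (e.g., ixgbe, qla2xxx)
    # is bound to each function presented to this host.
    for addr in ("0000:03:00.0", "0000:03:00.1"):
        print(addr, "->", bound_driver(addr))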

Modification of existing drivers for use in IOV implementations is expected and even preferable to optimize performance, usage models, and reliability, but this should be done with the support of the I/O vendor, preferably by the I/O vendor. This generally means that the I/O being virtualized should look, act, and respond the same way, regardless of whether it is placed in an IOV system or within the server. Again, look for standards-based approaches that leverage existing vendor investment in drivers.

Many Servers
An IOV system and the pool of I/O that it presents should scale to address one or many servers. Single-root I/O virtualization (SR-IOV) was developed to address the virtualization of an I/O card in a single server. This has been a tremendous success and is being adopted throughout the chipset, OS, card and driver ecosystems. However, it was recognized early on that the value of IOV increases dramatically when expanded from a single server to a group of servers. A standard known as multi-root I/O virtualization (MR-IOV) was proposed to address this opportunity. Unfortunately, for a variety of reasons, not the least of which are limitations on how an MR-IOV system interconnects multiple servers and scales beyond a small cluster, MR-IOV was not adopted and is unlikely to become the preferred solution.
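
To make SR-IOV concrete: on a Linux host with an SR-IOV-capable adapter, virtual functions (VFs) are enabled on the card's physical function (PF) through sysfs, and each VF then appears as its own PCI device that can be handed to a VM. A minimal sketch, assuming root privileges and a supporting card (the PCI address is a placeholder):

    # Enable SR-IOV virtual functions on a physical function via Linux sysfs.
    PF = "/sys/bus/pci/devices/0000:03:00.0"

    with open(f"{PF}/sriov_totalvfs") as f:
        total = int(f.read())      # how many VFs the card supports

    requested = min(8, total)
    # Note: if VFs are already enabled, write 0 first before changing the count.
    with open(f"{PF}/sriov_numvfs", "w") as f:
        f.write(str(requested))    # instantiate the VFs

    print(f"Enabled {requested} of {total} possible VFs on {PF}")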

Application requirements, space, power and budget all dictate server selection, which in turn determines rack and aisle configuration. The choices are virtually unlimited, and so are the configurations found in the modern data center. There is no typical configuration, nor is there a formula or form factor that satisfies all customers and applications. Whether it is a high-density blade server solution or a flexible, expandable rack-mount server solution, the IOV solution must flex and scale with the customer's needs. It's the same with I/O type. In certain circumstances, it is the raw performance of the I/O that matters. Here, the cost and management gains of higher utilization rates under IOV must be balanced against the performance requirements of the application. In these cases, the I/O should be closely coupled to the servers and closely managed to the specific bandwidth, latency and policy requirements of the application. The I/O resource pool must be easily expandable without affecting server or I/O operation, and the server must have direct access to the I/O pool to maximize performance.

In other cases, the I/O is not a performance bottleneck and higher utilization rates can be achieved. Perhaps the I/O is time-shared by a large number of servers, each for a limited amount of time, as is often the case with backup operations. Or perhaps the I/O is required by only a small number of VMs at a time, but those VMs can move and reside on any number of physical servers. In this case, it may be more efficient to share the I/O resource pool across a large number of servers using an appropriate level of subscription.
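
The "appropriate level of subscription" is ultimately simple arithmetic: compare the aggregate peak demand of the servers or VMs sharing the pool against the bandwidth the pool actually provides. A small illustrative calculation (the numbers are hypothetical):

    def subscription_ratio(per_server_peak_gbps, server_count, pool_gbps):
        """Ratio of aggregate peak demand to pooled I/O bandwidth.
        A value above 1.0 means the pool is oversubscribed."""
        return (per_server_peak_gbps * server_count) / pool_gbps

    # Example: 40 servers that each peak at 2 Gb/s sharing a 40 Gb/s pool
    # are oversubscribed 2:1 -- often acceptable for bursty workloads such
    # as backup, but not for closely coupled, latency-sensitive I/O.
    print(subscription_ratio(2.0, 40, 40.0))   # -> 2.0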

In either circumstance, the IOV system should scale from one to dozens or even hundreds of servers.

Existing Networks
I/O virtualization systems are not intended to replace the function of network access switches, nor should they mandate a change in how the network or SAN is configured and operated. Rather, IOV systems should be viewed as an extension of the server system, complementing rather than replacing existing network infrastructure. As mentioned previously, there are two basic models of IOV (direct server-attachment and network-attachment) that determine where in the data center architecture the IOV system resides. Ideally, a given IOV system should accommodate both attachment models.

As the name suggests, server-attached IOV is a direct extension of the server system, providing an "on-ramp" to existing network and storage infrastructure. In this case, the IOV system provides the very same I/O that would be placed directly in the server (Fibre Channel HBAs, iSCSI initiators, network interfaces, etc.), and thus connects and interoperates with network and storage infrastructure the same way as before. Here, the benefit IOV provides is to increase I/O utilization rates, reducing the number of physical I/O ports needed and therefore the number of physical switch ports needed. While this may allow the use of more cost-effective network and SAN gear from a chosen networking vendor, it's just as likely that IOV will be used to extend the utilization and life of existing infrastructure.
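
The savings from server-attached IOV are easy to quantify: fewer dedicated HBA and NIC ports per server translate directly into fewer cards, cables, and switch ports. An illustrative calculation, with purely hypothetical numbers:

    def ports_needed(servers, ports_per_server, sharing_ratio):
        """Switch ports required with dedicated I/O vs. with IOV sharing."""
        dedicated = servers * ports_per_server
        shared = -(-dedicated // sharing_ratio)   # ceiling division
        return dedicated, shared

    # Example: 20 servers with 2 FC ports each, consolidated 4:1 behind IOV.
    dedicated, shared = ports_needed(20, 2, 4)
    print(f"{dedicated} switch ports without IOV, {shared} with IOV")   # 40 vs 10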

IOV systems can also be provisioned as network-attached resources, much like storage or a security appliance can be a network resource. Connecting I/O to servers via a network may seem backwards, but once you consider that I/O is a virtual resource for virtual machines, it stands to reason that I/O can be treated as a pool of resources accessible to VMs via the Ethernet connections available to all VMs. Another way to think about this is to consider the emerging Fibre Channel over Ethernet (FCoE) standards utilizing Ethernet as the common transport. Here, both FC and Ethernet traffic are carried over a common Ethernet network to an FCoE switch, which then provides both Ethernet and FC ports from the network. In much the same way, other I/O resources (SAS, offload cards, co-processors, flash, etc.) can also be provided to groups of servers via a network. Ideally, an IOV solution should provide generalized I/O access over a common transport network (e.g., Ethernet) using standards-based protocols (e.g., PCI Express), working with whatever network and storage infrastructure is already used or preferred.

Management Integration
Traditionally, I/O has been tightly coupled to a server and thus tightly managed as part of the server system. Mostly, this meant little more than inserting a card, installing a driver, and monitoring status and performance via SNMP or similar mechanisms. Virtualization has fundamentally changed how I/O is provisioned, monitored, and managed. No longer is I/O simply a card that is physically installed, treated as a static block of bandwidth that may or may not be utilized, and left running until failure or obsolescence. The load generated by many virtual servers is highly variable and dynamic. A single physical network connection can now be shared by multiple VMs, all of which are themselves mobile and dynamic, and all carrying different and varying traffic types and loads. Therefore, I/O must be:

  • Provisioned on-demand
  • Monitored for status and performance
  • Actively managed to an ever-changing set of rules

The new mantra for the data center manager must become MAKE, MONITOR, MANAGE, and REPEAT.
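This mantra maps naturally onto a simple control loop. The sketch below is purely illustrative; the provisioning and telemetry calls are placeholders standing in for whatever interfaces a given IOV system actually exposes:

    import time

    def make(vm, spec):
        """Provision a virtual I/O instance for a VM (placeholder)."""
        print(f"provisioned {spec} for {vm}")

    def monitor(vm):
        """Return current I/O status and utilization for a VM (placeholder data)."""
        return {"utilization": 0.5, "status": "ok"}

    def manage(vm, stats, spec):
        """Adjust bandwidth, policy, or placement when stats drift (placeholder)."""
        if stats["utilization"] > 0.9:
            print(f"rebalancing I/O for {vm}")

    def iov_control_loop(policy, interval_s=30, cycles=1):
        # MAKE: provision on demand, then MONITOR and MANAGE, and REPEAT.
        for vm, spec in policy.items():
            make(vm, spec)
        for _ in range(cycles):
            for vm, spec in policy.items():
                manage(vm, monitor(vm), spec)
            time.sleep(interval_s)

    iov_control_loop({"vm-web-01": "4 Gb/s Ethernet, gold policy"}, interval_s=1)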

Of course, the means by which the I/O is managed should be consistent with the practices and processes already established for virtualization management. Any IOV system should provide a plug-in interface to the most common virtualization management tools. For the wide variety of other management platforms and practices, an API and scriptable CLI must be made available.
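
Where a vendor plug-in is not available, the same operations should be reachable through an API or scriptable CLI. The endpoint and payload below are entirely hypothetical and imply no real product's interface; they only illustrate the kind of call a provisioning script might make:

    import json
    from urllib import request

    # Hypothetical REST endpoint; no actual IOV product API is implied.
    API = "https://iov-manager.example.com/api/v1/virtual-io"

    payload = {
        "server": "esx-host-07",
        "io_type": "fibre_channel",
        "bandwidth_gbps": 4,
        "policy": "gold",
    }

    req = request.Request(API, data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    # print(request.urlopen(req).status)   # left commented: the endpoint is fictitious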

Delivering on the Promise of Virtualization
Data center managers require technologies that dynamically provision larger, more varied workloads using many VMs spread across larger pools of physical infrastructure. In the near future, these professionals must consider solutions that automate and manage the balance and collection of CPU, memory, and I/O resource pools for allocation to VMs and applications. Ideally, these solutions should fit within the existing infrastructure, leverage the existing ecosystem of hardware and software, provide scale and a choice between performance and efficiency or utilization, and dynamically support VMs, regardless of size or location. I/O virtualization systems, done right, will provide companies with cost-efficient, flexible infrastructure, fully delivering on the benefits that virtualization promises.

More Stories By Craig Thompson

Craig Thompson is Vice President, Product Marketing at Aprius, where he brings diverse experience in corporate management, product marketing and engineering management. He has held various senior marketing roles in data communications, telecommunications and broadcast video, most recently with Gennum Corporation, based in Toronto, Canada. Prior to that he was Director of Marketing for Intel's Optical Platform Division where he was responsible for the successful ramp of 10Gb/s MSAs into the telecommunications market. Craig holds a Bachelor of Engineering with Honors from the University of New South Wales in Sydney, Australia, and an MBA from the Massachusetts Institute of Technology.
