
Containers Expo Blog: Article

Top Five Considerations When Evaluating I/O Virtualization

Delivering on the promise of virtualization

Most commonly, the term "virtualization" refers to the creation of virtual instances of an operating system or virtual machines (VMs) on physical server hardware, known as server virtualization. However, the development of server virtualization has led to the creation of other virtual resource types, including storage, network and I/O resources. Whether a server is physical or virtual, it should be "balanced" with the right amount of resources to achieve its intended operation without waste. Logically, it makes sense to include all the server resources (CPU, memory, storage, I/O, network, etc.) in this bundle of virtual devices. Furthermore, since the virtual server can be placed anywhere, anytime within the available physical infrastructure - improving flexibility, utilization, availability and operations - it stands to reason that the resources it draws from should also be mobile.

As with server virtualization, I/O virtualization can be defined as the logical abstraction of server I/O (including network and SAN connections, direct-attached storage, coprocessor offload, video graphics, etc.) into many virtual resources, allowing for the balanced matching of I/O to physical and virtual servers. The technologies developed to make this happen are being integrated into server hardware and software today, with contributions from numerous vendors and standards bodies throughout the ecosystem. For a deeper explanation of I/O virtualization, refer to the PCI-SIG I/O Virtualization (IOV) specifications, which provide an overview of the underlying technologies and their standards.

When evaluating I/O virtualization, a number of factors should be taken into consideration, including the choice of I/O cards and associated drivers, the scalability of the solution, compatibility with existing infrastructure and, most importantly, how the I/O is managed relative to other virtual resources. This can be summarized by the diagram shown in Figure 1.

Any Card
I/O virtualization (IOV) systems, by definition, provide virtual instances of I/O (as defined by type, bandwidth, policy and ID or address) to virtual or physical servers. The most common method of assigning I/O to a server, up to this point, has been to place an I/O card in the server, typically via a PCI Express (PCIe) I/O slot. Ideally an IOV solution should leverage the established ecosystem of cards for servers using the PCIe interface. Anything less severely limits end-user I/O choice, limits application flexibility, and increases the overall cost of a given application solution.

The goal of IOV, whether it is a single card in a single server or a group of cards addressing many servers, is to provide a pool of resources from which a dynamic, flexible infrastructure can draw upon at any time. Diverse applications, differing performance requirements, the use of specialized systems, and even the maintenance of legacy infrastructure dictate a wide range of I/O, certainly more than simple Ethernet and Fibre Channel. An IOV system should provide the means to present any I/O type to any server, physical or virtual.

Data center architects and managers also understand the benefit of careful qualification of solution components (cards, drivers, cables, switches, management software) and the value that choice provides. Proprietary IOV systems and fixed-port switches remove choice and ask IT professionals to change the way they select and qualify a solution, particularly when it comes to I/O. Open, standards-based IOV systems provide the widest choice of I/O and maintain the flexibility that has allowed data center decision makers to control quality, maintain performance, and reduce operational cost.

Native Drivers
Hand in hand with maintaining choice of I/O card, an IOV system should maintain and leverage the investment in host drivers across all operating systems, hypervisors, and cards. This is perhaps the most critical and least appreciated aspect of system performance, reliability, and ease of use in the modern data center. While it's unlikely that all drivers work in all system configurations (with or without IOV), it is reasonable to expect that drivers delivered by the card vendors are capable of working within an IOV configuration and have the support of the I/O vendor in that configuration.

Modification of existing drivers for use in IOV implementations is expected and even preferable to optimize performance, usage models, and reliability, but this should be done with the support of the I/O vendor, preferably by the I/O vendor. This generally means that the I/O being virtualized should look, act, and respond the same way, regardless of whether it is placed in an IOV system or within the server. Again, look for standards-based approaches that leverage existing vendor investment in drivers.

Many Servers
An IOV system and the pool of I/O that it presents should scale and address one or many servers. Single-root I/O virtualization (SR-IOV) was developed to address the virtualization of an I/O card in a single server. This has been a tremendous success and is being adopted throughout the chip-set, OS, card and driver ecosystems. However, it was recognized early on that the value of IOV increases dramatically when expanded from a single server to a group of servers. A standard known as multi-root I/O virtualization (MR-IOV) was proposed to address this opportunity. Unfortunately, for a variety of reasons, not the least of which are limitations on how an MR-IOV system interconnects multiple servers and scales beyond a small cluster, MR-IOV was not adopted and is unlikely to become the preferred solution.
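On Linux, SR-IOV virtual functions are typically enabled through the `sriov_numvfs` attribute that the kernel exposes in sysfs. A minimal helper might look like the following; the device path in the usage note is illustrative and will vary by system:

```python
from pathlib import Path

def set_vf_count(device_dir: str, num_vfs: int) -> int:
    """Enable SR-IOV virtual functions by writing to the device's
    sriov_numvfs attribute (the standard Linux sysfs interface).

    Returns the number of VFs actually configured, capped at the
    limit the device advertises in sriov_totalvfs."""
    dev = Path(device_dir)
    total = int((dev / "sriov_totalvfs").read_text())
    requested = min(num_vfs, total)
    # The kernel requires resetting to 0 before changing a nonzero count.
    (dev / "sriov_numvfs").write_text("0")
    (dev / "sriov_numvfs").write_text(str(requested))
    return requested
```

For example, `set_vf_count("/sys/class/net/eth0/device", 4)` would ask the kernel to carve four virtual functions out of a physical function that supports SR-IOV.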

Application requirements, space, power and budget all dictate server selection, which in turn determines rack and aisle configuration. The choices are unlimited, as is the variety found in the modern data center: there is no typical configuration, nor a formula or form factor that satisfies all customers and applications. Whether it is a high-density blade server solution or a flexible, expandable rack-mount server solution, the IOV solution must flex and scale with the customer's needs. The same is true of I/O type. In certain circumstances, it is the raw performance of the I/O that matters; here the cost and management gains of higher utilization rates under IOV must be balanced against the performance requirements of the application. In these cases, the I/O should be closely coupled to the servers and closely managed to the specific bandwidth, latency and policy requirements of the application. The I/O resource pool must be easily expandable without affecting server or I/O operation, and the server must have direct access to the I/O pool to maximize performance.

In other cases, the I/O is not a performance bottleneck and higher utilization rates can be achieved. Perhaps the I/O is time-shared by a large number of servers, each for a limited amount of time, as is often the case with backup operations. Or perhaps the I/O is required by only a small number of VMs at a time, but those VMs can move and reside on any number of physical servers. In this case, it may be more efficient to share the I/O resource pool across a large number of servers using an appropriate level of subscription.
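The appropriate level of subscription is easy to quantify: it is simply the aggregate bandwidth provisioned to servers divided by the physical capacity of the shared I/O pool. A small illustrative calculation:

```python
def subscription_ratio(servers: int, bandwidth_per_server_gbps: float,
                       physical_ports: int, port_speed_gbps: float) -> float:
    """Ratio of aggregate provisioned I/O bandwidth to physical capacity.
    A ratio above 1.0 means the shared I/O pool is oversubscribed."""
    demand = servers * bandwidth_per_server_gbps
    capacity = physical_ports * port_speed_gbps
    return demand / capacity

# Forty servers each provisioned 2 Gb/s, sharing eight 10 Gb/s ports:
# 80 / 80 = 1.0, fully subscribed. Adding servers pushes the ratio
# above 1.0, which is acceptable for bursty loads such as backup.
```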

In either circumstance, the IOV system should scale from one to dozens or even hundreds of servers.

Existing Networks
I/O virtualization systems are not intended to replace the function of network access switches, nor should they mandate a change in how the network or SAN is configured and operated. Rather, IOV systems should be viewed as an extension of the server system, complementing rather than replacing existing network infrastructure. As mentioned previously, there are two basic models of IOV (direct server-attachment and network-attachment) that determine where in the data center architecture the IOV system resides. Ideally, a given IOV system should accommodate both attachment models.

As the name suggests, server-attached IOV is a direct extension of the server system, providing an "on-ramp" to existing network and storage infrastructure. In this case, the IOV system provides the very same I/O that would be placed directly in the server (Fibre Channel HBAs, iSCSI initiators, network interfaces, etc.), and thus connects and interoperates with network and storage infrastructure the same way as before. The benefit IOV provides here is increased I/O utilization rates, reducing the number of physical I/O ports needed and therefore the number of physical switch ports needed. While this may allow the use of more cost-effective network and SAN gear from a chosen networking vendor, it's just as likely that IOV is used to extend the utilization and life of existing infrastructure.

IOV systems can also be provisioned as network-attached resources, much like storage or a security appliance can be a network resource. Connecting I/O to servers via a network may seem backwards, but once you consider that I/O is a virtual resource for virtual machines, it stands to reason that I/O can be treated as a pool of resources accessible to VMs via the Ethernet connections available to all VMs. Another way to think about this is to consider the emerging Fibre Channel over Ethernet (FCoE) standards utilizing Ethernet as the common transport. Here, both FC and Ethernet traffic are carried over the common Ethernet network to an FCoE switch that then provides both Ethernet and FC ports from the network. In much the same way, other I/O resources (SAS, offload cards, co-processors, flash, etc.) can also be provided to groups of servers via a network. Ideally, an IOV solution should provide generalized I/O access over a common transport network (e.g., Ethernet) using standards-based protocols (e.g., PCI Express) and whatever network and storage infrastructure is used or preferred.
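In the FCoE model, the two traffic classes share one wire and are distinguished by the Ethernet frame's EtherType field (0x8906 identifies FCoE). A sketch of how a converged switch or endpoint might demultiplex frames, assuming untagged traffic:

```python
# EtherType 0x8906 identifies FCoE on a converged Ethernet network.
FCOE_ETHERTYPE = 0x8906

def classify_frame(frame: bytes) -> str:
    """Classify an Ethernet frame by the EtherType at bytes 12-13.
    Untagged frames are assumed; a VLAN tag would shift this offset
    by four bytes."""
    if len(frame) < 14:
        return "runt"          # shorter than a minimal Ethernet header
    ethertype = int.from_bytes(frame[12:14], "big")
    return "fcoe" if ethertype == FCOE_ETHERTYPE else "other"
```

A frame classified as "fcoe" would be handed to the FC forwarding logic; everything else follows the ordinary Ethernet path.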

Management Integration
Traditionally, I/O has been tightly coupled to a server and thus tightly managed as part of the server system. Mostly this meant little more than inserting a card, installing a driver, and monitoring status and performance via SNMP or similar mechanisms. Virtualization has fundamentally changed how I/O is provisioned, monitored, and managed. I/O is no longer simply a card that is physically installed, treated as a static block of bandwidth that may or may not be utilized, and left running until failure or obsolescence. The load generated by many virtual servers is now highly variable and dynamic. A single physical network connection can now be shared by multiple VMs, all of which are mobile and dynamic themselves, all carrying different and varying traffic types and loads. Therefore, I/O must be:

  • Provisioned on-demand
  • Monitored for status and performance
  • Actively managed to an ever-changing set of rules

The new mantra for the data center manager must become MAKE, MONITOR, MANAGE, and REPEAT.
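That make-monitor-manage cycle can be sketched in a few lines; the `IOVPool` class and its method names are hypothetical, standing in for whatever API a given IOV system actually exposes:

```python
class IOVPool:
    """Hypothetical IOV management API: tracks virtual NICs carved
    from a shared pool of physical bandwidth."""

    def __init__(self, capacity_gbps: float):
        self.capacity_gbps = capacity_gbps
        self.vnics = {}  # vNIC name -> provisioned Gb/s

    def make(self, name: str, gbps: float) -> bool:
        # Provision on demand, refusing requests beyond pool capacity.
        if sum(self.vnics.values()) + gbps > self.capacity_gbps:
            return False
        self.vnics[name] = gbps
        return True

    def monitor(self) -> dict:
        # Report status: provisioned versus available bandwidth.
        used = sum(self.vnics.values())
        return {"used_gbps": used, "free_gbps": self.capacity_gbps - used}

    def manage(self, name: str, gbps: float) -> None:
        # Rebalance an existing vNIC to match a changed policy.
        if name in self.vnics:
            self.vnics[name] = gbps
```

In a real system, `monitor` would feed telemetry into the management plane and `manage` would be driven by policy, closing the loop: make, monitor, manage, and repeat.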

Of course, the means by which the I/O is managed should be consistent with the practices and processes already established for virtualization management. Any IOV system should provide a plug-in interface to the most common virtualization management tools. For the wide variety of other management platforms and practices, an API and scriptable CLI must be made available.
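A scriptable CLI in that spirit could be as thin as an argument-parser wrapper around the management API. The command and flag names below are illustrative, not any vendor's actual tool:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Illustrative CLI for an IOV system: provision, list, and
    delete virtual I/O devices from scripts and automation tools."""
    parser = argparse.ArgumentParser(prog="iovctl")
    sub = parser.add_subparsers(dest="command", required=True)

    make = sub.add_parser("make", help="provision a virtual I/O device")
    make.add_argument("name")
    make.add_argument("--type", choices=["nic", "hba"], default="nic")
    make.add_argument("--gbps", type=float, default=1.0)

    sub.add_parser("list", help="show provisioned devices")

    rm = sub.add_parser("delete", help="release a virtual device")
    rm.add_argument("name")
    return parser
```

Because every operation is expressible as a command line, the same interface serves interactive administrators and the orchestration scripts that automate them.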

Delivering on the Promise of Virtualization
Data center managers require technologies that dynamically provision larger, more varied workloads using many VMs spread across larger pools of physical infrastructure. In the near future, these professionals must consider solutions that automate and manage the balance and collection of CPU, memory, and I/O resource pools for allocation to VMs and applications. Ideally, these solutions should fit within the existing infrastructure, leverage the existing ecosystem of hardware and software, provide scale and a choice of performance vs. efficiency or utilization, and dynamically support VMs, regardless of size or location. I/O virtualization systems, done right, will provide companies with cost-efficient, flexible infrastructure, fully delivering on the benefits that virtualization promises.

More Stories By Craig Thompson

Craig Thompson is Vice President, Product Marketing at Aprius, where he brings diverse experience in corporate management, product marketing and engineering management. He has held various senior marketing roles in data communications, telecommunications and broadcast video, most recently with Gennum Corporation, based in Toronto, Canada. Prior to that he was Director of Marketing for Intel's Optical Platform Division where he was responsible for the successful ramp of 10Gb/s MSAs into the telecommunications market. Craig holds a Bachelor of Engineering with Honors from the University of New South Wales in Sydney, Australia, and an MBA from the Massachusetts Institute of Technology.
