
Containers Expo Blog: Article

Top Five Considerations When Evaluating I/O Virtualization

Delivering on the promise of virtualization

Most commonly, the term "virtualization" refers to the creation of virtual instances of an operating system or virtual machines (VMs) on physical server hardware, known as server virtualization. However, the development of server virtualization has led to the creation of other virtual resource types, including storage, network and I/O resources. Whether a server is physical or virtual, it should be "balanced" with the right amount of resources to achieve its intended operation without waste. Logically, it makes sense to include all the server resources (CPU, memory, storage, I/O, network, etc.) in this bundle of virtual devices. Furthermore, since the virtual server can be placed anywhere, anytime within the available physical infrastructure - improving flexibility, utilization, availability and operations - it stands to reason that the resources it draws from should also be mobile.

As with server virtualization, I/O virtualization can be defined as the logical abstraction of server I/O (including network and SAN connections, direct-attached storage, coprocessor offload, video graphics, etc.) into many virtual resources, allowing for the balanced matching of I/O to physical and virtual servers. The technologies developed to make this happen are being integrated into server hardware and software today, with contributions from numerous vendors and standards bodies throughout the ecosystem. For an overview of the underlying technologies and their standards, refer to the PCI-SIG I/O virtualization (IOV) specifications.

For an effective evaluation of I/O virtualization, a number of factors should be taken into consideration, including the choice of I/O and associated drivers, the scalability of the solution, compatibility with existing infrastructure and, most importantly, how the I/O is managed relative to other virtual resources. This can be summarized by the diagram shown in Figure 1.

Any Card
I/O virtualization (IOV) systems, by definition, provide virtual instances of I/O (as defined by type, bandwidth, policy and ID or address) to virtual or physical servers. The most common method of assigning I/O to a server, up to this point, has been to place an I/O card in the server, typically via a PCI Express (PCIe) I/O slot. Ideally an IOV solution should leverage the established ecosystem of cards for servers using the PCIe interface. Anything less severely limits end-user I/O choice, limits application flexibility, and increases the overall cost of a given application solution.

The goal of IOV, whether it is a single card in a single server or a group of cards addressing many servers, is to provide a pool of resources from which a dynamic, flexible infrastructure can draw upon at any time. Diverse applications, differing performance requirements, the use of specialized systems, and even the maintenance of legacy infrastructure dictate a wide range of I/O, certainly more than simple Ethernet and Fibre Channel. An IOV system should provide the means to present any I/O type to any server, physical or virtual.
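The pool model described above can be sketched in a few lines of code. This is a hypothetical illustration, not any vendor's API: the `IOVPool` class, its method names, and the I/O type strings are all invented for the example. It shows the core idea that one physical card is carved into several virtual instances (each defined by type, bandwidth, and ID) that any server can then draw from.

```python
from dataclasses import dataclass

@dataclass
class VirtualIO:
    """One virtual instance of I/O, defined by type, bandwidth and ID."""
    io_type: str            # e.g. "10GbE", "8G-FC", "SAS"
    bandwidth_gbps: float
    vio_id: int = 0

class IOVPool:
    """Hypothetical pool of virtual I/O carved from physical PCIe cards."""
    def __init__(self):
        self._free = []
        self._assigned = {}   # server name -> list of VirtualIO
        self._next_id = 0

    def add_card(self, io_type, bandwidth_gbps, instances):
        """Carve one physical card into several virtual instances."""
        for _ in range(instances):
            self._next_id += 1
            self._free.append(VirtualIO(io_type, bandwidth_gbps / instances,
                                        vio_id=self._next_id))

    def assign(self, server, io_type):
        """Hand any matching I/O type to any server, physical or virtual."""
        for vio in self._free:
            if vio.io_type == io_type:
                self._free.remove(vio)
                self._assigned.setdefault(server, []).append(vio)
                return vio
        raise LookupError(f"no free {io_type} instance in pool")

pool = IOVPool()
pool.add_card("10GbE", 10.0, instances=4)   # one NIC shared four ways
pool.add_card("8G-FC", 8.0, instances=2)    # one HBA shared two ways
nic = pool.assign("vm-web-01", "10GbE")
hba = pool.assign("db-host", "8G-FC")
print(nic.io_type, nic.bandwidth_gbps)   # → 10GbE 2.5
```

The point of the sketch is that the pool is typed: diverse cards (Ethernet, Fibre Channel, SAS, offload) coexist, and servers request by I/O type rather than by physical slot.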

Data center architects and managers also understand the benefit of careful qualification of solution components (cards, drivers, cables, switches, management software) and the value that choice provides. Proprietary IOV systems and fixed-port switches remove choice and ask IT professionals to change the way they select and qualify a solution, particularly when it comes to I/O. Open, standards-based IOV systems provide the widest choice of I/O and maintain the flexibility that has allowed data center decision makers to control quality, maintain performance, and reduce operational cost.

Native Drivers
Hand in hand with maintaining choice of I/O card, an IOV system should maintain and leverage the investment in host drivers across all operating systems, hypervisors, and cards. This is perhaps the most critical and least appreciated aspect of system performance, reliability, and ease of use in the modern data center. While it's unlikely that all drivers work in all system configurations (with or without IOV), it is reasonable to expect that drivers delivered by the card vendors are capable of working within an IOV configuration and have the support of the I/O vendor in that configuration.

Modification of existing drivers for use in IOV implementations is expected and even preferable to optimize performance, usage models, and reliability, but this should be done with the support of the I/O vendor, preferably by the I/O vendor. This generally means that the I/O being virtualized should look, act, and respond the same way, regardless of whether it is placed in an IOV system or within the server. Again, look for standards-based approaches that leverage existing vendor investment in drivers.

Many Servers
An IOV system and the pool of I/O that it presents should scale and address one or many servers. Single-root I/O virtualization (SR-IOV) was developed to address the virtualization of an I/O card in a single server. This has been a tremendous success and is being adopted throughout the chip-set, OS, card and driver ecosystems. However, it was recognized early on that the value of IOV increases dramatically when expanded from a single server to a group of servers. A standard known as multi-root I/O virtualization (MR-IOV) was proposed to address this opportunity. Unfortunately, for a variety of reasons, not the least of which are limitations on how an MR-IOV system interconnects multiple servers and scales beyond a small cluster, MR-IOV was not adopted and is unlikely to become the preferred solution.
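The SR-IOV model can be pictured as one physical function (PF) on a card exposing a configurable number of virtual functions (VFs). The sketch below is a toy model in plain Python, not real device code; the class and attribute names are invented, though the `total_vfs`/`enable_vfs` pattern deliberately mirrors the `sriov_totalvfs` and `sriov_numvfs` attributes Linux exposes through sysfs for SR-IOV capable devices.

```python
class PhysicalFunction:
    """Hypothetical model of an SR-IOV capable card: one physical
    function (PF) exposing up to total_vfs virtual functions (VFs)."""
    def __init__(self, name, total_vfs):
        self.name = name
        self.total_vfs = total_vfs   # analogous to sysfs sriov_totalvfs
        self.vfs = []

    def enable_vfs(self, num):
        """Analogous to writing num into sysfs sriov_numvfs on Linux."""
        if num > self.total_vfs:
            raise ValueError(f"{self.name} supports at most {self.total_vfs} VFs")
        self.vfs = [f"{self.name}-vf{i}" for i in range(num)]
        return self.vfs

pf = PhysicalFunction("eth0", total_vfs=8)
vfs = pf.enable_vfs(4)
# Each VF can now be passed through to a different VM on the same host;
# MR-IOV aimed (unsuccessfully) to extend this sharing across hosts.
print(vfs)   # → ['eth0-vf0', 'eth0-vf1', 'eth0-vf2', 'eth0-vf3']
```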

Application requirements, space, power and budget all dictate server selection, which in turn determines rack and aisle configuration. The choices are nearly unlimited, and so are the configurations found in the modern data center. There is no typical configuration, nor is there a formula or form factor that satisfies all customers and applications. Whether it is a high-density blade server solution or a flexible, expandable rack-mount server solution, the IOV solution must flex and scale with the customer's needs. It's the same with I/O type. In certain circumstances, it is the raw performance of the I/O that matters. Here the cost and management gains of higher utilization rates under IOV must be balanced against the performance requirements of the application. In these cases, the I/O should be closely coupled to the servers and closely managed to the specific bandwidth, latency and policy requirements of the application. The I/O resource pool must be easily expandable without affecting server or I/O operation, and the server must have direct access to the I/O pool to maximize performance.

In other cases, the I/O is not a performance bottleneck and higher utilization rates can be achieved. Perhaps the I/O is time-shared by a large number of servers, each for a limited amount of time, as is often the case with backup operations. Or perhaps the I/O is required by only a small number of VMs at a time, but those VMs can move and reside on any number of physical servers. In this case, it may be more efficient to share the I/O resource pool across a large number of servers using an appropriate level of subscription.
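The "appropriate level of subscription" mentioned above is usually expressed as a ratio of provisioned bandwidth to physical capacity. A minimal sketch, with invented function and parameter names, of how an architect might size that ratio:

```python
def subscription_ratio(server_demands_gbps, pool_capacity_gbps):
    """Ratio of total provisioned bandwidth to physical pool capacity.
    Values above 1.0 mean the pool is oversubscribed, which is acceptable
    when servers use the I/O intermittently (e.g. staggered backups)."""
    return sum(server_demands_gbps) / pool_capacity_gbps

# 12 servers each provisioned 10 Gb/s from a pool of four 10GbE links:
ratio = subscription_ratio([10.0] * 12, pool_capacity_gbps=40.0)
print(ratio)   # → 3.0, i.e. 3:1 oversubscription
```

Whether 3:1 is safe depends entirely on how correlated the servers' peak loads are, which is why the article distinguishes time-shared workloads like backup from latency-sensitive ones.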

In either circumstance, the IOV system should scale from one to dozens or even hundreds of servers.

Existing Networks
I/O virtualization systems are not intended to replace the function of network access switches, nor should they mandate a change in how the network or SAN is configured and operated. Rather, IOV systems should be viewed as an extension of the server system, complementing rather than replacing existing network infrastructure. As mentioned previously, there are two basic models of IOV (direct server-attachment and network-attachment) that determine where in the data center architecture the IOV system resides. Ideally, a given IOV system should accommodate both attachment models.

As the name suggests, server-attached IOV is a direct extension of the server system providing an "on-ramp" to existing network and storage infrastructure. In this case, the IOV system provides the very same I/O that would be placed directly in the server (Fibre Channel HBAs, iSCSI initiators, network interfaces, etc.), and thus connects and interoperates with network and storage infrastructure the same way as before. In this case, the benefit that IOV provides is to increase I/O utilization rates, reducing the number of physical I/O ports needed and therefore reducing the number of physical switch ports needed. While this may allow the use of more cost-effective network and SAN gear from a chosen networking vendor, it's just as likely that IOV is used to extend the utilization and life of existing infrastructure.
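The port-reduction benefit described above is easy to quantify: instead of one dedicated (and mostly idle) port per server, the shared pool is sized to the aggregate average load. This back-of-the-envelope sketch uses invented names and assumed utilization figures purely for illustration:

```python
import math

def ports_needed(num_servers, avg_util, port_gbps=10.0, provisioned_gbps=10.0):
    """Physical switch ports needed when I/O is pooled: size the pool to
    the aggregate *average* load instead of one port per server."""
    aggregate = num_servers * provisioned_gbps * avg_util
    return max(1, math.ceil(aggregate / port_gbps))

dedicated = 24                             # one port per server, mostly idle
pooled = ports_needed(24, avg_util=0.15)   # 24 * 10 * 0.15 = 36 Gb/s aggregate
print(dedicated, pooled)   # → 24 4
```

In practice a designer would add headroom for bursts rather than sizing exactly to the average, but the direction of the saving is the point.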

IOV systems can also be provisioned as network-attached resources, much like storage or a security appliance can be a network resource. Connecting I/O to servers via a network may seem backwards, but once you consider that I/O is a virtual resource for virtual machines, it stands to reason that I/O can be treated as a pool of resources accessible to VMs via the Ethernet connections available to all VMs. Another way to think about this is to consider the emerging Fibre Channel over Ethernet (FCoE) standards utilizing Ethernet as the common transport. Here, both FC and Ethernet traffic are carried over the common Ethernet network to an FCoE switch that then provides both Ethernet and FC ports from the network. In much the same way, other I/O resources (SAS, offload cards, co-processors, flash, etc.) can also be provided to groups of servers via a network. Ideally an IOV solution should provide generalized I/O access over a common transport network (e.g., Ethernet) using standards-based protocols (e.g., PCI Express) and whatever network and storage infrastructure is used or preferred.
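The FCoE analogy comes down to encapsulation: a Fibre Channel frame rides inside an ordinary Ethernet frame, tagged with the EtherType assigned to FCoE (0x8906). The sketch below shows only that basic wrapping step; a real FCoE frame also carries an FCoE header, SOF/EOF delimiters and padding per the standard, which are omitted here for brevity, and the function name is invented.

```python
import struct

ETHERTYPE_FCOE = 0x8906   # EtherType assigned to FCoE frames

def encapsulate(dst_mac, src_mac, fc_frame, ethertype=ETHERTYPE_FCOE):
    """Wrap an (opaque) Fibre Channel frame in a 14-byte Ethernet header:
    destination MAC, source MAC, EtherType, then the payload."""
    header = struct.pack("!6s6sH", dst_mac, src_mac, ethertype)
    return header + fc_frame

frame = encapsulate(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", b"FC-PAYLOAD")
print(len(frame), hex((frame[12] << 8) | frame[13]))   # → 24 0x8906
```

The same pattern generalizes: any I/O transaction type can, in principle, be carried over a common Ethernet transport with its own EtherType or encapsulation protocol, which is the premise of network-attached IOV.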

Management Integration
Traditionally, I/O has been tightly coupled to a server and thus tightly managed as part of the server system. Mostly this meant little more than inserting a card, installing a driver, and monitoring status and performance via SNMP or similar mechanisms. Virtualization has fundamentally changed how I/O is provisioned, monitored, and managed. No longer is I/O simply a card that is physically installed and treated as a static block of bandwidth that may or may not be utilized, but left running until failure or obsolescence. The load generated by many virtual servers is now highly variable and dynamic. A single physical network connection can now be shared by multiple VMs, all of which are mobile and dynamic themselves, all carrying different and varying traffic types and loads. Therefore, I/O must be:

  • Provisioned on-demand
  • Monitored for status and performance
  • Actively managed to an ever-changing set of rules

The new mantra for the data center manager must become MAKE, MONITOR, MANAGE, and REPEAT.
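The make/monitor/manage loop above can be sketched in code. Everything here is hypothetical, invented function names standing in for whatever an IOV system's API actually exposes, and the random traffic generator is just a stand-in for real telemetry (SNMP counters or a vendor API):

```python
import random

def make(vio_id, bandwidth):
    """MAKE: provision a virtual I/O instance on demand (hypothetical)."""
    return {"id": vio_id, "limit_gbps": bandwidth, "seen_gbps": 0.0}

def monitor(vio):
    """MONITOR: poll status/performance; random values stand in for
    real telemetry from SNMP or an IOV management API."""
    vio["seen_gbps"] = random.uniform(0, vio["limit_gbps"] * 1.2)
    return vio["seen_gbps"]

def manage(vio, policy_limit):
    """MANAGE: re-apply the current rule set; here a simple rate cap."""
    if vio["seen_gbps"] > policy_limit:
        vio["limit_gbps"] = policy_limit   # throttle back to policy
    return vio

vio = make("vio-7", bandwidth=10.0)
for _ in range(3):                 # MAKE once, then MONITOR/MANAGE/REPEAT
    monitor(vio)
    manage(vio, policy_limit=8.0)
print(vio["limit_gbps"] <= 10.0)   # → True
```

The REPEAT step is the part that distinguishes virtualized I/O management from the old install-and-forget model: the loop runs continuously against rules that themselves keep changing.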

Of course, the means by which the I/O is managed should be consistent with the practices and processes already established for virtualization management. Any IOV system should provide a plug-in interface to the most common virtualization management tools. For the wide variety of other management platforms and practices, an API and scriptable CLI must be made available.

Delivering on the Promise of Virtualization
Data center managers require technologies that dynamically provision larger, more varied workloads using many VMs spread across larger pools of physical infrastructure. In the near future, these professionals must give consideration to solutions that automate and manage the balance and collection of CPU, memory, and I/O resource pools for allocation to VMs and applications. Ideally, these solutions should fit within the existing infrastructure, leverage the existing ecosystem of hardware and software, provide scale and a choice of performance vs. efficiency or utilization, and dynamically support VMs regardless of size or location. I/O virtualization systems, done right, will provide companies with cost-efficient, flexible infrastructure, fully delivering on the benefits that virtualization promises.

More Stories By Craig Thompson

Craig Thompson is Vice President, Product Marketing at Aprius, where he brings diverse experience in corporate management, product marketing and engineering management. He has held various senior marketing roles in data communications, telecommunications and broadcast video, most recently with Gennum Corporation, based in Toronto, Canada. Prior to that he was Director of Marketing for Intel's Optical Platform Division where he was responsible for the successful ramp of 10Gb/s MSAs into the telecommunications market. Craig holds a Bachelor of Engineering with Honors from the University of New South Wales in Sydney, Australia, and an MBA from the Massachusetts Institute of Technology.
