Containers Expo Blog: Article

Can You Trust VDI Storage Benchmarks?

The truth behind VDI benchmarks

by George Crump, Storage Switzerland

VDI (Virtual Desktop Infrastructure) implementation projects are going to be priorities for many IT managers in 2013, and a key concern will be end-user acceptance. If users don't embrace their virtual desktops, they won't use them, and the project is doomed to failure. The key to acceptance is to provide users with an environment that feels the same as, performs better than, and is more reliable than their current stand-alone system. The storage system bears most of the responsibility for delivering that experience.

IT managers who want to capitalize on the opportunity that the virtual desktop environment offers should focus on two key capabilities when they evaluate storage system vendors. The first is being able to deliver the raw performance that the virtual desktop architecture needs; the second is doing so in the most cost-effective way possible. These two capabilities are traditionally at odds with each other and not always well reflected in benchmark testing.

For most organizations the number-one priority for gaining user acceptance is to keep the virtual desktop experience as similar to the physical desktop as possible. Typically, this will mean using persistent desktops, a VDI implementation in which each user's desktop is a stand-alone element in the virtual environment for which they can customize settings and add their own applications just like they could on their physical desktop.

The problem with persistent desktops is that a unique image is created for each desktop or user, which can add up to thousands of images for larger VDI populations. Obviously, allocating storage for thousands of virtual desktops is a high price to pay for maintaining a positive user experience.

In an effort to reduce the amount of storage required for all of these images, virtualized environments have incorporated features such as thin provisioning and linked clones. The goal is to have the storage system deliver a VDI environment that's built from just a few thinly provisioned "golden" VDI images, which are then cloned for each user.

As users customize their clones, only the differences between the golden image and the users' VDIs need to be stored. The result is a significant reduction in the total amount of storage required, lowering its overall cost. Also, the small number of golden images allows for much of the VDI read traffic to be served from a flash-based tier or cache.

When a write occurs to a thinly provisioned, cloned virtual desktop, more has to happen than just the operation that writes the data object. The volume needs additional space allocated to it (one write operation), the metadata table that tracks unique branches of the cloned volume has to be updated (another write operation), and, depending on the RAID protection in place, some form of parity data must be written. Only then is the data object itself written. This entire process repeats with every data change, no matter how small.
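The write amplification described above can be sketched as a simple accounting exercise. This is an illustrative model only; the operation names and the assumption of one physical write per step are not tied to any real storage system's internals.

```python
# Illustrative sketch: count the physical write operations triggered by one
# logical write to a thinly provisioned, cloned virtual desktop volume.
# Step names and counts are assumptions for explanation, not a real storage API.

def physical_writes_for_logical_write(needs_allocation=True, raid_parity=True):
    """Return the list of physical operations behind one logical write."""
    ops = []
    if needs_allocation:
        # Thin provisioning: the volume must grow before the write can land.
        ops.append("allocate new extent to the thin volume")
    # Cloning: the branch map tracking deltas from the golden image is updated.
    ops.append("update clone metadata (branch map)")
    if raid_parity:
        # RAID protection: parity must be recomputed and written.
        ops.append("write RAID parity")
    # Finally, the data object itself.
    ops.append("write the data object")
    return ops

ops = physical_writes_for_logical_write()
print(f"1 logical write -> {len(ops)} physical writes:")
for op in ops:
    print(" -", op)
```

Even in this simplified model, a single logical write fans out into several physical operations, which is why write performance, not just read performance, dominates the real cost of these space-saving features.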

Herein lies the tradeoff in using these features. While reducing the amount of space required for the VDI images, thin provisioning and cloning increase the demand for high write performance in the storage system. This presents a significant opportunity for storage system vendors who can address these new performance requirements.

Many storage systems that use a mix of flash memory and hard disk technology don't use the higher-performing flash for writes; they reserve it for actively read data. While these storage systems have controllers designed to handle high read loads, the increased write activity generated by thin provisioning and cloning still lands on relatively slow hard disk drives. Because this type of I/O traffic is highly random, the hard drives are constantly "thrashing": the controller essentially sits idle while it waits for the hard disk to rotate into position to complete each write command. Even systems with an SSD tier or cache may have problems providing adequate performance because they, too, don't leverage high-speed flash for write traffic.

Due to the heavy use of thin provisioning and cloning, plus the fact that once a desktop is created a large part of its I/O is write traffic, many cached or tiered systems do not perform well in real-world VDI environments and can produce misleading VDI benchmark scores.

The Truth Behind VDI Benchmarks
Most VDI benchmarks focus primarily on one aspect of the VDI experience: the time it takes to boot a given number of virtual desktops. The problem with using a "boot storm test" is that this important but read-heavy event is only a part of the overall VDI storage challenge. During most of the day desktops are writing data, not reading it. In addition, routine activities such as logging out and applying application updates are very write-intensive. The ability of a storage system to handle these write activities is not measured by many VDI benchmarking routines.
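The gap between a boot-storm score and day-long behavior can be illustrated by blending read and write throughput according to the workload mix. All of the figures below (50,000 read IOPS, 5,000 write IOPS, the 90%/30% read fractions) are invented assumptions for the sketch, not measured VDI numbers.

```python
# Illustrative sketch: how the read/write mix changes the effective IOPS a
# system delivers. All figures are assumptions, not measurements.

def effective_iops(read_iops, write_iops, read_fraction):
    """Blend read and write rates by the time each I/O type consumes.

    The average time per I/O is the weighted mean of the time per read
    and the time per write (a harmonic-style blend).
    """
    time_per_io = read_fraction / read_iops + (1 - read_fraction) / write_iops
    return 1 / time_per_io

# A hybrid system tuned for boot storms: fast cached reads, slow disk-bound writes.
boot_storm = effective_iops(read_iops=50_000, write_iops=5_000, read_fraction=0.9)
steady_day = effective_iops(read_iops=50_000, write_iops=5_000, read_fraction=0.3)

print(f"boot storm (90% reads): {boot_storm:,.0f} effective IOPS")
print(f"steady day (30% reads): {steady_day:,.0f} effective IOPS")
```

Under these assumed numbers, the same system looks several times faster during a read-heavy boot storm than during a write-heavy working day, which is exactly why a boot-only benchmark flatters hybrid designs.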

A second problem with many VDI benchmarking claims is that for their testing configuration they do not use thinly provisioned and cloned volumes. Instead, they use thick volumes in order to show maximum VDI performance.

As discussed above, to keep user adoption high and costs low, most VDI implementations would preferentially use persistent desktops with thin provisioning and cloning. Be wary of vendors claiming that a single device can support over 1,000 VDI users. These claims are usually based on the amount of storage a typical VDI user might need, rather than on the read/write IOPS performance they will most likely need.

Trustworthy VDI Performance
A successful VDI project is one that gains end-user acceptance while reducing desktop support costs. The cost of a storage system that can provide thin provisioning, cloning, and an adequately sized flash storage area to support the virtual environment could be too high for some enterprises to afford. Additional costs can also stem from the performance problems that are likely to appear after the initial desktop boot completes, because of the high level of write I/O.

The simplest solution may be to deploy a solid state appliance like Astute Networks ViSX for VDI. These devices use 100% solid state storage to provide high performance on both reads and writes. This means that boot performance is excellent and performance throughout the day is maintained as well.

With a solid state based solution to the above problems, performance will not be an issue, but cost may still be. Even though it can provide consistent read/write performance throughout the day for a given number of virtual desktops, the cost per desktop of a flash based solution can be significantly higher than a hard drive based system.

However, in larger VDI environments (400+ users) flash-based systems are likely the only viable alternative for meeting the performance requirements, which can easily exceed 100 IOPS per user. Fortunately, flash-based systems can also produce efficiencies that bring down that cost, in addition to the well-known benefits of using a tenth of the floor space, power, and cooling compared to traditional storage systems.

First, the density of virtual desktops per host can be significantly higher with a flash appliance. And, the system is unaffected by the increase in random I/O as the density of virtual machines increases.

Second, the speed of the storage device compensates for the increased demands of thin provisioning and cloning operations run on the hypervisor. These data reduction services can now be used without a performance penalty. This means that the cost of a storage system with a more powerful storage controller and expensive data services like thin provisioning and cloning can be avoided.

Finally, the flash appliance is designed to tap into more of the full potential of solid state-based storage. For example, Astute uses a unique DataPump Engine protocol processor that's designed to specifically accelerate data onto and off of the network and through the appliance to the fast flash storage. This lowers the cost per IOPS compared to other flash-based storage systems.

Most legacy storage systems use traditional networking components and get nowhere near the full potential of flash. In short, the appliance can deliver better performance with the same amount of flash memory. This leads to further increases in virtual machine density and space efficiency because more clones can be made, resulting in a very low cost per VDI user.


VDI benchmark data can be useful, but the test itself must be analyzed. Users should look for tests that focus not only on boot performance but also on performance throughout the day and at the end of the day. If systems with a mix of flash and HDD are used, enough flash must be purchased to avoid cache misses, since these systems rarely have enough disk spindles to provide adequate secondary performance.
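Why an undersized cache is so punishing follows from a simple average-latency calculation. The flash and HDD latency figures below are illustrative assumptions, not measurements of any particular product.

```python
# Illustrative sketch: effective read latency of a hybrid flash+HDD system as
# the cache hit rate drops. Latency figures are assumptions for the example.

def effective_latency_ms(hit_rate, flash_ms=0.1, hdd_ms=8.0):
    """Average read latency: hits served from flash, misses from HDD."""
    return hit_rate * flash_ms + (1 - hit_rate) * hdd_ms

for hit in (0.99, 0.95, 0.80):
    print(f"hit rate {hit:.0%}: {effective_latency_ms(hit):.3f} ms average read latency")
```

Because HDD latency is orders of magnitude worse than flash, even a few percent of cache misses drags the average sharply toward disk speeds, which is why hybrid systems need enough flash to hold essentially the whole active working set.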

A simpler and better performing solution may be to use a solid state appliance like those available from Astute Networks. These allow for consistent, high performance throughout the day at a cost per IOPS that hybrid and traditional storage vendors can't match. Their enablement of the built-in hypervisor capabilities, like thin provisioning, cloning and snapshots, also means that they can be deployed very cost effectively.


George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments.
