Understanding the Role Storage Plays in Virtual Environments

Achieve the necessary dynamics of scale and simplicity of management by leveraging virtualization

In 1998, a little-known company called VMware had just opened the doors of its Palo Alto office. Ever since computing moved from the mainframe to the desktop, the push had been for bigger, faster, and more - more CPUs, more servers, more power, more cooling - and ultimately more complexity, more cost, and more waste. In the early 2000s, studies found that average CPU utilization across both the data center and the desktop was a meager 15 percent.

If someone had predicted then that the Intel and AMD roadmaps would dramatically shift from increased clock speed to multiple processing units within a core (multi-core), they would have been thrown out - just as if they had predicted that major server vendors such as Dell, HP, and IBM would be actively promoting tools and services that allow customers to buy less of their product today. These vendors don't have a choice - IT organizations are being overwhelmed with "server sprawl" - forced into a situation where their operating costs are scaling linearly, or worse, with their capital costs.

Understanding the proposed benefits of virtualization is not difficult - higher utilization rates and efficiency lead to lower capital expenditures and operating costs for physical hardware. This benefit is only the tip of the iceberg: enterprises have found that once they virtualize servers and desktops, they dramatically improve provisioning times, reliability, and disaster recovery, and reduce IT operating expenses. As technology advances and IT environments are automated, the operating benefits to organizations will be enormous. Some use this shift to argue that the IT challenge is solved and that, from now on, IT will be relegated to a minor cost center at each organization; however, Gartner reports that only approximately 15 percent of workloads are actually virtualized (desktop virtualization is in its infancy, and tier 1/2 workloads have yet to be virtualized at scale), so there must be a significant impediment. What is holding back widespread adoption? Traditional storage.

In the late 1990s, network attached storage (NAS) systems dramatically simplified the deployment of storage in enterprise IT environments and a new era of storage performance and stability had arrived. The NAS platform focused on the challenges at the time - transactional I/O, ease-of-deployment, rapid system provisioning - to meet the demands of many organizations growing their systems quickly. The individual storage demands were not significant in terms of capacity or performance; however, the actual number of necessary systems was significant. This approach to storage enabled organizations to consolidate standalone servers and dramatically simplify management and cost compared with direct-attached storage or storage area networks (SAN).

If someone had claimed at the time that they needed a single file system measured in tens of petabytes (PB) instead of tens of gigabytes (GB), they would most likely have been laughed out of the room. The idea that everyone, both business and consumer, would have near-ubiquitous access to high-speed Internet was the dream of every dot-com, yet here we are today, with multi-gigabyte files being routinely downloaded, copied, and archived, with single file systems in production that exceed a petabyte, and with single-file-system performance exceeding 50 Gbps. The explosion of data overwhelmed traditional SAN and NAS systems. New technology emerged that was purpose-built to solve the next generation of storage and computing challenges: scale-out storage.

Scale-out storage departs from the traditional storage building blocks of controllers and shelves. The traditional model is to construct a single system using RAID groups, LUNs, and volumes and scale that by adding more controllers, more volumes, etc. Each storage system has volume, performance or data reliability limitations, so multiple systems must be deployed as an organization grows. These systems form a complicated web of "storage islands" and result in "storage sprawl" - the storage corollary to the challenge of server sprawl that IT administrators found themselves facing in the late '90s.

A scale-out storage system is engineered completely differently and from a clean slate; it eliminates the dependency on layers of abstraction (RAID, volumes, LUNs) and instead gives the file system direct knowledge of every bit and byte. A scale-out storage system is a single file system made up of independent devices - commonly referred to as "nodes" - each providing storage and processing resources to store and serve a portion of a customer's data. Each node is built using commodity hardware and connected via a private high-bandwidth, low-latency interconnect. It is ultimately a distributed storage system, one which provides seamless scaling capabilities and ubiquitous access to large amounts of data while providing a single management point and system namespace.
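
To make the node-based design concrete, here is a minimal sketch in Python (node names and capacities are invented for illustration) of a cluster that pools its nodes' capacity behind one namespace; adding a node grows the pool without creating a new mount point or management domain.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One commodity node contributing disk and compute to the cluster."""
    name: str
    capacity_tb: float

@dataclass
class Cluster:
    """A single file system spanning every node - one namespace, one pool."""
    nodes: List[Node] = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        # Scaling out is just joining another node; clients still see one mount point.
        self.nodes.append(node)

    @property
    def capacity_tb(self) -> float:
        return sum(n.capacity_tb for n in self.nodes)

cluster = Cluster()
for i in range(3):
    cluster.add_node(Node(name=f"node-{i}", capacity_tb=36.0))
print(cluster.capacity_tb)   # 108.0 TB presented as a single namespace
```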

Virtualization is overwhelming traditional SAN and NAS systems. Built to serve small amounts of data very fast, for specific, isolated workloads, these systems are simply incapable of providing a cost-effective, scalable environment. If you consolidate 10 servers or 10 desktops, you reduce the number of CPUs, but you increase the workload on the underlying storage device: it must now absorb 10x the amount of storage, 10x the required throughput and IOPS, and typically 10x the amount of storage administration. Not only are you increasing the burden on particular storage systems, you're increasing the complexity and reducing the predictability of the workloads.
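
A back-of-the-envelope sketch of that multiplication, using assumed per-server figures rather than measured ones:

```python
# Illustrative only: aggregate demand placed on shared storage after
# consolidating ten servers. Per-server figures below are assumptions.
servers = 10
per_server = {"capacity_gb": 500, "iops": 200, "throughput_mbps": 40}

aggregate = {metric: value * servers for metric, value in per_server.items()}
print(aggregate)
# {'capacity_gb': 5000, 'iops': 2000, 'throughput_mbps': 400}
# Ten sets of local disks become one storage system that must absorb
# ten times the capacity, IOPS, and throughput all at once.
```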

In non-virtualized environments, the burden of moving compute resources and applications is high, so workloads are pre-provisioned with CPU, storage, and networking capabilities. If a workload dramatically changes, it is re-provisioned and migrated. In a virtualized environment, however, the application migration burden is significantly reduced, and in many cases workload migration can be automated for real-time utilization of physical resources. While this helps maximize compute resources, it exacerbates the burdens placed on traditional storage.

In a traditional storage environment, each volume or LUN is backed by a particular set of disks and fronted by an individual controller with CPU and memory. Since it is in the administrator's interest to allow workloads to move dynamically within a virtualized environment, it is much more difficult to calibrate the storage requirements effectively. Administrators have few choices with traditional SAN and NAS environments: either overprovision the storage or be forced to choose between maximizing performance and maximizing utilization. This fundamental trade-off occurs in a traditional system because of the need to directly align spindles with workloads - and rarely can an IT administrator both maximize the use of the capacity provided by those spindles and harness their full performance potential.
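
The trade-off shows up in simple spindle arithmetic. The sketch below uses assumed drive and workload figures purely to illustrate why hitting an IOPS target on traditional storage often strands capacity.

```python
import math

# All figures are assumptions chosen for illustration, not vendor specs.
required_iops = 4000          # aggregate demand from the virtualized workloads
required_capacity_tb = 20     # aggregate data those workloads store

iops_per_spindle = 150        # assumed per-drive random I/O capability
capacity_per_spindle_tb = 2   # assumed drive size

spindles_for_perf = math.ceil(required_iops / iops_per_spindle)
spindles_for_capacity = math.ceil(required_capacity_tb / capacity_per_spindle_tb)

spindles = max(spindles_for_perf, spindles_for_capacity)
utilization = required_capacity_tb / (spindles * capacity_per_spindle_tb)

print(spindles_for_perf, spindles_for_capacity, round(utilization, 2))
# 27 drives are needed to hit the IOPS target but only 10 to hold the data,
# so capacity utilization lands around 37 percent - the overprovisioning trap.
```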

Growing this environment is equally challenging. Over time, the increased burden of managing multiple systems (due to storage sprawl), the underutilization of storage capacity (wasted capex), and inherently poor and unreliable application performance have forced IT environments to look for a different solution. This is ultimately why virtualization has not more significantly penetrated mainstream IT environments - the limitations of traditional storage disproportionately increase costs, hinder performance, and add management burden as an environment scales.

Scale-out storage eliminates the challenges of scaling virtualized and non-virtualized workloads alike. A scale-out system automatically uses all available storage, processing, and network resources that are part of the system, allowing those resources to scale transparently and on-the-fly. This unique capability enables an IT administrator to provision now for current workloads and dynamically scale the entire environment as necessary over time without over-provisioning. As the system scales, the IT administrator enjoys the convenience of a single file system - a single mount point/share for users and applications and a single point of administration - regardless of size.

As data is written into a scale-out storage system, it is automatically distributed to multiple nodes and written on multiple disks. This is typically done on a per-file basis, ensuring that no particular group of files is constrained or backed by the same set of nodes or disks, which enables a random distribution of data among the available resources. In addition to randomizing the placement of data across the system, each node utilizes its random-access memory (RAM) as a cache for the most active data blocks stored within that node - allowing that data to be quickly retrieved by another node when requested. When an additional node is added in order to scale performance or capacity, the system automatically redistributes data in order to ensure the correct balance among nodes, as well as the necessary randomization. This unique method of randomizing data placement across a scalable number of spindles and caching the most frequently used blocks allows scale-out storage systems to avoid the hot spots and bottlenecks that plague traditional storage systems, especially those that occur as a result of highly diverse virtualized workloads.
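
A toy model of per-file random placement and rebalancing, not any vendor's actual algorithm and with invented node and file names, makes the idea concrete:

```python
import random
from collections import defaultdict

def place(files, nodes, copies=2):
    """Assign each file to a random subset of nodes - per-file distribution."""
    return {f: random.sample(nodes, copies) for f in files}

def rebalance(layout, nodes, copies=2):
    """Re-randomize placement across the (now larger) node set."""
    return {f: random.sample(nodes, copies) for f in layout}

files = [f"vm-disk-{i}" for i in range(1000)]
nodes = [f"node-{i}" for i in range(4)]
layout = place(files, nodes)

nodes.append("node-4")            # scale out: one more node joins the cluster
layout = rebalance(layout, nodes)

load = defaultdict(int)
for placed_on in layout.values():
    for n in placed_on:
        load[n] += 1
print(dict(load))                 # file counts come out roughly even per node
```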

It is no longer necessary for a storage administrator to configure a particular LUN or volume for a set number of virtual machines and to group virtual machines by potential and peak throughput and I/O requirements. It is no longer necessary for the storage administrator to constantly move virtual machines between LUNs and volumes, attempting to adjust for changing performance requirements while simultaneously trying to maximize utilization of the underlying storage. With a scale-out storage system, an administrator can deploy a diverse set of workloads and allow the storage system to automatically adapt to those workloads without constant intervention - just as that same administrator allows virtual machines to be automatically balanced among hypervisors.

Traditional storage systems limit protection capabilities to RAID-6 or two-way clustered pairs - insurance against a single head failure or two simultaneous drive failures - which is not sufficient in large virtualized or non-virtualized storage environments consisting of hundreds or thousands of terabytes. In addition, traditional storage systems built on hardware-based RAID technology have a fundamental scaling challenge - as drives increase in density (but not in performance) these systems lose the ability to rebuild a drive in a timely manner and provide adequate protection. The only mitigation to this challenge in a traditional system is expensive mirroring techniques, driving up the costs and the complexity.
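
The rebuild-time arithmetic below shows why density hurts; the sustained rebuild rate is an assumption, since real rates depend heavily on controller load and activity.

```python
# Rough rebuild-time arithmetic for a single failed drive in a RAID set.
# The sustained rebuild rate below is an assumption for illustration.

def rebuild_hours(drive_tb: float, rebuild_mb_per_s: float = 50.0) -> float:
    bytes_to_rebuild = drive_tb * 1e12
    seconds = bytes_to_rebuild / (rebuild_mb_per_s * 1e6)
    return seconds / 3600

for size_tb in (0.5, 1, 2, 4):
    print(f"{size_tb} TB drive -> ~{rebuild_hours(size_tb):.1f} hours to rebuild")
# Capacity keeps climbing while per-drive throughput stays roughly flat,
# so the window of reduced protection stretches from hours toward days.
```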

In contrast, a scale-out storage system offers extreme reliability in a very cost-effective manner, which is necessary given the scale of data deployed. As data is written and distributed in the system, forward error correction (FEC) codes are used to generate recovery bits that can be used to seamlessly recover data in the event of multiple node or disk failures. A typical scale-out storage system can survive up to four simultaneous failures of either nodes or drives, regardless of the capacity of the drives. This unique capability is enabled by the intrinsic properties of the distributed storage system and the elimination of the layers of abstraction (RAID, volume manager, LUN). When a failure occurs in a scale-out storage system, multiple nodes can rebuild portions of the data set in parallel, dramatically decreasing rebuild time while minimizing performance impact. Because the system can use FEC as its redundancy technique instead of mirroring, it achieves high levels of protection with very little overhead - as little as 20 percent to protect against four simultaneous failures.
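
As a sanity check on those numbers, the following sketch compares the overhead of an erasure-coded stripe with the mirroring required to survive the same number of failures; the 20-data/4-parity stripe width is an assumption chosen to match the 20 percent figure above.

```python
# Capacity overhead of FEC-style protection versus mirroring, for the same
# failure tolerance. Stripe widths are assumptions for illustration.

def fec_overhead(data_units: int, parity_units: int) -> float:
    """Extra capacity spent on recovery bits, as a fraction of user data."""
    return parity_units / data_units

# 20 data units + 4 parity units tolerates 4 simultaneous node/drive failures:
print(fec_overhead(data_units=20, parity_units=4))   # 0.2 -> 20 percent overhead

# Mirroring that tolerates 4 failures needs 5 full copies of every block:
copies = 5
print(copies - 1)                                     # 4 -> 400 percent overhead
```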

All implementations of technology evolve and change over time, as do workloads and requirements. With a traditional storage system, as the equipment ages or the requirements change, the only true upgrade path is a replacement. The data must be migrated to a larger, newer system, resulting in downtime, complexity and a loss of investment. With a scale-out storage system, new or additional nodes can be added to existing nodes - allowing the evolution of the storage environment in a non-disruptive, cost-effective manner - maximizing the storage investment and reducing complexity.

Virtualization is here to stay. Gartner estimates that by 2013, 80 percent of server workloads will be virtualized. With the advancements in processing and hypervisor technology, most recommended application deployments will be in conjunction with virtualization. As virtualization takes root as an enabling technology, it will be a critical piece of next-generation applications and operating systems, leading to the next phase of IT data center automation. In order to build dynamic, cost-effective data centers, IT administrators will find that they can achieve the necessary dynamics of scale and simplicity of management only by leveraging virtualization, automation, and scale-out storage.

More Stories By Nick Kirsch

Nick Kirsch has over nine years of experience designing and building distributed systems. He joined Isilon in 2002 as a software engineer and participated in the development of version 1.0 of the Isilon OneFS operating system. In April of 2008, he left his post as director of software engineering and moved to Isilon's product management group to focus on maintaining and extending Isilon's technology and product lead in scale-out NAS. He is currently the senior product manager for OneFS and Isilon’s suite of add-on software applications, including SyncIQ, SmartQuotas and SmartConnect.
Prior to joining Isilon, Nick spent three years at InsynQ, Inc., as the director of development. He holds Bachelor of Science degrees in Computer Science and Mathematics from the University of Puget Sound and a Master's degree in Computer Science from the University of Washington.
