
Understanding the Role Storage Plays in Virtual Environments

Achieve the necessary dynamics of scale and simplicity of management by leveraging virtualization

In 1998, a little-known company called VMware had just opened the doors of its Palo Alto office. Ever since computing moved from the mainframe to the desktop, the push had been for bigger, faster, and more - more CPUs, more servers, more power, more cooling - and ultimately more complexity, more cost, and more waste. In the early 2000s, studies found that average CPU utilization across both the data center and the desktop was a meager 15 percent.

If someone had predicted then that the Intel and AMD roadmaps would dramatically shift from increased clock speed to multiple processing cores within a single chip (multi-core), they would have been thrown out - just as if they had predicted that major server vendors such as Dell, HP, and IBM would be actively promoting tools and services that allow customers to buy less of their product today. These vendors don't have a choice - IT organizations are overwhelmed by "server sprawl," forced into a situation where operating costs scale linearly, or worse, with capital costs.

Understanding the proposed benefits of virtualization is not difficult - higher utilization rates and efficiency lead to lower capital expenditures and operating costs for physical hardware. This benefit is only the tip of the iceberg: enterprises have found that once they virtualize servers and desktops, they dramatically improve provisioning times, reliability, and disaster recovery, and reduce IT operating expenses. As technology advances and IT environments are automated, the operating benefits to organizations will be enormous. Some use this shift to argue that the IT challenge is solved and that, from now on, IT will be relegated to a minor cost center in each organization; however, Gartner reports that only approximately 15 percent of workloads are actually virtualized (desktop virtualization is in its infancy, and tier 1/2 workloads have yet to be virtualized in large numbers), so there must be a significant impediment. What is holding back widespread adoption? Traditional storage.

In the late 1990s, network attached storage (NAS) systems dramatically simplified the deployment of storage in enterprise IT environments and a new era of storage performance and stability had arrived. The NAS platform focused on the challenges at the time - transactional I/O, ease-of-deployment, rapid system provisioning - to meet the demands of many organizations growing their systems quickly. The individual storage demands were not significant in terms of capacity or performance; however, the actual number of necessary systems was significant. This approach to storage enabled organizations to consolidate standalone servers and dramatically simplify management and cost compared with direct-attached storage or storage area networks (SAN).

If someone had claimed at the time that they needed a single file system of tens of petabytes (PBs) instead of tens of GBs, they would most likely have been laughed out of the room. The idea that everyone, both business and consumer, would have near-ubiquitous access to high-speed Internet was the dream of every dot-com, yet here we are today, with multi-gigabyte files routinely downloaded, copied, and archived, with single file systems in production that exceed a PB, and with single file-system performance exceeding 50 Gbps. This explosion of data overwhelmed traditional SAN and NAS systems, and new technology emerged that was purpose-built to solve the next generation of storage and computing challenges: scale-out storage.

Scale-out storage departs from the traditional storage building blocks of controllers and shelves. The traditional model is to construct a single system using RAID groups, LUNs, and volumes and scale that by adding more controllers, more volumes, etc. Each storage system has volume, performance or data reliability limitations, so multiple systems must be deployed as an organization grows. These systems form a complicated web of "storage islands" and result in "storage sprawl" - the storage corollary to the challenge of server sprawl that IT administrators found themselves facing in the late '90s.

A scale-out storage system is engineered completely differently and from a clean slate; it eliminates the dependency on layers of abstraction (RAID, volumes, LUNs) and instead gives the file system direct knowledge of every bit and byte. A scale-out storage system is a single file system made up of independent devices - commonly referred to as "nodes" - each providing storage and processing resources to store and serve a portion of a customer's data. Each node is built using commodity hardware and connected via a private high-bandwidth, low-latency interconnect. It is ultimately a distributed storage system, one which provides seamless scaling capabilities and ubiquitous access to large amounts of data while providing a single management point and system namespace.

Virtualization is overwhelming traditional SAN and NAS systems. Built to serve small amounts of data very fast for specific, isolated workloads, these systems are simply incapable of providing a cost-effective, scalable environment. If you consolidate 10 servers or 10 desktops, you reduce the number of physical CPUs, but you concentrate the workload on the underlying storage device. That device now carries 10x the amount of storage, 10x the required throughput and IOPS, and typically 10x the amount of storage administration. Not only are you increasing the burden on particular storage systems, you're increasing the complexity and reducing the predictability of the workloads.
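
To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch; the per-server figures are assumptions chosen purely for illustration, not measurements from the article.

```python
# Rough consolidation arithmetic (assumed per-server figures, for illustration
# only): consolidating N physical servers concentrates their combined storage
# demands onto the shared device backing the hypervisor.
servers = 10
per_server = {
    "capacity_gb": 500,     # assumed usable capacity per server
    "iops": 150,            # assumed steady-state IOPS per workload
    "throughput_mbps": 40,  # assumed throughput per workload
}

aggregate = {metric: value * servers for metric, value in per_server.items()}
print(aggregate)
# {'capacity_gb': 5000, 'iops': 1500, 'throughput_mbps': 400}
# One storage device must now absorb all of this, concurrently, with far less
# predictable access patterns than any single server ever produced.
```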

In non-virtualized environments, the burden of moving compute resources and applications is high, so workloads are pre-provisioned with CPU, storage, and networking capabilities. If a workload dramatically changes, it is re-provisioned and migrated. In a virtualized environment, however, the application migration burden is significantly reduced, and in many cases workload migration can be automated to make real-time use of physical resources. While this helps maximize compute resources, it exacerbates the burdens placed on traditional storage.

In a traditional storage environment, each volume or LUN is backed by a particular set of disks and fronted by an individual controller with its own CPU and memory. Since it is in the administrator's interest to allow workloads to move dynamically within a virtualized environment, it is much more difficult to calibrate the storage requirements effectively. Administrators have few choices with traditional SAN and NAS environments: either overprovision the storage or be forced to choose between maximizing performance and maximizing utilization. This fundamental trade-off occurs in a traditional system because of the need to directly align spindles with workloads - and rarely can an IT administrator both maximize the use of the capacity those spindles provide and harness their full performance potential.
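
The trade-off can be sketched with a simple sizing calculation; the drive figures below are assumptions for illustration, not vendor specifications.

```python
# Sizing a traditional volume by capacity versus by performance
# (hypothetical drive figures, for illustration only).
import math

drive_capacity_gb = 600   # assumed capacity of one spindle
drive_iops = 180          # assumed IOPS one spindle can sustain

required_capacity_gb = 12_000
required_iops = 9_000

spindles_for_capacity = math.ceil(required_capacity_gb / drive_capacity_gb)  # 20
spindles_for_iops = math.ceil(required_iops / drive_iops)                    # 50

spindles_needed = max(spindles_for_capacity, spindles_for_iops)
capacity_utilization = required_capacity_gb / (spindles_needed * drive_capacity_gb)

print(spindles_needed, f"{capacity_utilization:.0%}")
# 50 40% - buying spindles for IOPS leaves 60% of the purchased capacity idle,
# while buying only for capacity would starve the workload of performance.
```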

Growing this environment is equally challenging. Over time, the increased burden of managing multiple systems (due to storage sprawl), the underutilization of storage capacity (wasted capex), and inherently poor and unreliable application performance have forced IT organizations to look for a different solution. This is ultimately why virtualization has not penetrated mainstream IT environments more deeply - the limitations of traditional storage disproportionately increase costs as an environment scales, hinder performance, and add management burden.

Scale-out storage eliminates the challenges of scaling virtualized and non-virtualized workloads alike. A scale-out system automatically uses all available storage, processing, and network resources that are part of the system, allowing those resources to scale transparently and on-the-fly. This unique capability enables an IT administrator to provision now for current workloads and dynamically scale the entire environment as necessary over time without over-provisioning. As the system scales, the IT administrator enjoys the convenience of a single file system - a single mount point/share for users and applications and a single point of administration - regardless of size.

As data is written into a scale-out storage system, it is automatically distributed to multiple nodes and written on multiple disks. This is typically done on a per-file basis, ensuring that no particular group of files is constrained or backed by the same set of nodes or disks, which enables a random distribution of data among the available resources. In addition to randomizing the placement of data across the system, each node utilizes its random-access memory (RAM) as a cache for the most active data blocks stored within that node - allowing that data to be quickly retrieved by another node when requested. When an additional node is added in order to scale performance or capacity, the system automatically redistributes data in order to ensure the correct balance among nodes, as well as the necessary randomization. This unique method of randomizing data placement across a scalable number of spindles and caching the most frequently used blocks allows scale-out storage systems to avoid the hot spots and bottlenecks that plague traditional storage systems, especially those that occur as a result of highly diverse virtualized workloads.
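
The placement and rebalancing behavior described above can be illustrated with a toy routine; this is a hypothetical sketch of the general idea, not any vendor's actual layout algorithm.

```python
# Toy sketch of per-file random placement and post-expansion rebalancing
# (hypothetical; real scale-out file systems use far more sophisticated layout).
import random
from collections import defaultdict

def place_files(files, nodes):
    """Assign each file to a randomly chosen node so no group of files
    is pinned to one set of spindles."""
    layout = defaultdict(list)
    for f in files:
        layout[random.choice(nodes)].append(f)
    return layout

def rebalance(layout, nodes):
    """After adding nodes, move files off the busiest nodes until the
    per-node file counts are roughly even again."""
    for node in nodes:
        layout.setdefault(node, [])
    target = max(1, sum(len(v) for v in layout.values()) // len(nodes))
    for donor in sorted(layout, key=lambda n: len(layout[n]), reverse=True):
        while len(layout[donor]) > target:
            receiver = min(layout, key=lambda n: len(layout[n]))
            if len(layout[receiver]) >= target:
                break
            layout[receiver].append(layout[donor].pop())
    return layout

cluster = ["node1", "node2", "node3"]
layout = place_files([f"vm-{i}.vmdk" for i in range(30)], cluster)
layout = rebalance(layout, cluster + ["node4"])   # scale out by one node
print({node: len(files) for node, files in layout.items()})
```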

It is no longer necessary for a storage administrator to configure a particular LUN or volume for a set number of virtual machines and to group virtual machines by potential and peak throughput and I/O requirements. It is no longer necessary for the storage administrator to constantly move virtual machines between LUNs and volumes, attempting to adjust for changing performance requirements while simultaneously trying to maximize utilization of the underlying storage. With a scale-out storage system, an administrator can deploy a diverse set of workloads and allow the storage system to automatically adapt to those workloads without constant intervention - just as that same administrator allows virtual machines to be automatically balanced among hypervisors.

Traditional storage systems limit protection to RAID-6 or two-way clustered pairs - insurance against a single head failure or two simultaneous drive failures - which is not sufficient in large virtualized or non-virtualized storage environments consisting of hundreds or thousands of terabytes. In addition, traditional storage systems built on hardware-based RAID have a fundamental scaling problem: as drives increase in density (but not in performance), these systems lose the ability to rebuild a drive in a timely manner and provide adequate protection. The only mitigation in a traditional system is expensive mirroring, which drives up both cost and complexity.
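
The rebuild-time problem comes down to simple arithmetic; the per-drive rebuild rate below is an assumption for illustration, and real-world rates vary widely.

```python
# Why rebuild windows grow as drives get denser but not faster
# (assumed per-drive rebuild rate, for illustration only).
def rebuild_hours(drive_capacity_tb, rebuild_rate_mb_per_s):
    bytes_to_rebuild = drive_capacity_tb * 1e12
    return bytes_to_rebuild / (rebuild_rate_mb_per_s * 1e6) / 3600

for capacity_tb in (1, 2, 4):
    print(f"{capacity_tb} TB drive -> {rebuild_hours(capacity_tb, 50):.1f} hours to rebuild")
# 1 TB drive -> 5.6 hours to rebuild
# 2 TB drive -> 11.1 hours to rebuild
# 4 TB drive -> 22.2 hours to rebuild
# The rebuild window - and the exposure to a second failure - scales linearly
# with drive capacity when per-drive rebuild throughput stays flat.
```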

In contrast, a scale-out storage system offers extreme reliability in a very cost-effective manner, which is necessary given the scale of data deployed. As data is written and distributed across the system, forward error correction (FEC) codes are used to generate recovery bits that can seamlessly recover data in the event of multiple node or disk failures. A typical scale-out storage system can survive up to four simultaneous failures of either nodes or drives, regardless of drive capacity. This capability is enabled by the intrinsic properties of the distributed storage system and the elimination of the layers of abstraction (RAID, volume manager, LUN). When a failure occurs in a scale-out storage system, multiple nodes can rebuild portions of the data set in parallel, dramatically decreasing the rebuild time while minimizing the performance impact. Because the system uses FEC as its redundancy technique instead of mirroring, it can achieve high levels of protection with very little overhead - as little as 20 percent to protect against four simultaneous failures.
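
As a rough comparison of protection overhead under assumed stripe widths, the 20 percent/four-failure figure in the text corresponds to a wide stripe such as 16 data blocks plus 4 protection blocks; the sketch below is illustrative, not a description of any specific product's layout.

```python
# Protection overhead: erasure coding (FEC) versus mirroring
# (assumed stripe widths, for illustration only).
def fec_overhead(data_blocks, protection_blocks):
    """Fraction of raw capacity consumed by protection data."""
    return protection_blocks / (data_blocks + protection_blocks)

def mirror_overhead(copies):
    """Fraction of raw capacity consumed by the extra mirror copies."""
    return (copies - 1) / copies

print(f"16+4 FEC:     {fec_overhead(16, 4):.0%} overhead, survives 4 failures")
print(f"2-way mirror: {mirror_overhead(2):.0%} overhead, survives 1 failure")
print(f"3-way mirror: {mirror_overhead(3):.0%} overhead, survives 2 failures")
# 16+4 FEC:     20% overhead, survives 4 failures
# 2-way mirror: 50% overhead, survives 1 failure
# 3-way mirror: 67% overhead, survives 2 failures
```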

All implementations of technology evolve and change over time, as do workloads and requirements. With a traditional storage system, as the equipment ages or the requirements change, the only true upgrade path is a replacement. The data must be migrated to a larger, newer system, resulting in downtime, complexity and a loss of investment. With a scale-out storage system, new or additional nodes can be added to existing nodes - allowing the evolution of the storage environment in a non-disruptive, cost-effective manner - maximizing the storage investment and reducing complexity.

Virtualization is here to stay. Gartner estimates that by 2013, 80 percent of server workloads will be virtualized. With the advancements in processing and hypervisor technology, most recommended deployments for applications will be in conjunction with virtualization. As virtualization takes root as an enabling technology, it will be a critical piece of next-generation applications and operating systems, leading to the next phase of IT data center automation. In order to build dynamic, cost-effective data centers, IT administrators will be able to achieve the necessary dynamics of scale and simplicity of management only by leveraging virtualization, automation, and scale-out storage.

More Stories By Nick Kirsch

Nick Kirsch has over nine years of experience designing and building distributed systems. He joined Isilon in 2002 as a software engineer and participated in the development of version 1.0 of the Isilon OneFS operating system. In April of 2008, he left his post as director of software engineering and moved to Isilon's product management group to focus on maintaining and extending Isilon's technology and product lead in scale-out NAS. He is currently the senior product manager for OneFS and Isilon’s suite of add-on software applications, including SyncIQ, SmartQuotas and SmartConnect.
Prior to joining Isilon, Nick spent three years at InsynQ, Inc., as the director of development. He holds Bachelor of Science degrees in Computer Science and Mathematics from the University of Puget Sound and a Master's degree in Computer Science from the University of Washington.

