The Brave New World of Storage Virtualization

Can you put your virtual environment on autopilot?

I recently found myself intrigued by an article by Jon William Toigo on TechTarget titled "Software-defined infrastructure, or how storage becomes software." In it, Toigo poses the question: Could a software-defined infrastructure, with software-based controls and policies, be the answer to managing and allocating storage? While I am sure Jon would agree we're all tired of the buzzwords around "software-defined anything," the fact of the matter is that we all use the term anyway for lack of a better description of what's occurring today.

Storage specifically is one of those areas I always say is like organizing your room: everyone has their own way of doing it. You want your dresser to be a certain height. You want your mattress pointing a certain direction and the light shining through your window onto your desk at just the right time of day. The truth is, storage is the underpinning of virtualization that everyone wants to architect and manage the way they want to, and no one is going to tell them otherwise... UNTIL - wait for it - ...software can manage this FOR people.

Where is the robot (I wish the Roomba folks would hurry up with this, but I digress) that keeps your room exactly the way you want it to be? The industry constantly tosses this idea around, but no one really seems to know where to find the solution. The truth is that no matter what quirks people have with their storage, everyone shares one goal: ensuring applications can consume the storage resources they need while preserving priority and business logic, and increasing efficiency without introducing risk. This is the promise of every storage vendor on the planet trying to sell their latest auto-tiering, dedupe, and compression solution and all the wonderful bling to trick out their man cave.

The problem, however, is that virtualization blurs the lines around storage, and managing the macro-level supply and demand of resources across the stack becomes intractably complex. Traditional management tools - storage- and virtualization-related alike - are running into the limitations of their stats-based, linear approach to managing diverse environments.

An Illustration
Let's use a real-life example. Assume we have a NetApp environment supporting VMware. The NetApp consists of four aggregates spread across two filers (modeled in the sketch after the list below):

  • Aggr 1 (SATA) and Aggr 2 (SATA) are on Controller A, and each aggregate comprises 2 volumes in VMware
  • Aggr 3 (SAS) and Aggr 4 (SAS) are on Controller B, and each aggregate comprises 2 volumes in VMware
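
To make the moving parts concrete, here is a minimal, hypothetical model of this topology in Python. The class, the IOPS capacities, and the volume names are assumptions for illustration only; none of this comes from NetApp's or VMware's actual APIs.

```python
from dataclasses import dataclass, field

@dataclass
class Aggregate:
    """One NetApp aggregate and the VMware volumes carved from it."""
    name: str
    controller: str
    disk_type: str          # "SATA" or "SAS"
    iops_capacity: int      # assumed sustainable IOPS, purely illustrative
    volumes: list = field(default_factory=list)

# The four aggregates from the example, two per controller.
aggregates = [
    Aggregate("Aggr1", "ControllerA", "SATA", 5_000, ["Vol1", "Vol2"]),
    Aggregate("Aggr2", "ControllerA", "SATA", 5_000, ["Vol3", "Vol4"]),
    Aggregate("Aggr3", "ControllerB", "SAS", 12_000, ["Vol5", "Vol6"]),
    Aggregate("Aggr4", "ControllerB", "SAS", 12_000, ["Vol7", "Vol8"]),
]
```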

Aggr1 sees a spike in IOPS driven by 3 virtual machines demanding IOPS on its 2 volumes. This results in Aggr1 utilizing 93% of its available IOPS capacity due to several high consumers, or "Bully" VMs (to use a traditional storage vendor's language). To compound the issue, the high utilization on disk has now manifested itself up the stack and begun to impact the ability of other workloads to access the storage resources they require on Aggr1.
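
The arithmetic behind that 93% figure might look like the sketch below. The per-VM demand numbers are invented to reproduce the scenario against the assumed 5,000-IOPS capacity from the topology sketch above, and the 15% "Bully" cutoff is my own illustrative heuristic.

```python
# Invented per-VM IOPS demand on Aggr1's two volumes.
vm_iops_demand = {
    "vm-bully-1": 2100, "vm-bully-2": 1400, "vm-bully-3": 800,  # the 3 spiking VMs
    "vm-quiet-1": 200, "vm-quiet-2": 150,                       # other Aggr1 workloads
}
AGGR1_CAPACITY = 5000  # assumed sustainable IOPS

utilization = sum(vm_iops_demand.values()) / AGGR1_CAPACITY
print(f"Aggr1 IOPS utilization: {utilization:.0%}")  # -> 93%

# A crude "Bully" heuristic: any single VM consuming more than 15% of the
# aggregate's IOPS capacity on its own.
bullies = [vm for vm, iops in vm_iops_demand.items()
           if iops / AGGR1_CAPACITY > 0.15]
print("Bully VMs:", bullies)  # the three spiking VMs
```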

[Figure: BraveNew-a]

In this example, a traditional software management system for the storage platform will alert an administrator that Aggr1 has exceeded a tolerable IOPS utilization and that it is time to act. Similarly, the virtualization vendor (in this case VMware) will generate alarms related to the virtualized components layered on top of the storage platform.
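
A hedged sketch of what that traditional, stats-based alerting amounts to: a static threshold and an alarm, with the actual decision left entirely to a human. The 90% tolerance here is an assumption, not any vendor's default.

```python
IOPS_ALERT_THRESHOLD = 0.90  # assumed tolerance, not a vendor default

def check_alarm(name: str, utilization: float) -> None:
    """Fire an alert when a stat crosses a static line - and nothing more."""
    if utilization > IOPS_ALERT_THRESHOLD:
        print(f"ALERT: {name} at {utilization:.0%} IOPS utilization - "
              "operator action required")

check_alarm("Aggr1", 0.93)
```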

[Figure: BraveNew-b]

The administrator must then sift through the charts and graphs in their storage vendor's tool, or their virtualization management system, with the end goal of a resource allocation decision: intelligently allocate storage resources to the applications that need them, preserving quality of service for high-priority applications at the expense of low-priority ones.

More likely, the administrator needs to access both of these interfaces to accomplish this. In this example, the resource decision might be to move the volume to a separate aggregate on Controller A (when in reality this won't do much, given the performance constraints underneath), move the volume itself to faster disk behind a separate storage controller, or move the virtual machine to a volume hosted on Controller B - the options enumerated in the sketch below.
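
For illustration, the three options can be written out as plain data; this is a sketch, not a vendor API, and the "note" field just captures the trade-off of each move.

```python
# The three remediation options from the paragraph above.
candidate_actions = [
    {"action": "move volume", "target": "Aggr2 (Controller A, SATA)",
     "note": "same class of spindles underneath - unlikely to help"},
    {"action": "move volume", "target": "Aggr3/Aggr4 (Controller B, SAS)",
     "note": "faster disk behind a different controller"},
    {"action": "move VM",     "target": "a volume hosted on Controller B",
     "note": "relocate the workload instead of the volume"},
]
for option in candidate_actions:
    print(f"{option['action']} -> {option['target']}: {option['note']}")
```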

[Figure: BraveNew-c]

Now the second part of this equation (and arguably the more difficult part to get right) is: How does the administrator ensure that the domino they decide to push over doesn't create another resource constraint elsewhere in the environment? Fundamentally, traditional storage vendor software offerings and virtualization management tools are incapable of understanding the impact and outcome of any prospective resolution, because they simply do not analyze the interdependencies of that decision across both (virtualized) compute and storage components. The best case for operations is a head start on the troubleshooting process after quality of service has already been impacted or is in the process of being degraded.
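
Here is a minimal sketch of the missing "domino" check: project the post-move utilization of both the source and the destination before committing. The loads, capacities, and 80% comfort limit are assumptions carried over from the earlier sketches.

```python
def project_move(src_load, src_cap, dst_load, dst_cap, moved_iops, limit=0.80):
    """Return (ok, src_after, dst_after) for shifting `moved_iops` of demand."""
    src_after = (src_load - moved_iops) / src_cap
    dst_after = (dst_load + moved_iops) / dst_cap
    return (src_after <= limit and dst_after <= limit), src_after, dst_after

# Moving vm-bully-1's ~2,100 IOPS of demand off Aggr1 (4,650 of 5,000 in use)
# to Aggr3 (assumed 6,000 of 12,000 in use):
ok, src, dst = project_move(4650, 5000, 6000, 12000, 2100)
print(f"safe={ok} src_after={src:.0%} dst_after={dst:.0%}")
# -> safe=True src_after=51% dst_after=68%
```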

Are Things Even at Human Scale Anymore?
To truly accomplish a software-defined storage system, there needs to be a new type of management system capable of connecting these two siloed worlds for the purpose of intelligent decision making and resource allocation.

Toigo paints this gap perfectly when he states, "Our storage needs to be managed and allocated by intelligent humans, with software-based controls and policies serving as a more efficient extension of our ability to translate business needs into automation support."

Following this logic, this new system must go above and beyond looking at application issues in isolation to determine how to properly allocate the infrastructure's entire supply of finite storage resources to every virtualized workload and application - at scale. Inevitably, this means looking across all application resource demands concurrently, and then determining how to service each application's request for the best cost/benefit to the overall platform by allocating the supply of storage intelligently and efficiently. Ideally, this happens prescriptively - before quality of service is degraded.
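
A toy version of that concurrent, cost/benefit-driven decision might look like the following; the relief/pressure numbers and the scoring function are stand-ins for the multi-dimensional analysis (latency, risk, business priority) a real system would need.

```python
# Score every candidate move and pick the best for the platform as a whole,
# instead of silencing one alarm in isolation. All numbers are invented.
moves = [
    {"desc": "vm-bully-1 -> Aggr3", "relief": 0.42, "added_pressure": 0.18},
    {"desc": "vm-bully-2 -> Aggr4", "relief": 0.28, "added_pressure": 0.12},
    {"desc": "Vol1 -> Aggr2",       "relief": 0.30, "added_pressure": 0.30},
]

def score(move: dict) -> float:
    # net benefit = relief on the hot aggregate minus pressure added elsewhere
    return move["relief"] - move["added_pressure"]

best = max(moves, key=score)
print("best move:", best["desc"])  # -> vm-bully-1 -> Aggr3
```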

The second phase of this brave new world will involve incorporating business logic that allows the software-driven control plane to consider business constraints alongside capacity and performance metrics in real time. If tier 1 applications need priority for faster disk over low-priority applications, then the system should be "set it and forget it." If tier 3 applications must be confined to bronze (slow) storage, then the constraint should carry over dynamically to any workload matching this criterion that is provisioned across the lifecycle of the environment.
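
As a sketch, such business logic could be expressed as placement constraints evaluated before any performance-driven move; the tier names and rules here are assumptions, not any product's policy language.

```python
# Business constraints as a placement filter.
policies = {
    "tier1": {"allowed_disk": {"SAS"}},   # priority workloads get fast disk
    "tier3": {"allowed_disk": {"SATA"}},  # bronze workloads stay on slow disk
}

def placement_allowed(workload_tier: str, disk_type: str) -> bool:
    rule = policies.get(workload_tier)
    return rule is None or disk_type in rule["allowed_disk"]

print(placement_allowed("tier1", "SATA"))  # False - would violate policy
print(placement_allowed("tier3", "SATA"))  # True
```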

If 20% overhead needs to be maintained across tier-1 storage resources, then the software should be intelligent enough to keep utilization below that level, instead of notifying administrators once it has been crossed and forcing them to bring the infrastructure back from the brink.
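
A minimal sketch of that proactive headroom control, assuming a simple linear growth projection: the 20% headroom target comes from the paragraph above; the two-week horizon and growth rate are illustrative.

```python
HEADROOM = 0.20
CEILING = 1.0 - HEADROOM  # act before tier-1 utilization reaches 80%

def needs_rebalance(current: float, growth_per_week: float,
                    horizon_weeks: int = 2) -> bool:
    """Trigger a rebalance when projected utilization would breach headroom."""
    projected = current + growth_per_week * horizon_weeks
    return projected > CEILING

print(needs_rebalance(0.70, 0.06))  # True: projected 82% in two weeks
```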

The reality is that everyone has their own idea of how best to trick out their room - in this case, their precious storage. Administrators will never truly be comfortable putting their storage architecture on autopilot until they can rest assured that their policies are maintained while application performance is assured. Any system developed to tackle this brave new world must meet both of these goals simultaneously - a challenge that Toigo argues is beyond human capacity at scale.

More Stories By Eric Bannon

A passion for econometric analysis and statistical modeling led Eric to… wait for it… software. Eric discovered that, by leveraging algorithms based on the principles of supply and demand, software can solve some of the biggest challenges in infrastructure and cloud management today.

Joining VMTurbo in 2011, Eric now serves as a Solution Architect, where he helps organizations unlock the full value of virtualization through the implementation of software-defined control. He holds a B.S. in Economics and Finance from Bentley University, and still likes to deconstruct James Heckman’s econometric models in his free time.
