
VCE: Driving the Velocity of Change in Cloud Computing

VCE's new specialized systems are key to de-risking mission-critical application deployments

When you think Cloud, whether Private or Public, one of the key advantages that comes to mind is speed of deployment. All businesses crave the ability to simply go to a service portal, define their infrastructure requirements and immediately have a platform ready for their new application. Coupled with that, you instantly have service level agreements that generally center on uptime and availability. So, for example, instead of being a law firm that spends most of its budget on an in-house IT department and datacenter, the Cloud provides an invaluable opportunity for businesses to procure infrastructure as a service and consequently focus on delivering their key applications. But while the industry's understanding of Cloud Computing and its benefits has matured, so too has the recognition that what's currently being offered still isn't good enough for mission-critical applications. The reality is that there is still a need for a more focused and refined understanding of what the service level agreements should be, and ultimately a more concerted approach towards the applications themselves. So while buzzwords such as speed, agility and flexibility remain synonymous with Cloud Computing, its success and maturity ultimately depend upon a new focal point, namely velocity.

Velocity is distinct from speed in that it measures not just how fast an object travels but also the direction in which it moves. For example, in a Public Cloud, whether that be Amazon, Azure or Google, no one can dispute the speed. With only a few clicks you have a ready-made server that can immediately be used for testing and development purposes. But while it may be quick to deploy, how optimised is it for your particular environment, business or application requirements? With only generic forms to fill in, specific customization to a particular workload or business requirement is never achieved, as optimization is sacrificed for the sake of speed. Service levels based on uptime and availability are not an adequate measure or guarantee of the successful deployment of an application. It would be considered ludicrous, for example, to purchase a laptop from a vendor whose only guarantee is that it remains powered on, even if it performs atrociously.

In the Private Cloud or traditional IT example, while the speed of deployment is not as quick as that of a Public Cloud, there are other scenarios where speed is witnessed yet fails to produce the results a maturing Cloud market requires. Multiple infrastructure silos can constantly be seen hurrying around, busily firefighting and sustaining a "keeping the lights on" culture, all at rapid speed. Yet while the focus should be on the applications that need to be delivered, the quagmire of the underlying infrastructure persistently takes precedence, with IT admins having to deal constantly with interoperability issues, firmware upgrades, patches and the multiple management panes of numerous components. Moreover, service offerings such as Gold, Silver, Bronze or Platinum are more often than not centered on infrastructure metrics such as the number of vCPUs, storage RAID type, memory and so on, instead of application response times that are predictable and scalable to the end user's stipulated demands.
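To make that contrast concrete, here is a minimal sketch in Python; the tier names, metrics and figures are invented for illustration, not drawn from any provider's actual catalogue. It shows the difference between a tier defined by infrastructure metrics and one defined by what the end user actually experiences:

```python
from dataclasses import dataclass

# A tier described purely by infrastructure metrics, as criticized above.
@dataclass
class InfrastructureTier:
    name: str           # e.g. "Gold"
    vcpus: int          # number of virtual CPUs
    memory_gb: int      # allocated memory
    raid_level: str     # storage RAID type

# A tier described by what the end user actually experiences.
@dataclass
class ApplicationTier:
    name: str                       # e.g. "Gold"
    p95_response_ms: int            # 95th-percentile response-time target
    min_transactions_per_sec: int   # guaranteed sustained throughput

gold_infra = InfrastructureTier("Gold", vcpus=16, memory_gb=128, raid_level="RAID 10")
gold_app = ApplicationTier("Gold", p95_response_ms=200, min_transactions_per_sec=500)

# The first says nothing about how the application will behave under load;
# the second is directly testable against the end user's stipulated demands.
print(gold_infra)
print(gold_app)
```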

For Cloud to embrace the concept of velocity would mean a focused and rigorous approach whose direction is aimed solely at the successful deployment of applications that in turn enable the business to generate revenue quickly. All the pieces of the jigsaw that go into attaining that quick and focused approach require a mentality of velocity to be adopted comprehensively by each silo of the infrastructure team, working concurrently and in cohesion with the application team to deliver value to the business. This approach would also entail a focused methodology for application optimization and, consequently, a service level that measures and targets success based on application performance as opposed to just uptime and availability.
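As a rough sketch of what verifying such an application-performance-based service level could look like in practice (the 200 ms target, the percentile choice and the sample data are all invented for illustration), a provider would measure response times continuously and judge success against the agreed target rather than against uptime alone:

```python
import random

def p95(samples: list[float]) -> float:
    """Return (approximately) the 95th-percentile value of the samples."""
    ordered = sorted(samples)
    index = max(0, int(len(ordered) * 0.95) - 1)
    return ordered[index]

# Simulated response-time samples in milliseconds (illustrative only).
random.seed(42)
samples = [random.gauss(150, 40) for _ in range(1000)]

SLO_P95_MS = 200  # hypothetical agreed target, not a figure from the article

measured = p95(samples)
status = "met" if measured <= SLO_P95_MS else "breached"
print(f"p95 response time: {measured:.1f} ms -> service level {status}")
```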

While some Cloud and service providers may claim that they already work in unison with a focus on applications, behind the scenes this is rarely the case, as they too are caught in the challenge of traditional build-it-yourself IT. Indeed, it's well known that some Cloud hosting providers dupe their end users with pseudo service portals that provide only the impression of an automated procedure for deploying infrastructure. Much closer to the truth are service portals that merely populate a PDF of the requirements, which is then printed out and sent to an offshore admin who in turn provisions the VM as quickly as possible. Additionally, it's more than likely that your Private Cloud or service provider runs a multi-tenant infrastructure with mixed workloads, sitting behind the scenes as logical pools ready to be carved up for your future requirements. While this works for the majority of workloads and SMB applications, as more businesses look to place critical and demanding applications into their Private Cloud to attain the benefits of chargeback and the like, they need an assurance of application response time that is almost impossible to guarantee on a mixed-workload infrastructure. As the Cloud market matures, along with its expectations for application delivery and performance, such procedures and practices will only be suitable for certain markets and workloads.

So for velocity to take precedence within the Private Cloud, Public Cloud or even Infrastructure as a Service model, and to fill this Cloud maturity void, infrastructure needs to be delivered with applications as its focal point. That consequently means a pre-integrated, pre-validated, pre-installed and application-certified appliance that is standardized as a product and optimised to meet scalable demands and performance requirements. This is why the industry will soon start to see the emergence of specialized systems specifically designed and built from inception for the performance optimization of specific application workloads. With applications pre-installed, certified and configured, and with the application and infrastructure vendors working in cohesion, it becomes far more feasible for Private Cloud or service providers to predict, meet and propose application-performance-based service levels. Such an approach would also be ideal for end users who just need a critical application rolled out immediately in-house with minimum fuss and risk.

While a number of such appliances or specialized systems will emerge in the market for applications such as SAP HANA or Cisco Unified Communications, the key is to ensure that they're standardized as well as optimised. This entails a converged infrastructure that rolls out as a single product and consequently has a single upgrade matrix for all of its component patches and firmware upgrades, which in turn also correspond with the application. Additionally, it encompasses a single support model that covers not only the infrastructure but also the application. This not only eliminates vendor finger-pointing and prolonged troubleshooting but also acts as an assurance that responsibility for the application's performance is paramount, regardless of the potential cause of a problem.
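A simplified sketch of the idea behind a single upgrade matrix follows; the component names and version strings are hypothetical, but the principle is that one validated release pins every patch and firmware level together, so drift in any single component is immediately visible:

```python
# Hypothetical release matrix: one validated release pins the version of
# every component (compute, storage, network, hypervisor, application).
RELEASE_MATRIX = {
    "5.0": {
        "compute_firmware": "3.2.1",
        "storage_firmware": "8.1.0",
        "network_os": "7.0.4",
        "hypervisor": "6.7u3",
        "application": "2.4",
    },
}

def validate(deployed: dict[str, str], release: str) -> list[str]:
    """Return descriptions of components that deviate from the validated matrix."""
    expected = RELEASE_MATRIX[release]
    return [
        f"{component}: deployed {version}, matrix requires {expected.get(component, 'n/a')}"
        for component, version in deployed.items()
        if expected.get(component) != version
    ]

deployed = {
    "compute_firmware": "3.2.1",
    "storage_firmware": "8.0.2",  # patched out-of-band: drift from the matrix
    "network_os": "7.0.4",
    "hypervisor": "6.7u3",
    "application": "2.4",
}

for issue in validate(deployed, "5.0"):
    print("Drift detected ->", issue)
```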

The demand for key applications to be monitored, optimised and rolled out with speed and velocity will be faced not only by service providers and Private Cloud deployments but also by internal IT departments struggling with their day-to-day firefighting exercises. To ensure success, IT admins will need a new breed of infrastructure, or specialized systems, that enables them to focus on delivering, optimizing and managing the application without needing to worry about the infrastructure that supports it. This is where the new Vblock specialized systems being offered by VCE come into play. Unlike other companies with huge portfolios of products, VCE has a single focal point, namely Vblocks. By adopting the same approach of velocity that was instilled in the production of standardized Vblock models, end users can now reap the same rewards with new specialized systems that are application-specific. Herein lies the key to Cloud maturity and ultimately the successful deployment of mission-critical applications.

More Stories By Archie Hendryx

SAN, NAS, Backup/Recovery & Virtualisation Specialist.
