A Flexible and Interoperable Cloud Operating System

The foundation of an open cloud ecosystem

Future enterprise data centers will look like private clouds, supporting flexible and agile execution of virtualized services and combining local with public cloud-based infrastructure to enable highly scalable hosting environments.

The key component in these cloud architectures will be the cloud management system, also called the cloud operating system (OS), which is responsible for the secure, efficient and scalable management of cloud resources. Cloud OSs are displacing "traditional" OSs, which will become part of the application stack.

Flexibility in Cloud Operating Systems
A Cloud OS manages the complexity of a distributed infrastructure during the execution of virtualized service workloads.

The Cloud OS manages the servers, hardware devices and infrastructure services that make up a cloud system, giving users the impression that they are interacting with a single elastic cloud of seemingly infinite capacity.

In the same way that a multi-threaded OS defines the thread as the unit of execution and the multi-threaded application as the management entity, supporting communication and synchronization mechanisms, a multi-tier Cloud OS defines the VM as the basic execution unit and the multi-tier virtualized service (a group of VMs) as the basic management entity, supporting different communication mechanisms and their auto-configuration at boot time.

This concept helps to create scalable applications because VMs can be added as needed. Individual multi-tier applications are isolated from one another, but the VMs within the same application are not, as they may share a communication network and services.
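To make the "group of VMs as the management entity" idea concrete, here is a minimal sketch that launches a three-tier service through OpenNebula's XML-RPC interface (the one.vm.allocate call). The endpoint, credentials, template contents and network names are illustrative assumptions, and the exact call signature and response shape vary across OpenNebula versions; real templates would also reference disk images and contextualization.

```python
# Minimal sketch: a multi-tier service managed as a group of VMs,
# submitted through OpenNebula's XML-RPC interface. Endpoint, session
# string and template details below are illustrative assumptions.
import xmlrpc.client

ONE_ENDPOINT = "http://cloud.example.com:2633/RPC2"  # assumed oned XML-RPC URL
SESSION = "oneadmin:password"                        # "user:password" auth string

# One template per tier; the whole group forms the management entity.
TIERS = {
    "frontend": 'NAME = "web"\nCPU = 1\nMEMORY = 512\nNIC = [ NETWORK = "public" ]',
    "app":      'NAME = "app"\nCPU = 2\nMEMORY = 1024\nNIC = [ NETWORK = "private" ]',
    "db":       'NAME = "db"\nCPU = 2\nMEMORY = 2048\nNIC = [ NETWORK = "private" ]',
}

server = xmlrpc.client.ServerProxy(ONE_ENDPOINT)
vm_ids = {}
for tier, template in TIERS.items():
    # one.vm.allocate returns an array whose first element signals success;
    # the exact shape varies with the OpenNebula version.
    response = server.one.vm.allocate(SESSION, template)
    ok, result = response[0], response[1]
    if not ok:
        raise RuntimeError(f"failed to allocate {tier} tier: {result}")
    vm_ids[tier] = result  # numeric VM id assigned by oned
print("service VMs:", vm_ids)
```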

A Cloud OS has a number of functions:

  • Management of Network, Computing and Storage Capacity: Orchestration of storage, network and virtualization technologies to enable the dynamic placement of multi-tier services on distributed infrastructures
  • Management of the VM Life-cycle: Smooth execution of VMs by allocating the resources they require to operate and by offering the functionality needed to implement VM placement policies
  • Management of Workload Placement: Support for the definition of workload- and resource-aware allocation policies, such as consolidation for energy efficiency, load balancing, affinity-aware placement and capacity reservation (see the sketch after this list)
  • Management of VM Images: Exposure of general mechanisms to transfer and clone VM images
  • Management of Information and Accounting: Provision of indicators that can be used to diagnose the correct operation of servers and VMs and to support the implementation of dynamic VM placement policies
  • Management of Security: Definition of security policies for the users of the system, guaranteeing that resources are used only by users with the relevant authorizations, and isolation between workloads
  • Management of Remote Cloud Capacity: Dynamic extension of local capacity with resources from remote providers
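As an illustration of the workload placement function, the following sketch expresses placement policies as match-making hints attached to a VM template, in the style of OpenNebula's scheduler, where a REQUIREMENTS expression filters candidate hosts and a RANK expression orders the survivors. The attribute names follow OpenNebula's documented scheduler; the concrete expressions and values are illustrative assumptions.

```python
# Hedged sketch: workload placement policies as match-making hints on a
# VM template. REQUIREMENTS filters candidate hosts; RANK orders them
# (highest rank wins). Values are illustrative, not a fixed recipe.

PLACEMENT_POLICIES = {
    # Consolidation/packing: prefer hosts already running more VMs, so
    # fewer servers stay powered on (energy efficiency).
    "consolidation": 'RANK = RUNNING_VMS',
    # Striping/load balancing: prefer hosts running fewer VMs.
    "load_balancing": 'RANK = "- RUNNING_VMS"',
    # Load-aware: prefer the hosts with the most free CPU.
    "load_aware": 'RANK = FREE_CPU',
}

def with_policy(template: str, policy: str,
                requirements: str = 'FREE_MEM > 1024') -> str:
    """Attach a host filter and a ranking policy to a VM template string."""
    return "\n".join([template,
                      f'REQUIREMENTS = "{requirements}"',
                      PLACEMENT_POLICIES[policy]])

print(with_policy('NAME = "web"\nCPU = 1\nMEMORY = 512', "consolidation"))
```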

OpenNebula is an open cloud OS that provides the above functionality on a wide range of technologies. However, in my view, the main differentiator of OpenNebula is not its leading-edge functionality but its open, modular and extensible architecture, which enables seamless integration with any service or component in the ecosystem. The open architecture of OpenNebula provides the flexibility that many enterprise IT shops need for internal cloud adoption. Cloud computing is about integration; one solution does not fit all. Moreover, as pointed out in the CloudScaling "Infrastructure-as-a-Service Builder's Guide", the right configuration and components in a cloud architecture also depend on the execution requirements of the service workload.

Interoperability at the Cloud Management Level
The IEEE defines interoperability as "the ability of two or more systems or components to exchange information and to use the information that has been exchanged", and Wikipedia introduces interoperability as "the property referring to the ability of diverse systems and organizations to work together (inter-operate)". Because the cloud management system is the core component of any cloud solution, its interoperability is crucial to its success. We can compare the cloud OS with the kernel in "traditional" operating systems: it implements the basic functions of a cloud and requires well-defined communication with the underlying devices as well as interfaces that expose administration and user functionality.

At the cloud management level, interoperability means:

  • Modularity and flexibility to easily interface with any service or technology in the virtualization and cloud ecosystem, and
  • Standardization to avoid vendor lock-in and to create a healthy community around it

In fact, interoperability should be evaluated from three different angles:

  • Infrastructure User Perspective: Users, application developers, integrators and aggregators require a standard interface for the management of virtual machines, networks and storage. OCCI, a simple REST API for Infrastructure-as-a-Service clouds being defined in the context of the OGF, represents the first standard specification for the life-cycle management of virtualized resources. OpenNebula was the first reference implementation of this open cloud interface, and it also implements the Amazon EC2 API (see the sketch after this list)
  • Infrastructure Management Perspective: Administrators require a cloud OS to interface with existing infrastructure and management services, so that it fits into any data center. OpenNebula provides a flexible back-end that can be integrated with any service for virtualization, storage and networking
  • Infrastructure Federation Perspective: Administrators require a cloud OS to manage resources from partner and commercial clouds
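To ground the user perspective, here is a hedged sketch of what such a standard interface looks like in practice: an HTTP POST of a compute description to an OCCI-style endpoint, in the spirit of the REST API OpenNebula implemented. The host, port, authentication scheme and payload layout are assumptions for illustration, not the normative specification.

```python
# Hedged sketch: creating a compute resource through an OCCI-style REST
# interface. Endpoint, port, auth and payload shape are assumptions.
import requests  # third-party HTTP client: pip install requests

OCCI_ENDPOINT = "http://cloud.example.com:4567"  # assumed OCCI service URL
AUTH = ("oneuser", "secret")                     # assumed HTTP basic auth

COMPUTE_XML = f"""\
<COMPUTE>
  <NAME>web-frontend</NAME>
  <INSTANCE_TYPE>small</INSTANCE_TYPE>
  <DISK><STORAGE href="{OCCI_ENDPOINT}/storage/1"/></DISK>
  <NIC><NETWORK href="{OCCI_ENDPOINT}/network/1"/></NIC>
</COMPUTE>
"""

resp = requests.post(f"{OCCI_ENDPOINT}/compute", data=COMPUTE_XML, auth=AUTH)
resp.raise_for_status()  # fail loudly on HTTP errors
print(resp.text)  # server returns an XML description of the new resource
```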

With high-end computing demands, cloud operating systems will continue to be a very active field of research and development. An open and flexible approach to cloud management ensures uptake and simplifies adaptation to different environments, which is key for interoperability. The existence of an open, standards-based cloud management system like OpenNebula provides the foundation for building a complete cloud ecosystem, ensuring that new components and services in the ecosystem have the widest possible market and user acceptance.

OpenNebula is being enhanced in the context of the RESERVOIR project, the flagship European research initiative in virtualized infrastructures and cloud computing.

More Stories By Ignacio M. Llorente

Dr. Llorente is Director of the OpenNebula Project and CEO & co-founder at C12G Labs. He is an entrepreneur and researcher in the field of cloud and distributed computing, having managed several international projects and initiatives on Cloud Computing and authored many articles in leading journals and conference proceedings. Dr. Llorente is one of the pioneers and leading authorities on Cloud Computing. He has held several appointments as an independent expert and consultant for the European Commission and several companies and national governments. He has given many keynotes and invited talks at the main international events in cloud computing, has served on several Groups of Experts on Cloud Computing convened by international organizations, such as the European Commission and the World Economic Forum, and has contributed to several Cloud Computing panels and roadmaps. He founded and co-chaired the Open Grid Forum Working Group on the Open Cloud Computing Interface, and has participated in the main European projects in Cloud Computing. Llorente holds a Ph.D. in Computer Science (UCM) and an Executive MBA (IE Business School), and is a Full Professor (Catedratico) and the Head of the Distributed Systems Architecture Group at UCM.
