Making Open Source Software

The world continues to embrace and adopt free and open source licensed software across the board

Software is surprisingly dynamic. All software evolves. Bugs are found and fixed. Enhancements are added. New requirements are discovered through using the software. New uses are found for it, and it is shaped to those new uses. Software solutions that are useful and used must, by their very existence, evolve. Well-organized open source software communities create the right conditions to make this dynamism successful.

The world continues to embrace and adopt free and open source licensed software across the board.  Vendors and OEMs, their IT customers, governments and academics are all using, buying and making open source software, and often all three at once.

Using and buying liberally licensed open source software, i.e., consuming such software, are relatively straightforward affairs. You buy a product based on open source licensed software much the way you buy other software, evaluating the company producing the products and services against your own IT requirements and procurement risk profiles. You don't procure Red Hat Linux server software any differently than you historically bought Solaris or might buy Microsoft Windows Server systems.

Using open source software (as opposed to buying a product) adds further considerations: the strength of the community around the open source project, and the cost of supporting that choice either by developing in-house expertise (likely supported by joining the project's community) or by hiring external expertise. You look at a project's how-to documentation and tutorials, forum and mailing list activity, and IRC channels. You consider the availability of contract support from other knowledgeable sources around the community. These considerations don't really change whether the open source software to be used is tools and infrastructure or developer libraries and frameworks. They scale with use, from individuals and the amount of time they have to spend solving their problem, all the way up to company IT departments wanting to use open source licensed software and the time and money trade-offs they're willing to make.

Once one starts to make open source software, i.e., to produce it, a different set of considerations arises. There are really two scenarios for producing open source:

  • One can contribute to an existing project, adding value through bug fixes and new functionality (and possibly non-software contributions like documentation and translations).
  • One can start a new open source project, which means organizing the infrastructure, developing the initial software, and providing for the early community.

The motivation in the first case of contributing to an existing open source project is simple.  People generally start using open source software before they become contributors.  People use software because it solves a problem they have.   Once they use the software for a while, they will generally encounter a bug, find a change they want to make, or possibly document a new use case.  If the user is comfortable with making software changes and the project community has done a good job of making it easy to contribute, then contributions can happen.

While it would be easy to simply make the necessary change and ignore contributing it back, living on a personal forked copy of the software comes at a cost. Others' enhancements and bug fixes can no longer be picked up simply by installing a newer version of the software, and one has to re-apply one's own changes and fixes whenever one does move to a newer version. It is far better to contribute one's changes back to the project community if feasible, working with the committers to ensure they are contributed correctly and merged into the main development tree. The onus is on the community to make it easy to contribute, but it's on the contributor to contribute correctly. The cost of living on a fork gets worse over time as the forked branch drifts further from the mainline development of the project. It is well worth the investment to contribute.

This brings us to the "making" open source software case of starting one's own project.

First, it all starts with software. You must consider the software itself around which a project and its community are to be built. The software must "do" something useful from the beginning. Open source software developer communities are predominantly a discussion that starts with code, and without the code there is no discussion. Even when a fledgling community comes together to discuss a problem first, with an eye to building the solution together, sooner or later someone needs to commit to writing the first working software that will act as a centre of gravity for all other conversations.

If an existing body of software is to be published into an open source community, then there are certain considerations with respect to ownership and licensing. Software is covered by copyright, and someone owns that copyright. Publishing existing software requires the owners to agree to the publication and licensing as open source. The weight of existing code and its cultural history also need to be considered, and may affect the early project community.

The crucial question becomes "why" open source? What motivates the publication of software under an open source license? Why share the software? Why choose NOT to commercialize it? (There are a number of important reasons not to commercialize the software or keep it proprietary.)

The economics of collaboratively developing software are compelling. Writing good software is hard work. Managing the evolution of software over time is equally hard work. Sharing good software and collaboratively developing and maintaining it distributes those costs across a group. Publishing the software as open source and building a development community (however small) is motivated by a desire to evolve the software and share its value, and by an openness to the idea that others in the community will join in, sharing their domain expertise, learning the software's structure, and sharing the costs of evolution.

The economics are also asymmetric. A contribution may represent only a small piece of the contributor's expertise (e.g., a single bug fix or a particular application of an algorithm they personally understood), but the contributor is rewarded with the community's investment in the entire package of software at relatively small personal cost. Likewise, the contribution is valuable to the software's developer (and user) community at large without the community necessarily carrying the cost of the contributor as a full-time member of the developer community. (Indeed, a single contribution may have been the only value the contributor had to give in this instance.)

Motivation to develop an open source community to evolve the software is an essential factor, but so too are knowledge of the problem domain and the internal knowledge of the software needed to anchor the community. The essential motivation to share the software as open source supports the commitment and investment to maintain enough domain expertise and software knowledge to keep the community going and growing. Without all three factors it is difficult for the community to evolve the software and thrive.

One of the first structural considerations is which open source software license to attach to the project. There is an array of licenses approved by the Open Source Initiative as conforming to the Open Source Definition, but only a few typically need consideration, and we'll discuss those at length in another post. The important thing to realize when choosing a license is that it doesn't just set out the legal terms for how the software is shared; it also outlines the social contract for how the community will share.

The next structural consideration for a community is to choose a tool platform to support collaborative development. This is the hub of activity for managing source code versions, distributing built software, handling the lines of communication, and logging issues and bugs. There are a number of free forge sites (e.g., CodePlex, Google Code, GitHub, SourceForge), and the tools all exist as open source themselves if a project wants to develop and manage its own site.

The last structural consideration involves deciding what sort of community one wants to develop. What sort of governance will be required, and when will certain things need to be instituted? There are two very good books available in this space.

Contribution is the lifeblood of an open source software community. It leads to new developers joining the project and learning enough to become committers with responsibility for the code base and its builds. It's what makes the shared economic cost work for all. But as already stated, contributors generally start as users of the software. This means that a project community hoping to attract contributors first needs to attract users. The project's initial participants need to build a solid onramp for users who can then become contributors, by making the software easy to "use": ensuring it's discoverable, downloadable, easily installable, and quickly configurable.

Not all users will contribute. Some may never push the software enough to need to make a change; it simply solves the problems they need to solve. Of those who contribute, some will contribute in very simple ways, reporting bugs for particular use cases. Others may contribute more, and this is where the second onramp needs to be developed by the community. Contributors need to know what sorts of contributions are encouraged, how to contribute, and where to contribute. If code contributions are to be encouraged, having scripts and notes on building the software and testing the baseline build makes it easy for potential contributing developers to get involved.
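As a concrete illustration of such a script, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the pip-installable layout, the "[dev]" extra, and pytest as the test runner are hypothetical and not taken from any particular project. The point is only that a fresh checkout should build and pass its baseline tests with one documented command.

    #!/usr/bin/env python3
    """Hypothetical contributor bootstrap: install the project in editable
    mode and run the baseline test suite in one step. The install target and
    test runner below are illustrative assumptions, not a real project's setup."""
    import subprocess
    import sys

    STEPS = [
        (["python", "-m", "pip", "install", "-e", ".[dev]"],
         "install the project and its development dependencies"),
        (["python", "-m", "pytest", "-q"],
         "run the baseline test suite"),
    ]

    def main() -> int:
        for command, description in STEPS:
            print(f"==> {description}: {' '.join(command)}")
            if subprocess.run(command).returncode != 0:
                print(f"Failed while trying to {description}.", file=sys.stderr)
                return 1
        print("Baseline build and tests passed; you're ready to make changes.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Whatever form such a script takes, pairing it with short notes on where the tests live and how changes are submitted turns a casual user into a potential contributor much faster.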

So building an open source software project follows a pattern:

  • There needs to be useful software, at least a seed around which to build a community.
  • Motivation to share, expertise in the problem to be solved, and an understanding of the software structure will anchor an open source community. The project founder is the starting point for what will hopefully become a community.
  • The project needs to have the structural issues of license, forge, and governance sorted, even if governance becomes an evolving discussion in a growing community.
  • The community needs to build a solid onramp for users, and a second onramp for contributors.  The sooner this happens in a project's life, the faster it can build a community.

One can choose to publish software under an open source license and never build a community. The software isn't "lost", but neither is it hardened or evolved. It may be useful to someone who discovers it, but the dynamic aspects of software development are lost to it. Taking the steps to encourage and build a community around the open source project sets the dynamic software engine in motion and allows the economics of collaborative development and sharing to work at their best.

More Stories By Stephen Walli

Stephen Walli has worked in the IT industry since 1980 as both customer and vendor. He is presently the technical director for the Outercurve Foundation.

Prior to this, he consulted on software business development and open source strategy, often working with partners like Initmarketing and InteropSystems. He organized the agenda, speakers and sponsors for the inaugural Beijing Open Source Software Forum as part of the 2007 Software Innovation Summit in Beijing. The development of the Chinese software market is an area of deep interest for him. He is a board director at eBox, and an advisor at Bitrock, Continuent, Ohloh (acquired by SourceForge in 2009), and TargetSource (each of which represents unique opportunities in the FOSS world). He was also the open-source-strategist-in-residence for Open Tuesday in Finland.

Stephen was Vice-president, Open Source Development Strategy at Optaros, Inc. through its initial 19 months. Prior to that he was a business development manager in the Windows Platform team at Microsoft working on community development, standards, and intellectual property concerns.
