
What networking can learn from CPUs

The rapid growth in compute demand is well understood. To keep up with accelerating requirements, CPUs have gone through a massive transformation over the years. Starting with relatively low-capacity CPUs, the expansion of capability to what is available today has certainly been remarkable – enough to satisfy even Gordon Moore. But keeping up with demand was not a matter of simply making bigger and faster chips. To get more capacity, we actually went smaller.

As it turns out, there are practical limitations to just scaling things larger. To get more capacity out of individual CPUs, we went from large single cores to multi-core processors. This obviously required a change in applications to take advantage of multiple cores. The result is a distributed architecture and the proliferation of “scale out” as a buzzword in our industry.

From an application perspective, the trend continues. Applications that require performance continue to move to multi-tiered applications that are distributed across a number of VMs. This is true for massive web-scale applications like Facebook, but also for other applications like MapReduce.

To get bigger, we get smaller

The technology trend is clear: to get more output, move to smaller blocks of capacity, and coordinate workloads across that capacity.

If this is true, then the future will be lots of small pools of resources that rely on the network for interconnectivity. As applications become more distributed, performance between these pools becomes even more critical. Even small amounts of pool-to-pool latency can aggregate into a significant impact, either because of interesting failure conditions with asynchronous operations or because of the cumulative performance hit.
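The aggregation effect is easy to quantify. Here is a minimal sketch (the 99% per-call figure and the 100-way fan-out are illustrative assumptions, not measurements): if a request fans out in parallel across many pools and is only fast when every call is fast, individually rare slowdowns compound into routine ones.

```python
def p_slow_request(p_fast_call: float, fanout: int) -> float:
    """Probability that a fan-out request misses its latency target.

    Assumes independent calls: the request is fast only if every one
    of `fanout` parallel pool-to-pool calls comes back fast.
    """
    return 1.0 - p_fast_call ** fanout

# If each call is fast 99% of the time, a 100-way fan-out
# misses its target on roughly 63% of requests.
print(round(p_slow_request(0.99, 100), 2))  # → 0.63
```

This is why "small" per-hop latency is not small once workloads are coordinated across many pools.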

As interconnectivity takes a larger role, we should expect the discussion of commoditization of network resources to expand. Today, there is a strong argument around commoditizing the switch hardware (largely via merchant silicon) and the switch operating system (through players like Cumulus, Big Switch, and Pica8). But massive distribution will require both a commoditized interconnect and a commoditized orchestration platform.

On the latter, it would seem that OpenDaylight is poised to lead the charge. With an industry-backed open source solution, it will be difficult to justify premium control products, which should be enough to drive that aspect of the solution toward commodity. But that still leaves the interconnect piece unaccounted for.

Getting to a cheaper interconnect

There is probably a case to be made for leaf-spine architectures here, but if the number of servers continues to expand, there are some ugly economics at play. Scaling out in a leaf-spine architecture requires scaling up at the same time. As the interconnect demands increase, the number of spine switches increases. You eventually get into spines of spines, which starts to look an awful lot like traditional three-tier architectures.

The sheer number of devices and cables drives the cost unfavorably. And when you consider the long-term operational costs tied to power, cooling, space, and management, it’s unclear where the budgetary breaking point is. Beyond the costs, the other issue is that every time a new layer is added, you add a couple more fabric switch hops. If application performance depends on both capacity and latency, then every added switch hop incurs a potentially heavy performance penalty.
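The scale-out/scale-up coupling can be sketched with a back-of-the-envelope model (the port counts and oversubscription ratio below are illustrative assumptions, not vendor figures): in a two-tier leaf-spine fabric, server capacity is capped by the spine radix, and growing past that cap means adding another tier and more hops to every cross-fabric path.

```python
def two_tier_capacity(leaf_ports: int, spine_ports: int, oversub: float = 1.0):
    """Rough capacity of a two-tier leaf-spine fabric.

    `oversub` is the downlink:uplink ratio at each leaf (1.0 = non-blocking).
    Returns (max_servers, total_switches). Ignores breakout cables,
    multi-chassis tricks, and other real-world escape hatches.
    """
    uplinks = int(leaf_ports / (1 + oversub))  # one uplink per spine
    downlinks = leaf_ports - uplinks           # server-facing ports
    spines = uplinks                           # full mesh: each leaf reaches every spine
    leaves = spine_ports                       # each spine port serves one leaf
    return leaves * downlinks, leaves + spines

# 48-port leaves, 32-port spines, 3:1 oversubscription:
# the fabric tops out at 1152 servers across 44 switches.
print(two_tier_capacity(48, 32, 3.0))  # → (1152, 44)
```

Past that ceiling you either buy bigger spines (scale up) or add a spine-of-spines tier, turning every two-hop leaf-spine-leaf path into a four-hop one.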

At some point, you need to move away from multi-hop connectivity through the fabric.

Moving away from multi-hop fabrics

Instinctively, we already know this. There is already a tendency to rack gear up in close proximity to other gear to which it is tied. You might, for example, balance Hadoop loads across a number of servers that are in the same rack. Essentially, what we are doing in these cases is acknowledging that proximity matters, and we are statically designing for it.

But what happens when things aren’t static?

In a datacenter where applications are portable across servers, the network capacity cannot be statically planned. And as application requirements change (often dynamically as load changes), then the network capacity demands will also change. This requires an interconnect that is both high in capacity and dynamic.

This problem is slightly different than the compute problem. On the compute side, it was enough to free up resources (or create additional ones) and then move the application to the resource. In this case, the application is fixed, which means the capacity has to move to the application. When capacity is statically allocated, this poses a problem.

The bottom line

The only solutions here are either to over-provision everything or to move toward a dynamic interconnect. The first is counter to the trend we learned from compute – make things smaller and more distributed. In this case you get out of the problem by paying for it. The question is whether this flies in the face of all the commoditization trends. What good is commoditizing something if the end solution requires buying a ton more of it? You would have to see cost declines match capacity increases, but this seems unlikely: there is no upper limit on capacity, whereas cost will asymptotically approach some profit threshold.

If the trends in compute and storage hold true for networking, then the current trajectory of some networking solutions will need to change. Learning from the past is a great way to shape the future.

[Today’s fun fact: Lobster was one of the main entrees at the first Thanksgiving dinner. They also had Cheddar Bay Biscuits I think.]

The post What networking can learn from CPUs appeared first on Plexxi.


More Stories By Michael Bushong

The best marketing efforts leverage deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong has acquired these skills, having spent 12 years at Juniper Networks, where he led product management, product strategy, and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading their SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase and at ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
