Achieving High Performance at Low Cost

The Dual Core Commodity Cluster Advantage

Advances in clustering technology have redefined the price-to-performance curve for many High Performance Computing (HPC) application areas. The use of specialized high-speed interconnects and fast commodity processors has pushed the envelope to where it is today.

Not all applications need this level of hardware (and cost) to achieve leading-edge price to performance. Indeed, there have been several technological advances that may invite a step back from the traditional edge-of-technology approach to HPC clustering. These advances include the following developments:

  • Introduction of low-cost high-performance multi-core processors
  • Introduction of high-density motherboards and packaging solutions
  • Introduction of optimized Linux Gigabit Ethernet performance
Of particular importance is the fact that these advances are in the commodity sector where high demand and economies of scale have created reasonable price points. Furthermore, alternative high-cost approaches that employ enhanced interconnects and multi-socket motherboards may not be required for certain application classes. Users in this category can expect the commodity approach to deliver new levels of industry-leading price to performance.

In this article, we will discuss how these advances can be used to optimize cluster performance. In addition, we will highlight application areas where these types of clusters are expected to provide optimum performance.

Gaining the Multi-Core Advantage
The multi-core revolution is here. All major processor families have begun using multiple CPU cores to enhance performance. Currently, dual core processors are available at various performance and price levels. Remarkably, most HPC users can immediately benefit from these advances, as most HPC cluster software is designed to use multiple processors.

Specifically, current dual core designs allow HPC users to effectively double the number of processing units while still enjoying traditional commodity price points. In the HPC market, more CPUs are always welcome, but the right design (choice of processor, motherboard, and packaging) is critical to achieving the desired performance.
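
To make this concrete, the sketch below is a minimal MPI program, assuming an MPI implementation such as MPICH or Open MPI (not specified in this article) is installed on the cluster. Launched with two ranks per dual core node, each core simply appears as another MPI process, which is why existing cluster codes can benefit without modification.

  /* Minimal MPI sketch: each core on a dual core node shows up as an
   * ordinary MPI process, so existing MPI codes use both cores unchanged.
   * Assumes an installed MPI implementation (e.g., MPICH or Open MPI). */
  #include <mpi.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(int argc, char *argv[])
  {
      int rank, size;
      char host[256];

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      gethostname(host, sizeof(host));

      /* On a two-node dual core cluster launched with four ranks,
       * each node should report two ranks. */
      printf("rank %d of %d running on %s\n", rank, size, host);

      MPI_Finalize();
      return 0;
  }

A typical build-and-run sequence would be along the lines of "mpicc hello.c -o hello" followed by "mpirun -np 4 -machinefile nodes ./hello" for two dual core nodes; the exact launcher options depend on the MPI implementation in use.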

The recently introduced Pentium D (Presler) processor and Xeon 3000 processors from Intel are examples of commodity high-performance processors. The Presler series is a dual core processor manufactured using the latest 65nm process and is currently available at speeds up to 3.40 GHz. More important for HPC users is that each Presler has a total of 4MB of on-chip cache, divided evenly between the two cores (2MB each). These caches are fed by an 800MHz front-side bus (FSB) and DDR2 memory.
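
A quick way to confirm that Linux sees both cores and the full per-core cache is to scan /proc/cpuinfo. The short sketch below is a Linux-only illustration (not from the article); on a Presler each of the two cores should report a 2048 KB cache.

  /* Linux-only sketch: count logical processors and print each core's
   * reported cache size by scanning /proc/cpuinfo.  On a Pentium D
   * (Presler) this should show two cores, each with a 2048 KB L2 cache. */
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      FILE *fp = fopen("/proc/cpuinfo", "r");
      char line[256];
      int cores = 0;

      if (!fp) {
          perror("/proc/cpuinfo");
          return 1;
      }
      while (fgets(line, sizeof(line), fp)) {
          if (strncmp(line, "processor", 9) == 0)
              cores++;
          else if (strncmp(line, "cache size", 10) == 0)
              printf("core %d %s", cores - 1, line);
      }
      fclose(fp);
      printf("logical processors: %d\n", cores);
      return 0;
  }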

In the HPC cluster sector, the processor battle has typically been between the high-end Intel Xeon and the AMD Opteron. Little consideration has been given to "lesser" processors in the HPC space. As this article will show, this assumption may not hold when actual price and performance numbers are examined.

Check the Numbers - Presler Is on Top
The SPEC benchmarks are usually a good indicator of overall processor performance. Table 1 shows the SPEC benchmarks for an Intel Pentium D (model 940) and an AMD Opteron (model 270). The Pentium D 940 performs roughly 10% better than the Opteron 270, yet at the time of this writing it is priced at about half the cost of an Opteron 270.
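
The price-to-performance arithmetic is simple enough to sketch directly. In the example below, the 10% performance edge comes from the SPEC comparison above, while the dollar figures are hypothetical placeholders chosen only to reflect the roughly two-to-one price difference; substitute current street prices to reproduce the calculation.

  /* Back-of-the-envelope price-to-performance sketch.  The relative SPEC
   * score (1.10) comes from the comparison above; the dollar figures are
   * hypothetical placeholders used only to illustrate the roughly
   * two-to-one price difference. */
  #include <stdio.h>

  int main(void)
  {
      double opteron_score  = 1.00;   /* normalized SPEC score        */
      double pentiumd_score = 1.10;   /* ~10% faster per Table 1      */
      double opteron_price  = 600.0;  /* placeholder, USD             */
      double pentiumd_price = 300.0;  /* ~half the Opteron price      */

      double opteron_ppd  = opteron_score  / opteron_price;
      double pentiumd_ppd = pentiumd_score / pentiumd_price;

      printf("performance per dollar advantage: %.1fx\n",
             pentiumd_ppd / opteron_ppd);   /* prints 2.2x */
      return 0;
  }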

While the SPEC benchmarks are an important yardstick, real application benchmarks often provide a second data point with which to compare processors. The GROMACS molecular dynamics package is known to push processors very hard and is therefore a good test of overall number-crunching capability. The results shown in Table 2 are for the GROMACS Benchmark Suite (Linux Version 3.3). See the references at the end of this article for more information on GROMACS. All results are normalized to the Pentium D (lower means slower) and were run using one processor.

The results show a substantial performance advantage for the Pentium D over the Opteron 270 processor. The Opteron 270 numbers were taken from the GROMACS Web site (www.gromacs.org).

Breakthrough Design - The Caretta Motherboard
When designing clusters, the "more is better" model often works. However, the number of processors (and hence cores) that can be placed on a motherboard needs to be considered carefully. Modern cluster designs currently take advantage of dual socket motherboards and single core processors. While this approach has helped improve processor density, extending this design with dual cores may have some unexpected results. Using dual core processors on dual socket motherboards requires that the memory subsystems and interconnect now service four cores (instead of two) at the same time.

This situation can, in certain cases, seriously degrade the maximum achievable performance of each core. Optimizing onboard memory subsystems is one way to mitigate memory contention, but this approach also introduces a "nonlocal" or NUMA (Non-uniform Memory Access) type of memory structure. In the end, the application determines the best approach, but rethinking the dense core motherboard approach may have some advantages.
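
One way to see this contention directly is a simple, STREAM-style memory bandwidth test, a sketch of which follows (an illustration only, not the official STREAM benchmark). Run a single copy to get a per-core baseline, then run two or four copies simultaneously on a multi-core node; if the per-copy bandwidth drops noticeably, the cores are competing for the shared memory bus.

  /* STREAM-style triad sketch for exposing memory contention.  Run one
   * copy for a baseline, then several copies at once on a multi-core
   * node; a drop in per-copy bandwidth indicates a saturated memory bus. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/time.h>

  #define N    (8 * 1024 * 1024)   /* ~192 MB across three arrays, well past cache */
  #define REPS 10

  static double now(void)
  {
      struct timeval tv;
      gettimeofday(&tv, NULL);
      return tv.tv_sec + tv.tv_usec * 1e-6;
  }

  int main(void)
  {
      double *a = malloc(N * sizeof(double));
      double *b = malloc(N * sizeof(double));
      double *c = malloc(N * sizeof(double));
      long i;
      int r;

      if (!a || !b || !c)
          return 1;
      for (i = 0; i < N; i++) {
          b[i] = 1.0;
          c[i] = 2.0;
      }

      double t0 = now();
      for (r = 0; r < REPS; r++)
          for (i = 0; i < N; i++)
              a[i] = b[i] + 3.0 * c[i];   /* triad: three doubles moved per element */
      double t1 = now();

      /* Report bandwidth; printing a[] keeps the compiler from removing the loop. */
      double bytes = 3.0 * sizeof(double) * N * REPS;
      printf("triad bandwidth: %.1f MB/s (check value %.1f)\n",
             bytes / (t1 - t0) / 1e6, a[N / 2]);

      free(a); free(b); free(c);
      return 0;
  }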

A potentially better solution for many applications is a small single socket motherboard holding a single dual core processor. Such a system would resemble the dual socket/single core cluster designs in use today, where memory and interconnect contention is well understood.

The recently introduced Intel Caretta motherboard (S3000PT) was designed to fill this need. The Caretta supports the Intel Xeon 3000, Pentium D, and Pentium 4 processors; four DIMM slots (DDR2 533/667 with ECC, two-way interleaved, unbuffered); integrated two-port SATA 3.0Gb/s with RAID 0 & 1; an ATI ES1000 (16MB); dual Gigabit Ethernet LAN; and a 5.95-inch x 13-inch form factor. Interestingly, this form factor is one half the size of an Extended ATX motherboard (12" x 13"), so a standard rack-mount ATX enclosure can hold two Caretta motherboards.

The Caretta provides the density found in dual core/dual socket designs, but gives each processor its own local memory environment.

This approach has further advantages. As more cores are placed on a single motherboard, a node failure (motherboard, power supply, or hard drive) removes all of those cores from the cluster at once. By using a separate motherboard for each processor, any single failure is limited to two cores (one processor).

The HyperBlade Advantage
When deploying a high-density production HPC cluster, correct system packaging helps ensure continuous operation. While many users find utility in deploying 1U server packaging solutions, blade systems are designed with a higher level of custom integration. Blades are typically easier to manage, but more expensive than 1U servers. Appro International has developed a hybrid solution in which commodity components are packaged in a blade-like fashion. The advantages of this solution include the use of commodity components inside the "blade" along with the integration and manageability of bladed systems.

Like blades, the Appro HyperBlades are modular servers plugged into a common backplane that eliminates cable clutter. By using a vertical mount approach, the Appro HyperBlade offers enhanced density, providing up to 50 servers in a standard 42U rack cabinet. Larger and smaller rack systems are available as well. Because the HyperBlade is designed to retain the flexibility of a typical 1U server, high-speed interconnect options - including Myrinet, Dolphin, Quadrics, and InfiniBand™ - are easily deployed.

In addition, HyperBlades offer the power advantage of 1U servers while providing the integrated power control and serial management capability found in more expensive blade systems. Finally, each Appro HyperBlade can hold two Caretta motherboards, thus providing excellent processor density (four cores per HyperBlade) and easy management.
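
For reference, the density arithmetic implied by these figures works out as follows; the blade, board, and core counts all come from the description above, and nothing here is measured.

  /* Density arithmetic implied by the figures above: 50 HyperBlades per
   * 42U rack, two Caretta boards per blade, one dual core processor per
   * board. */
  #include <stdio.h>

  int main(void)
  {
      int blades_per_rack  = 50;  /* standard 42U cabinet     */
      int boards_per_blade = 2;   /* two Caretta boards       */
      int cores_per_board  = 2;   /* one dual core processor  */

      printf("cores per rack: %d\n",
             blades_per_rack * boards_per_blade * cores_per_board);  /* 200 */
      return 0;
  }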

The Gigabit Ethernet Advantage
While there are many choices for cluster interconnects, the preferred and lowest-cost option is Gigabit Ethernet. Although Gigabit Ethernet is often dismissed as underpowered for today's clusters, actual tests show the opposite is true for some application classes. Figure 1 shows a NetPIPE TCP throughput graph for a Gigabit Ethernet link between two Pentium D 940 systems.

The connection used an onboard Intel 82573 chipset, an e1000 driver, and a 1500 byte MTU. It should be noted that the single byte latency was 36 microseconds. Not all applications can scale well with this level of performance. However, there are many that will find commodity Gigabit Ethernet more than adequate for their computing needs.
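
For readers who want to reproduce a rough latency number without installing NetPIPE, the sketch below is a stripped-down TCP ping-pong test (an illustration assuming a standard Linux sockets environment; NetPIPE remains the right tool for real measurements). Start it with no arguments on one node, then run it on a second node with the first node's hostname as the argument; half the average round-trip time approximates the one-way latency.

  /* Stripped-down, NetPIPE-style TCP round-trip test.  Run with no
   * arguments on one node (server), and with the server's hostname on a
   * second node (client).  Error handling is minimal by design. */
  #include <arpa/inet.h>
  #include <netdb.h>
  #include <netinet/tcp.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/time.h>
  #include <unistd.h>

  #define PORT 5910
  #define REPS 1000

  static double now(void)
  {
      struct timeval tv;
      gettimeofday(&tv, NULL);
      return tv.tv_sec + tv.tv_usec * 1e-6;
  }

  int main(int argc, char *argv[])
  {
      char byte = 0;
      int one = 1, sock, i;
      struct sockaddr_in addr;

      memset(&addr, 0, sizeof(addr));
      addr.sin_family = AF_INET;
      addr.sin_port = htons(PORT);

      if (argc < 2) {                      /* server: echo one byte back */
          int lsock = socket(AF_INET, SOCK_STREAM, 0);
          setsockopt(lsock, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
          addr.sin_addr.s_addr = INADDR_ANY;
          bind(lsock, (struct sockaddr *)&addr, sizeof(addr));
          listen(lsock, 1);
          sock = accept(lsock, NULL, NULL);
          setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
          for (i = 0; i < REPS; i++) {
              recv(sock, &byte, 1, MSG_WAITALL);
              send(sock, &byte, 1, 0);
          }
      } else {                             /* client: time the round trips */
          struct hostent *h = gethostbyname(argv[1]);
          if (!h) {
              fprintf(stderr, "unknown host %s\n", argv[1]);
              return 1;
          }
          memcpy(&addr.sin_addr, h->h_addr_list[0], h->h_length);
          sock = socket(AF_INET, SOCK_STREAM, 0);
          if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
              perror("connect");
              return 1;
          }
          setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
          double t0 = now();
          for (i = 0; i < REPS; i++) {
              send(sock, &byte, 1, 0);
              recv(sock, &byte, 1, MSG_WAITALL);
          }
          double t1 = now();
          printf("one-way latency: %.1f usec\n",
                 (t1 - t0) / REPS / 2.0 * 1e6);
      }
      close(sock);
      return 0;
  }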


More Stories By Douglas Eadline

Dr. Douglas Eadline has over 25 years of experience in high-performance computing. You can contact him through Basement Supercomputing (http://basement-supercomputing.com).
