Achieving High Performance at Low Cost

The Dual Core Commodity Cluster Advantage

Breakthrough Application Performance
No matter how well low-level benchmarks perform, the combined performance of a cluster is what ultimately determines its viability as a high-performance computing system. To test the scalability and overall effectiveness of a true commodity cluster, eight Pentium D nodes were configured into a test cluster. Each node had one Pentium D 940 running at 3.2 GHz and 8 GBytes of RAM, and was connected to an 8-port Gigabit Ethernet switch (SMC 8508T).

The NAS Parallel Benchmarks (NPB Version 2.3) are a good overall test of cluster scalability. These benchmarks are a small set of programs designed to help evaluate the performance of parallel supercomputers. Derived from computational fluid dynamics (CFD) applications, they consist of five kernels and three pseudo-applications. See the references at the end of this article for more information on the specific programs. Table 3 illustrates the Linux cluster scaling capability of the Class B test suite. Scaling is defined as the speed on N cores divided by the speed on one core.
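
This definition of scaling (and the related parallel efficiency) can be computed directly from measured wall-clock times. A minimal sketch follows; the timings in it are placeholders for illustration, not measured results from this article:

```python
# Compute NPB-style scaling: speed on N cores divided by speed on one core.
# Since speed is inversely proportional to wall-clock time for a fixed
# problem size, scaling reduces to t_one_core / t_n_cores.

def scaling(t_one_core, t_n_cores):
    """Speedup relative to a single core (ideal value equals the core count)."""
    return t_one_core / t_n_cores

def efficiency(t_one_core, t_n_cores, n_cores):
    """Fraction of ideal speedup actually achieved."""
    return scaling(t_one_core, t_n_cores) / n_cores

# Hypothetical wall-clock times (seconds) for a Class B kernel:
t1, t8 = 800.0, 125.0
print(f"8-core scaling:    {scaling(t1, t8):.2f}")        # 6.40
print(f"8-core efficiency: {efficiency(t1, t8, 8):.0%}")  # 80%
```

An efficiency well below 100% at 8 or 16 cores usually points to the interconnect (here, Gigabit Ethernet) rather than the processors.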

The results using four and eight processors are quite good considering the cluster is using a low cost Gigabit Ethernet interconnect. (Test IS is known not to scale well with Ethernet.) Given the speed of the Pentium D, it is quite remarkable that almost all the tests show acceptable scaling at four and/or eight processors. Perhaps the most interesting result is the extra performance gained by using all 16 cores for some of the tests. From the table above, EP, FT, and LU were able to gain additional performance by utilizing the extra core on each processor.

In the case of the LU benchmark, 8 processors using 16 cores delivered 6.3 GFLOPS of combined performance, an increase of 2.5 GFLOPS over the 8-processor, single-core result.

Those tests that did not scale well using the extra cores (e.g., CG, BT) may require additional network throughput to accommodate the extra eight parallel tasks.

Scaling GROMACS
In addition to single-processor performance benchmarks, GROMACS also supports parallel execution on Linux using MPI (Message Passing Interface). Table 4 provides the performance and scaling results for the DPPC phospholipid membrane benchmark.

These results are important for two reasons. First, the use of commodity hardware, including low cost Gigabit Ethernet, is still a viable method to achieve supercomputer level performance at low cost. Second, the presence of an additional core in the Pentium D, at essentially no extra cost, pushed the performance level to that of much larger and more expensive solutions.

For the GROMACS benchmark, the 8-way scaling is excellent, and gaining an additional 3.2 GFLOPS (for free) from the second cores over Gigabit Ethernet is a huge win for this class of cluster.

Price-to-Performance Data
When evaluating high performance computing systems, it's important to keep in mind the price-to-performance of the components. If a Pentium D 940 cluster and an Opteron 270 cluster were purchased, many of the components would be identical (e.g., HyperBlade, hard disk). The price differences lie in the processor, motherboard, and memory. Taking these prices into account using current online market pricing, the Opteron node would cost approximately 1.4 times more than the Pentium D node.

If the price ratio data are combined with the GROMACS performance data, then the price-to-performance of the Opteron 270 solution is almost double that of the Pentium D 940 solution.
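
The arithmetic behind this comparison is simple: cost per unit of performance, where lower is better. A minimal sketch follows; the performance numbers in it are normalized placeholders chosen for illustration, not the Table 5 values:

```python
# Price-to-performance comparison: cost divided by delivered performance
# (lower is better). All figures are normalized, illustrative placeholders.

def cost_per_perf(node_cost, perf):
    """Cost per unit of performance for one node."""
    return node_cost / perf

pentium_d_cost = 1.0   # normalized Pentium D 940 node cost
opteron_cost   = 1.4   # article estimate: ~1.4x the Pentium D node cost

pentium_d_perf = 1.0   # normalized GROMACS performance (placeholder)
opteron_perf   = 0.7   # placeholder; substitute your own measured average

ratio = cost_per_perf(opteron_cost, opteron_perf) / \
        cost_per_perf(pentium_d_cost, pentium_d_perf)
print(f"Opteron cost per unit performance: {ratio:.1f}x the Pentium D's")  # 2.0x
```

With these placeholder inputs, a node costing 1.4 times more while delivering less performance ends up near twice the cost per GFLOPS, which matches the article's qualitative conclusion.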

The price and performance differences are shown in Table 5. The GROMACS performance is an average of the data in Table 2. It should be noted that scaling data for a comparable Opteron cluster node were not available for this benchmark. Assumptions about scalability should always be tested.

Recommendations
Based on the above analysis, the following recommendations will help maximize price-to-performance of an HPC cluster:

1)  The Pentium D platform offers substantial price-to-performance advantages over more traditional server-based systems and should be considered for HPC clusters. In addition, the use of Gigabit Ethernet has been shown to produce exceptional results for this range of testing.
2)  Combining optimized commodity hardware with leading-edge HyperNode packaging from Appro International will allow complete integration of the high-density Caretta motherboards into the cluster environment.
3)  The need for benchmarking should be emphasized when evaluating a cluster architecture. As shown in this article, proper benchmarking can test assumptions about hardware and identify the optimal price-to-performance for your HPC needs. Any choice of cluster hardware should be made on the basis of a benchmark analysis.
4)  The use of dual-core processors can be a huge win for certain codes. In this study we did not test the effect of running different MPI programs on separate cores of the same processor. We expect that, due to different memory access patterns and network usage patterns, this methodology should see similar price-to-performance gains.

Testing Methodology
Tests were conducted using eight dual-core Intel Pentium D (model 940) Presler servers operating at 3.2 GHz. Each server used a Nobhill motherboard (Intel model SE7230NH1), which is functionally equivalent to the Caretta motherboard but larger in size. Each node had 8GB of DDR2 RAM and two Gigabit Ethernet ports (only one of which was used for the testing). An SMC 8508T Ethernet switch connected the servers. Ethernet drivers were set to provide the best performance for a given test. In addition, where appropriate, the MPI tests were run with the "sysv" flag so that cores on the same processor communicate through shared memory. Contact the author for details; contact information can be found in the references at the end of this article.
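
As a concrete illustration, a LAM/MPI 7.1.1 launch along these lines might look like the following sketch. The host file name and benchmark binary are placeholders, and the "-ssi rpi sysv" selection (LAM's SysV shared-memory transport module) reflects our reading of the "sysv" flag mentioned above:

```shell
# Boot the LAM runtime on the nodes listed in "hostfile" (one name per line).
lamboot -v hostfile

# Launch 16 MPI tasks, two per dual-core node. Selecting the "sysv" RPI
# module lets tasks on the same node communicate through SysV shared
# memory rather than TCP. The binary name is a placeholder.
mpirun -np 16 -ssi rpi sysv ./lu.B.16

# Shut down the LAM runtime when the run completes.
lamhalt
```

On a cluster like the one described here, the shared-memory transport matters most for the 16-core runs, where half of all task pairs share a physical processor.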

The software environment was as follows:

  • Base OS: Linux Fedora Core 4 (kernel 2.6.11)
  • Cluster Distribution: Basement Supercomputing Baseline Suite
  • Compilers: gcc and gfortran version 4.0.2
  • MPI: LAM/MPI version 7.1.1

References
  • Appro International: www.appro.com.
  • Benchmark and Author Contact Information: www.basementsupercomputing.com/content/view/23/48/, and [email protected].
  • GROMACS: www.gromacs.org.
  • Basement Supercomputing Baseline Cluster Suite: www.basementsupercomputing.com.
  • NAS Parallel Benchmark: www.nas.nasa.gov/Resources/Software/npb.html.

About the Author
Dr. Douglas Eadline has over 25 years of experience in high-performance computing. You can contact him through Basement Supercomputing (http://basement-supercomputing.com).
