64-Bit Linux Applications Need High-Quality 64-Bit Database Connectivity

Choosing the wrong 64-bit database connectivity can cheat businesses out of the full benefit of 64-bit Linux

Businesses that use Linux as an internal server platform and are only now confronting the challenge of migrating to 64-bit Linux distributions are actually stepping into territory that is already familiar to most Linux users in the business world. 64-bit Linux has been running for years on chipset families such as Intel's EM64T (Extended Memory 64 Technology) and Itanium, AMD's Athlon 64 and Opteron, and IBM's POWER. In addition, 64-bit Linux distributions have been offered for some time by top vendors such as Red Hat and Novell/SuSE, and have been available as a server operating system choice from hardware vendors such as Dell, IBM, and HP.

With all of this availability of and access to 64-bit Linux, why hasn't business been more aggressive about purchasing and deploying 64-bit Linux server platforms? What initially made many businesses hesitant to embrace 64-bit Linux were general concerns about application migration: What will it cost to migrate our existing 32-bit applications to 64-bit? How much benefit will our 32-bit applications actually gain from the move? As time has passed, newer processor architectures such as x86-64 have made 64-bit Linux more attractive to IT organizations of all shapes and sizes. The x86-64 architecture allows both 64-bit and existing 32-bit applications to run simultaneously on a 64-bit operating system platform. Because of this, IT organizations now have much greater flexibility to decide which applications to migrate to 64-bit and when to migrate them, sparing them the business disruption and expense of overhauling every 32-bit application in one enormous project.

The question of which applications would benefit the most from migrating to 64-bit is a subjective one, but in general the answer is memory-hungry, data-intensive, multi-user applications such as relational database management systems (RDBMSs), business intelligence (BI), and data warehousing applications. When run as 32-bit, these applications can easily hit the upper limit of memory addressable by any 32-bit process, even when more physical memory is available to the operating system itself. Hitting this ceiling increases the amount of paging to disk the application must do to accomplish its tasks, and the added disk I/O has the predictable effect of creating a performance bottleneck that limits the application's ability to scale to larger data sets or more concurrent users. By running as 64-bit, these same applications can address all of the memory available to the operating system and can therefore run entirely in memory if the Linux platform has sufficient RAM. The application's performance and support for additional concurrent users can then scale with the total amount of memory available to the operating system, eliminating the performance and capacity bottleneck introduced by running as 32-bit.
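
To make that address-space ceiling concrete, here is a minimal sketch, not taken from this article, that behaves differently when built as 32-bit and as 64-bit. The 6 GB request and the compile commands are illustrative assumptions; real limits also depend on the kernel, the C library, and the RAM installed.

/* Illustrative only: how the 32-bit address-space ceiling shows up in code.
 *
 *   32-bit build:  gcc -m32 -o addrspace addrspace.c   (requires 32-bit libc)
 *   64-bit build:  gcc -o addrspace addrspace.c
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Pointer width determines how much virtual memory one process can address:
       4 bytes caps it near 4 GB; 8 bytes is effectively unbounded for today's RAM. */
    printf("pointer size: %zu bytes\n", sizeof(void *));

    unsigned long long want = 6ULL * 1024 * 1024 * 1024;   /* a 6 GB working set */

    if (want > (size_t)-1) {
        /* A 32-bit size_t cannot even express a 6 GB request; the application
           would have to page to disk instead of holding the data in memory. */
        printf("6 GB exceeds what this build can address\n");
        return 1;
    }

    /* A 64-bit build can reserve the address space; whether the allocation
       succeeds still depends on available memory and the kernel's overcommit
       policy. */
    void *block = malloc((size_t)want);
    printf("6 GB allocation: %s\n", block ? "succeeded" : "failed");
    free(block);
    return 0;
}

The 32-bit build fails at exactly the point where a memory-hungry RDBMS or BI workload would otherwise start paging to disk; the 64-bit build simply keeps the working set in RAM.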

Surprisingly for many readers, there is another key question that businesses of all kinds must consider when developing a strategy for building or migrating 64-bit applications on Linux. It's the X factor in this scenario: "What impact will database connectivity have on the success of my 64-bit Linux applications?"

The Importance of Database Connectivity
Most people I meet, when asked about their database connectivity strategy, tell me the same thing in one of three ways:

  1. "All database connectivity is pretty much the same."
  2. "Database connectivity is a commodity now."
  3. "My database connectivity is 'good enough."
What's left unsaid, but implied, is the obvious corollary: "I don't think I need a database connectivity strategy." The myth underlying each of these statements is the idea that a database connectivity solution from one vendor is architecturally identical to what's offered by a different vendor. What this myth fails to take into account is that there are very different approaches to designing and developing database connectivity. The architecture chosen for a given set of database connectivity components can mean the difference between an end result of poor quality and one of high quality.

Most business applications that run on Linux handle database access to a relational data source through some sort of data access API such as ODBC, JDBC, or even some of the proprietary APIs available from the database vendors. Even if these applications weren't written using one of these APIs directly, it's a good bet that they make use of these APIs and subsequently load a driver or some type of database client libraries under the covers. Whether your business already has Linux applications compiled as 64-bit or is implementing a plan for migrating applications to 64-bit, you will want to carefully consider the type of database connectivity that's used. While there are an increasing number of options to choose from, picking high-quality 64-bit database connectivity can make a difference in determining whether your 64-bit Linux applications actually experience the kinds of performance benefits available to 64-bit applications or suffer from a range of potential performance and scalability limitations.
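
For reference, here is roughly what that API usage looks like in practice: a minimal ODBC sketch in C against unixODBC. The data source name "SalesDSN", the credentials, and the "customers" table are hypothetical placeholders, and error handling is omitted for brevity.

/* Minimal ODBC sketch (C, unixODBC): connect, run one query, fetch rows.
 * "SalesDSN", the credentials, and the "customers" table are hypothetical.
 * Build (assumption): gcc -o odbcdemo odbcdemo.c -lodbc
 */
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV  env;
    SQLHDBC  dbc;
    SQLHSTMT stmt;

    /* Environment and connection handles; request ODBC 3.x behavior. */
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    /* The driver behind the DSN is what actually talks to the RDBMS; it is
       the component whose 64-bit quality this article is concerned with. */
    SQLConnect(dbc, (SQLCHAR *)"SalesDSN", SQL_NTS,
               (SQLCHAR *)"user", SQL_NTS, (SQLCHAR *)"secret", SQL_NTS);

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLExecDirect(stmt, (SQLCHAR *)"SELECT name FROM customers", SQL_NTS);

    SQLCHAR name[64];
    SQLLEN  ind;
    SQLBindCol(stmt, 1, SQL_C_CHAR, name, sizeof(name), &ind);
    while (SQLFetch(stmt) == SQL_SUCCESS)
        printf("%s\n", name);

    /* Tear down in reverse order. */
    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}

Note that when this program is compiled as 64-bit, the driver manager must load a 64-bit driver; a 32-bit driver cannot be mapped into a 64-bit process, which is one reason the quality and availability of 64-bit drivers matters at all.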

After a business has invested so much time and effort in deciding which applications to migrate to 64-bit, and then in actually doing the migration work, it is baffling that some architects and developers don't take more seriously the threat of introducing performance and scalability bottlenecks into an important application. Perhaps it's because they don't know what characteristics to look for when choosing database connectivity, or what impact those characteristics can have on the success or failure of their 64-bit Linux applications.

CPU-Bound, Not I/O-Bound
As mentioned earlier, memory-intensive applications running as 64-bit benefit from access to a much larger addressable memory space and can potentially use all of the memory available to the operating system. This allows the application to run entirely in memory, which eliminates the performance and scalability bottlenecks introduced by the increased (and unnecessary) need to page to disk. For important 64-bit applications that run on Linux, businesses typically invest in server hardware with multiple CPUs and very large amounts of RAM to ensure that the application can run entirely in memory and support growth in data set size, volume, or number of users. That investment pays off only if the application uses database connectivity that is likewise designed to take advantage of additional CPUs and memory.

Along with disk I/O, network I/O is one of the two biggest factors impacting application performance. In the typical client/server deployment scenario, the application and database connectivity components run on one tier and connect across the network to an RDBMS on a second tier. Poor-quality database connectivity components communicate with the RDBMS inefficiently, generating an unnecessary amount of network I/O. Such components are said to be I/O-bound: the extra network I/O introduces a performance and scalability bottleneck that can affect applications of any type. Most businesses can't upgrade the speed of their corporate network or WAN to compensate, so the cost of repeated, unnecessary TCP/IP round-trips adds up quickly even for a simple application operation. As a result, even though more RAM or CPUs can be added to the Linux application server platform, no additional performance or scalability benefit will be observed in the application.
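
One concrete way a well-designed driver reduces round-trips is block fetching: asking for many rows per fetch call instead of one. The sketch below is a hedged example rather than anything prescribed in this article; it sets the standard ODBC attribute SQL_ATTR_ROW_ARRAY_SIZE so that a single SQLFetch returns up to 100 rows, reusing the hypothetical DSN and table from the earlier sketch.

/* Block fetching (C, unixODBC): retrieve up to 100 rows per SQLFetch call,
 * trimming TCP/IP round-trips to the RDBMS. Connection setup is as in the
 * previous sketch; "SalesDSN" and "customers" remain hypothetical.
 */
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

#define ROWS 100

void fetch_in_blocks(SQLHDBC dbc)
{
    SQLHSTMT  stmt;
    SQLCHAR   name[ROWS][64];  /* one buffer slot per row in the block      */
    SQLLEN    ind[ROWS];       /* length/indicator entry for each row       */
    SQLULEN   fetched = 0;     /* rows returned by the most recent fetch    */
    SQLRETURN rc;

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);

    /* Column-wise arrays, 100 rows per fetch, and a counter for rows returned. */
    SQLSetStmtAttr(stmt, SQL_ATTR_ROW_BIND_TYPE, (SQLPOINTER)SQL_BIND_BY_COLUMN, 0);
    SQLSetStmtAttr(stmt, SQL_ATTR_ROW_ARRAY_SIZE, (SQLPOINTER)(SQLULEN)ROWS, 0);
    SQLSetStmtAttr(stmt, SQL_ATTR_ROWS_FETCHED_PTR, (SQLPOINTER)&fetched, 0);

    SQLBindCol(stmt, 1, SQL_C_CHAR, name, sizeof(name[0]), ind);
    SQLExecDirect(stmt, (SQLCHAR *)"SELECT name FROM customers", SQL_NTS);

    /* Each SQLFetch now brings back a block of rows (driver permitting) in
       far fewer network exchanges than one row per round-trip would need. */
    while ((rc = SQLFetch(stmt)) == SQL_SUCCESS || rc == SQL_SUCCESS_WITH_INFO) {
        for (SQLULEN i = 0; i < fetched; i++)
            printf("%s\n", name[i]);
    }

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
}

The same idea is available through JDBC's setFetchSize; either way, the gain comes from the driver moving more data per network exchange rather than from adding hardware to the application tier.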


More Stories By Mike Frost

Mike Frost is a product manager for DataDirect Technologies, a provider of standards-based components for connecting applications to data, and an operating unit of Progress Software Corporation. In his role, Mike is involved in the strategic marketing efforts for the company’s connectivity technologies. He has more than 6 years of experience working with enterprise data connectivity and is currently involved in the development of data connectivity components including ODBC, JDBC, ADO.NET, and XML.
