64-Bit Linux Applications Need High-Quality 64-Bit Database Connectivity

Not choosing the right 64-bit database connectivity can cheat businesses out of the full benefit of 64-bit Linux

Some businesses that run Linux as an internal server platform may only now be confronting the challenge of migrating to 64-bit Linux distributions, but they are stepping into territory that is already familiar to most Linux users in the business world. 64-bit Linux has run for years on chipset families such as Intel's EM64T (Extended Memory 64 Technology) and Itanium, AMD's Athlon 64 and Opteron, and IBM's POWER. In addition, 64-bit Linux distributions have been offered for some time by top vendors such as Red Hat and Novell/SuSE, and have been available as a server operating system choice from hardware vendors such as Dell, IBM, and HP.

With all of this availability of and access to 64-bit Linux, why hasn't business been more aggressive in purchasing and deploying 64-bit Linux server platforms? What initially made many businesses hesitant to embrace 64-bit Linux were general concerns about application migration: What are the costs of migrating our existing 32-bit applications to 64-bit? How much benefit would our 32-bit applications actually gain from being migrated to 64-bit? As time has passed, new processor architectures such as x86-64 have made 64-bit Linux more attractive to IT organizations of all shapes and sizes. The x86-64 architecture allows both 64-bit and existing 32-bit applications to run simultaneously on a 64-bit operating system platform. Because of this, IT organizations now have much greater flexibility to decide which applications to migrate to 64-bit and when to migrate them, sparing them the business disruption and expense of overhauling all of their 32-bit applications in one enormous project.

The question of which applications would benefit the most from migrating to 64-bit is a subjective one, but in general the answer is memory-hungry, data-intensive, multi-user applications such as relational database management systems (RDBMSs), business intelligence (BI), and data warehousing applications. When run as 32-bit, these applications can easily hit the upper limit of memory addressable by any 32-bit application, even when there is more physical memory available to the operating system itself. The result of hitting this upper limit is an increase in the amount of paging to disk that must take place for the application to accomplish its tasks. This increase in disk I/O has the predictable effect of producing a performance bottleneck that limits the application's ability to scale to larger data sets or larger numbers of concurrent users. By running as 64-bit, these same applications can access all of the addressable memory available to the operating system and can therefore run entirely in memory if the Linux platform has sufficient RAM. The application's performance and support for additional concurrent users can then scale with the total amount of memory available to the operating system, eliminating the performance and capacity bottleneck introduced by running as 32-bit.
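The address-space ceiling is easy to see in practice. The short C program below is a minimal sketch of my own (not part of the original article): compiled as a 32-bit binary it cannot even express a 4 GB allocation request, while the same source compiled as 64-bit can use memory well beyond that limit, given enough RAM.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Report the pointer width of this build: 32 or 64 bits. */
        printf("Pointer size: %zu bits\n", sizeof(void *) * 8);

        /* 4 GB expressed as a 64-bit constant so it doesn't silently overflow. */
        unsigned long long want = 4ULL * 1024 * 1024 * 1024;

        if (want > (size_t)-1) {
            /* In a 32-bit build, size_t tops out below 4 GB. */
            printf("A single 4 GB allocation cannot even be requested here.\n");
            return 0;
        }

        void *block = malloc((size_t)want);
        if (block == NULL) {
            printf("4 GB requested but not available to this process.\n");
        } else {
            printf("Allocated 4 GB; this process can address memory beyond 32-bit limits.\n");
            free(block);
        }
        return 0;
    }

Building the same file with gcc -m32 and with the default 64-bit toolchain on a 64-bit Linux host shows the difference directly.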

Surprising as it may be to most readers of this article, there is another key question that businesses of all kinds must consider when developing a strategy for building or migrating 64-bit applications on Linux. It's the X factor in this scenario: "What impact will database connectivity have on the success of my 64-bit Linux applications?"

The Importance of Database Connectivity
Most people that I meet and ask about their database connectivity strategy usually tell me the same thing in one of three ways:

  1. "All database connectivity is pretty much the same."
  2. "Database connectivity is a commodity now."
  3. "My database connectivity is 'good enough."

What's not said, but implied, is the obvious: "I don't think I need a database connectivity strategy." The myth that each of these statements is built on is the idea that a database connectivity solution from one vendor is architecturally identical to what's offered by a different vendor. What this myth fails to take into account is that there are very different approaches to designing and developing database connectivity. The architecture chosen for a given set of database connectivity components can mean the difference between an end result of poor quality and an end result of high quality.

Most business applications that run on Linux handle database access to a relational data source through some sort of data access API such as ODBC, JDBC, or even some of the proprietary APIs available from the database vendors. Even if these applications weren't written using one of these APIs directly, it's a good bet that they make use of these APIs and subsequently load a driver or some type of database client libraries under the covers. Whether your business already has Linux applications compiled as 64-bit or is implementing a plan for migrating applications to 64-bit, you will want to carefully consider the type of database connectivity that's used. While there are an increasing number of options to choose from, picking high-quality 64-bit database connectivity can make a difference in determining whether your 64-bit Linux applications actually experience the kinds of performance benefits available to 64-bit applications or suffer from a range of potential performance and scalability limitations.
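To make the layering concrete, here is a minimal sketch of what that API-level access typically looks like for a C application using ODBC on Linux. This is my own illustration rather than anything specific to DataDirect; the data source name "Sales64" and the credentials are hypothetical, and the driver manager loads whichever driver the DSN names under the covers.

    #include <stdio.h>
    #include <sql.h>
    #include <sqlext.h>

    int main(void)
    {
        SQLHENV env = SQL_NULL_HENV;
        SQLHDBC dbc = SQL_NULL_HDBC;

        /* Allocate an environment handle and request ODBC 3.x behavior. */
        SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
        SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);

        /* Allocate a connection handle; the driver itself is loaded here. */
        SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
        SQLRETURN rc = SQLConnect(dbc,
                                  (SQLCHAR *)"Sales64", SQL_NTS, /* hypothetical DSN */
                                  (SQLCHAR *)"appuser", SQL_NTS,
                                  (SQLCHAR *)"secret", SQL_NTS);

        if (SQL_SUCCEEDED(rc)) {
            printf("Connected through the ODBC driver manager.\n");
            SQLDisconnect(dbc);
        } else {
            printf("Connection failed; check the DSN and the driver's bitness.\n");
        }

        SQLFreeHandle(SQL_HANDLE_DBC, dbc);
        SQLFreeHandle(SQL_HANDLE_ENV, env);
        return 0;
    }

A 64-bit build of this application must load a 64-bit driver manager and a 64-bit driver; the application code itself does not change, which is exactly why the quality of the components underneath it matters.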

After investing a lot of time and effort in planning which applications to migrate to 64-bit, and then actually doing the work of migrating them, it is baffling that some architects and developers don't take more seriously the threat of introducing performance and scalability bottlenecks into an important application. Perhaps it's because they don't know what characteristics to look for when choosing database connectivity, or what impact those characteristics can have on the success or failure of their 64-bit Linux applications.

CPU-Bound, Not I/O-Bound
As mentioned earlier, memory-intensive applications running as 64-bit benefit from a larger addressable memory space and can potentially access all of the memory available to the operating system. This allows the application to run entirely in memory, which eliminates the performance and scalability bottlenecks introduced by the increased (and unnecessary) need to page to disk. For important 64-bit applications that run on Linux, businesses typically invest in server hardware with multiple CPUs and very large amounts of RAM to ensure that the application can run entirely in memory and support an increase in data set size, volume, or number of users. This investment pays off only if the application is using database connectivity that is also designed to take advantage of additional CPUs and larger amounts of memory.
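A basic prerequisite, worth stating even though the article does not spell it out, is that the connectivity components themselves must be native 64-bit builds: a 64-bit Linux process cannot load a 32-bit driver shared object. The small sketch below is my own illustration of one way to check this, using dlopen() to see whether a given driver library is loadable from a 64-bit application; build it as 64-bit and point it at your driver's .so file (the path on the command line is whatever your vendor installs).

    #include <stdio.h>
    #include <dlfcn.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s /path/to/driver.so\n", argv[0]);
            return 2;
        }

        /* A 32-bit library fails here, typically with "wrong ELF class: ELFCLASS32". */
        void *handle = dlopen(argv[1], RTLD_NOW | RTLD_LOCAL);
        if (handle == NULL) {
            fprintf(stderr, "Cannot load %s: %s\n", argv[1], dlerror());
            return 1;
        }

        printf("%s loads cleanly into this %zu-bit process.\n",
               argv[1], sizeof(void *) * 8);
        dlclose(handle);
        return 0;
    }

On older glibc versions you may need to link with -ldl.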

Along with disk I/O, network I/O is one of the two biggest factors impacting application performance. In the typical client/server deployment scenario, the application and its database connectivity components run on one tier and connect across the network to an RDBMS located on a second tier. Poor-quality database connectivity components communicate with the RDBMS inefficiently, generating an unnecessary amount of network I/O. These components are said to be I/O-bound, since the extra network I/O introduces a performance and scalability bottleneck that can affect applications of any type. Most businesses can't upgrade the speed of their corporate network or WAN environments to compensate for this penalty, so the cost of repeated, unnecessary TCP/IP round trips adds up quickly for even a simple application operation. As a result, even though more RAM or CPUs can be added to the Linux application server platform, no additional performance or scalability benefit will be observed in the application.
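One concrete way a well-designed driver reduces those round trips is block fetching. The sketch below is my own hedged example of the standard ODBC mechanism for this (SQL_ATTR_ROW_ARRAY_SIZE), not a description of any particular vendor's driver; it assumes a connection handle obtained as in the earlier sketch, and the table and column names are hypothetical. Asking the driver for a batch of rows per fetch lets it pack many rows into each network round trip instead of paying one round trip per row.

    #include <stdio.h>
    #include <sql.h>
    #include <sqlext.h>

    #define BATCH 100  /* rows requested per fetch call */

    void fetch_in_batches(SQLHDBC dbc)  /* dbc: an already-connected handle */
    {
        SQLHSTMT stmt;
        SQLINTEGER ids[BATCH];
        SQLLEN id_lens[BATCH];
        SQLULEN rows_fetched = 0;

        SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);

        /* Return up to BATCH rows per SQLFetch rather than one at a time. */
        SQLSetStmtAttr(stmt, SQL_ATTR_ROW_ARRAY_SIZE, (SQLPOINTER)(SQLULEN)BATCH, 0);
        SQLSetStmtAttr(stmt, SQL_ATTR_ROWS_FETCHED_PTR, &rows_fetched, 0);

        /* Bind an array so the whole batch lands in application memory at once. */
        SQLBindCol(stmt, 1, SQL_C_SLONG, ids, 0, id_lens);

        SQLExecDirect(stmt, (SQLCHAR *)"SELECT order_id FROM orders", SQL_NTS);

        while (SQL_SUCCEEDED(SQLFetch(stmt))) {
            for (SQLULEN i = 0; i < rows_fetched; i++)
                printf("order_id = %d\n", (int)ids[i]);
        }

        SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    }

Error handling is omitted for brevity; the point is simply that the same query issued with a row array size of 1 forces the driver into many more network round trips for the same result set.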


More Stories By Mike Frost

Mike Frost is a product manager for DataDirect Technologies, a provider of standards-based components for connecting applications to data, and an operating unit of Progress Software Corporation. In his role, Mike is involved in the strategic marketing efforts for the company’s connectivity technologies. He has more than 6 years of experience working with enterprise data connectivity and is currently involved in the development of data connectivity components including ODBC, JDBC, ADO.NET, and XML.
