NoSQL Integration with the Hadoop Ecosystem By @MapR | @BigDataExpo

How NoSQL and Hadoop can work together to tackle Big Data challenges

Apache Hadoop is an open source Big Data processing platform that comes with an extensive ecosystem to support various business and technical needs. Hadoop's specialty is large-scale processing and analytics over volumes of data that traditional technologies cannot handle efficiently. Hadoop is often complemented by the class of database management technologies referred to as NoSQL, which also handles large volumes of data well, but is oriented toward fast reads and writes rather than massive processing. NoSQL and Hadoop can work together to tackle Big Data challenges.

One thing to note up front is that because Hadoop has an associated storage system, it is sometimes mistakenly assumed to be a database management system. It was also sometimes classified as a NoSQL system during its early days, though the NoSQL label is generally accepted today to refer specifically to databases. And while Hadoop is ideal for storing a variety of data types, it is actually about spreading work across many servers in a cluster, which is something that databases were generally not designed to do.

Hadoop Summarized
To describe what Hadoop covers, let's first look at the four primary components of Hadoop:

  • MapReduce: A distributed programming framework that manages the spreading of work across many nodes in a cluster
  • Hadoop Common: A package containing the libraries and utilities to support associated Hadoop modules
  • YARN: A resource management platform (Yet Another Resource Negotiator) for managing computing resources and scheduling tasks
  • HDFS: The Hadoop Distributed File System which manages Hadoop data, and can be substituted with more sophisticated file systems to handle business-critical needs

Each of these components plays a role in defining what Hadoop is. Collectively these Hadoop components support the processing of data-intensive distributed applications, enabling them to work in a deployment potentially made up of thousands of nodes and petabytes of data. Each node is an independent computer and is often assigned a subtask by Hadoop that is run in parallel with other nodes to efficiently complete a much bigger task.

MapReduce and Hadoop Common represent the data processing tools that make Hadoop a great platform for big data. MapReduce supports efficient parallel processing, and its function is to ship applications (which will do the processing) to the nodes where the data reside. This enables "data locality" in which nodes perform the processing on the data they store, to minimize excess network traffic that would result from having nodes process data that reside on other nodes in the cluster.
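To make the map and reduce phases concrete, here is a minimal word-count sketch in plain Python. It is not Hadoop code; it only mimics the three stages a real job goes through (map, shuffle/sort, reduce), with the framework's distribution across nodes left out:

```python
from collections import defaultdict

def map_phase(lines):
    # Mapper: emit a (word, 1) pair for every word on every input line
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle/sort: group all values by key, as the framework
    # does between the map and reduce phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts emitted for each word
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data needs big tools", "data locality matters"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["big"], counts["data"])  # 2 2
```

In a real Hadoop job, many mapper tasks would run this logic in parallel on the nodes holding the input blocks (the "data locality" described above), and only the grouped intermediate pairs would cross the network.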

YARN is a relatively new component of Hadoop that schedules tasks across the cluster. Part of what is known as MapReduce 2.0 (or "MRv2"), it provides a framework that lets you run non-MapReduce jobs in your Hadoop cluster alongside your standard MapReduce jobs.

HDFS provides the storage functionality in Hadoop: it splits large files into blocks (64 MB by default) and distributes them across the clustered nodes. It replicates data so that if a node in the cluster fails, the replicas minimize the risk of data loss. In other words, it has the key design goal of overcoming hardware failure, which is especially critical when low-cost, commodity hardware is used. Another important design goal in Hadoop was to enable swapping out HDFS for another file system. Some Hadoop vendors took advantage of this architecture to provide value-added capabilities beyond those of standard HDFS. As an example, MapR Technologies provides MapR-FS, which improves on the high availability, disaster recovery, and snapshot capabilities of HDFS, while also adding full read/write capabilities, true NFS access, and higher performance.
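A toy model can illustrate the block splitting and replication just described. The sketch below divides a file into fixed-size blocks and assigns each block to three distinct nodes round-robin; the node names, the round-robin policy, and the flat topology are illustrative assumptions (real HDFS placement is rack-aware):

```python
def place_blocks(file_size_bytes, block_size=64 * 1024 * 1024,
                 nodes=("node1", "node2", "node3", "node4"), replication=3):
    """Toy HDFS-style placement: split a file into fixed-size blocks and
    assign each block to `replication` distinct nodes round-robin."""
    num_blocks = -(-file_size_bytes // block_size)  # ceiling division
    placement = {}
    for b in range(num_blocks):
        # Pick `replication` consecutive nodes, wrapping around the cluster
        placement[b] = [nodes[(b + r) % len(nodes)] for r in range(replication)]
    return placement

# A 200 MB file yields four 64 MB blocks (the last one partial),
# each stored on three different nodes
plan = place_blocks(200 * 1024 * 1024)
print(len(plan), plan[0])  # 4 ['node1', 'node2', 'node3']
```

With three replicas per block, any single node failure still leaves two live copies of every block, which is the fault-tolerance property the paragraph above describes.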

MapReduce and HDFS are derived from Google's work on MapReduce and the Google File System (GFS). In addition to the above components, Hadoop consists of a number of related projects like Apache Hive, Apache HBase, and Apache Pig. The wide variety of projects in the Hadoop ecosystem gives you the opportunity to select the right tool for specific use cases with specific requirements when processing and analyzing big data.

... And Now NoSQL
NoSQL, on the other hand, refers to a category of database management systems that differ from Oracle, DB2, MySQL, and other relational database management systems (RDBMSs), and so are often described as "non-relational." This means they don't rely on the relational model, in which data is stored in tabular form with consistent rows and columns. Instead, they are more free-form in structure to accommodate varying and changing data types.

NoSQL databases provide a fast and efficient mechanism for the storage and retrieval of data, promoting goals such as simplicity, horizontal scaling capability, and better availability. NoSQL databases are used most often in big data and analytic applications, particularly ones in which fast data access is more important than large-scale, parallel processing.
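The access pattern that NoSQL databases optimize for, fast reads and writes by key over schema-free records, can be sketched with a minimal in-memory store. This is an illustration of the data model only, not any particular database's API:

```python
class TinyDocStore:
    """Minimal document-store sketch: key-based puts and gets with no
    fixed schema, mimicking the access pattern NoSQL databases favor."""
    def __init__(self):
        self._docs = {}

    def put(self, key, doc):
        self._docs[key] = doc          # constant-time write by key

    def get(self, key):
        return self._docs.get(key)     # constant-time read by key

store = TinyDocStore()
# Documents with different fields coexist; no table schema is required
store.put("user:1", {"name": "Ada", "email": "ada@example.com"})
store.put("user:2", {"name": "Lin", "tags": ["admin"], "last_login": "2014-06-01"})
print(store.get("user:2")["tags"])  # ['admin']
```

Note the contrast with the relational model: the two documents have different fields, yet neither read nor write requires a schema change, which is what makes this style of store a good fit for varying and evolving data.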

How Can Hadoop and NoSQL Work Together?
While Hadoop and NoSQL do not have exactly the same functions, they are both related to solving big data problems. The Hadoop framework is used most commonly for processing huge amounts of data, and NoSQL is designed for fast, efficient storage and retrieval of large volumes of data. Considering that many early deployments of Hadoop entailed integrations with RDBMSs, integrating Hadoop with NoSQL was a logical next step.

In many cases, data sets processed in the Hadoop system are originally created and stored in a NoSQL database. Whenever you have interactive applications that create new data, you should consider how you can analyze that data to derive important business insights. For example, you might use NoSQL to store and deliver messages between end users, and then use Hadoop to scan the aggregate collection of messages for sentiment analysis. Tools like Apache Sqoop, database-specific connectors, or third-party data integration products let you copy data from a NoSQL system into Hadoop for the large-scale processing.
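The messaging-plus-sentiment example above can be sketched end to end. The export step below serializes NoSQL-style documents as newline-delimited JSON, a common interchange format for landing data in HDFS, and the batch step runs a deliberately crude keyword-based sentiment scan; the message texts and word lists are made up for illustration:

```python
import json

messages = [
    {"id": 1, "text": "love the new release"},
    {"id": 2, "text": "this outage is terrible"},
    {"id": 3, "text": "great support, love it"},
]

# Export step: one JSON document per line, ready to land in HDFS
# for batch processing
export = "\n".join(json.dumps(m) for m in messages)

# Batch step: a crude keyword-based sentiment score per message
POSITIVE, NEGATIVE = {"love", "great"}, {"terrible", "outage"}

def score(text):
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

scores = {rec["id"]: score(rec["text"])
          for rec in map(json.loads, export.splitlines())}
print(scores)  # {1: 1, 2: -2, 3: 2}
```

In a real deployment the scoring function would run as a MapReduce job over the full message archive, while the NoSQL database keeps serving live reads and writes.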

There are also independent use cases which may not require both platforms. For example, if it is only necessary to perform parallel processing of simple log data and then store it in HDFS, Hadoop alone may be sufficient. Similarly, if the only required function in a given use case is to store and then retrieve data such as web application session state, a NoSQL database will be sufficient. But these "standalone" use cases might be short-lived, as enterprises will continue to find ways to turn seemingly low-value data into important business insights.

An emerging architecture for NoSQL and Hadoop integration that's worth considering entails "in-Hadoop" databases, built specifically to run within the Hadoop framework. Examples include Apache HBase, Apache Accumulo, and the MapR-DB In-Hadoop NoSQL database, which was architected for business-critical production deployments. With the combined advantages of Hadoop's processing framework and NoSQL's fast data access, but without the overhead of moving data from one cluster to another, the Enterprise Database Edition of the MapR Distribution including Hadoop supports high performance, extreme scalability, high availability, snapshots, disaster recovery, integrated security, and more. The best of both technologies makes this an ideal environment for big data solutions.

To learn more about how you can optimize your enterprise architecture, download this free whitepaper: Optimize Your Enterprise Architecture with Hadoop and NoSQL.

More Stories By Dale Kim

Dale is Director of Industry Solutions at MapR. His technical and managerial experience includes work with relational databases, as well as non-relational data in the areas of search, content management, and NoSQL. Dale holds an MBA from Santa Clara University and a BA in Computer Science from UC Berkeley.
