
Under the Hood: How ExtraHop Delivers 20Gbps of Real-Time Transaction Analysis

This post is authored by ExtraHop CEO Jesse Rothstein.

When we talk to IT teams who are considering ExtraHop, there’s often a discussion about scalability. People are skeptical, and rightfully so. Many monitoring vendors sell the dream of real-time, off-the-wire transaction analysis. In reality, they only do so for a subset of traffic and for a relatively small number of concurrent flows, or they write the bulk of the data to huge disk arrays for post-hoc analysis.

We love to talk to people about scalability and performance because it matters. For real-time analysis, if you can’t keep up, you fall behind, and if you fall behind, you might never catch up again. Additionally, greater scalability of real-time monitoring gives IT teams visibility into very large environments where they were previously flying blind, and it offers a more cost-effective approach with fewer appliances.

[Image: 20Gbps throughput]

The EH8000: An All-in-One Operational Intelligence Platform

Our new EH8000 appliance performs real-time, L2-L7 transaction analysis at a sustained rate of up to 20Gbps. Throughput is only part of the picture. A single EH8000 can analyze more than 400,000 transactions per second, extracting application-level health and performance metrics such as URIs associated with HTTP 500 errors, slow stored procedures in a database, or the location of corrupt files in network-attached storage. This level of performance is far beyond what other passive monitoring vendors even advertise, let alone what they actually deliver. For example, the EH8000 is more than an order of magnitude faster than the recently announced TruView appliance from Visual Network Systems, which, according to their own materials, analyzes only one million transactions per minute, or less than 17,000 per second. At more than 400,000 transactions per second, the ExtraHop platform leads the market.
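For back-of-the-envelope context, here is how those headline numbers relate to one another. This is a quick sketch using only the figures quoted above; the per-transaction byte figure is a derived average, not a published specification.

    #include <cstdio>

    // Back-of-the-envelope arithmetic for the throughput claims above.
    int main() {
        const double gbps = 20.0;                     // claimed sustained throughput
        const double bytes_per_sec = gbps * 1e9 / 8;  // 2.5 GB/s on the wire
        const double tx_per_sec = 400000.0;           // claimed transaction rate

        // Average wire bytes available per analyzed transaction: ~6,250
        printf("avg bytes/transaction: %.0f\n", bytes_per_sec / tx_per_sec);

        // TruView's advertised rate converted to the same units: ~16,667 tx/sec
        printf("1,000,000 tx/min = %.0f tx/sec\n", 1e6 / 60.0);
        return 0;
    }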

Even with our current lead, I believe that ExtraHop will continue to widen the scalability gap compared to other products on the market. This is a bold claim, so please allow me to explain why.

Reason #1 – ExtraHop was built from the ground up for multi-core processing.

The first reason for ExtraHop’s substantial performance lead—and the reason why I believe ExtraHop will continue to widen the gap—is that our platform was built from the ground up for multi-core processing. Network processing is embarrassingly parallel and can be easily split across multiple cores. According to Amdahl’s Law, the more parallelized a system is, the greater the speedup it gains from additional cores. The chart below illustrates the effect of Amdahl’s Law, where a program that is 95% parallelized sees a maximum speedup that is five times the maximum speedup of a program that is only 75% parallelized.[1] While other analysis products will see some benefit from multi-core processing, the ExtraHop platform, unburdened by legacy architectures, will continue to see tremendous benefit.

[Chart: Amdahl’s Law speedup curves for varying parallel fractions. Source: Wikipedia]
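The formula behind that chart is compact enough to sketch directly. Below is a minimal calculation of Amdahl’s Law speedups; the 16-core figure is just an illustrative sample point, not a number taken from the chart.

    #include <cstdio>

    // Amdahl's Law: speedup on n cores for a program whose fraction p
    // of the work is parallelizable.
    double speedup(double p, double n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main() {
        // The maximum speedup (n -> infinity) is 1 / (1 - p):
        // p = 0.75 caps at 4x, p = 0.95 caps at 20x -- the 5x gap
        // described in the text.
        const double ps[] = {0.75, 0.95};
        for (double p : ps) {
            printf("p = %.2f: 16 cores -> %.2fx, limit -> %.0fx\n",
                   p, speedup(p, 16.0), 1.0 / (1.0 - p));
        }
        return 0;
    }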

Vendors who are working to convert their existing code to run faster on newer multi-core processors face an uphill battle. As a recent Dr. Dobb’s report, The State of Parallel Programming 2012, states, “Refactoring existing code is particularly challenging, so the researchers recommend that parallelism be part of the design from the start.” The report goes on to detail the types of concurrency bugs that developers often struggle with when converting existing serial code to parallel code.

Even at ExtraHop, where our software is designed for multi-core processing, we still deal with issues such as lock contention, concurrent access, NUMA (non-uniform memory access) effects, and cache ping-ponging. These are sophisticated problems that can have disastrous consequences if handled poorly, especially in this type of high-performance appliance, and there are relatively few development tools that can help.
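To make one of those problems concrete: cache ping-ponging (false sharing) occurs when two cores repeatedly write to different variables that happen to share a 64-byte cache line, forcing that line to bounce between the cores’ caches. The sketch below is a generic illustration, not ExtraHop code; padding each hot counter onto its own cache line is the standard fix.

    #include <atomic>
    #include <thread>

    // False sharing: 'a' and 'b' are logically independent, but if they sit
    // in the same 64-byte cache line, every increment on one core invalidates
    // the line on the other core, and the line ping-pongs between caches.
    struct Packed {
        std::atomic<long> a{0};
        std::atomic<long> b{0};   // likely the same cache line as 'a'
    };

    // The fix: give each hot counter its own cache line.
    struct Padded {
        alignas(64) std::atomic<long> a{0};
        alignas(64) std::atomic<long> b{0};
    };

    int main() {
        Padded c;   // swap in Packed to observe the slowdown
        std::thread t1([&] { for (int i = 0; i < 10000000; ++i) c.a++; });
        std::thread t2([&] { for (int i = 0; i < 10000000; ++i) c.b++; });
        t1.join();
        t2.join();
        return 0;
    }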

Reason #2 – ExtraHop’s Engineering team is committed to performance. 

Writing high-performance code is a rarely practiced art. The majority of software developers work on front-end applications that have relatively forgiving timing constraints. ExtraHop does not have this luxury with real-time packet processing, so we are laser-focused on writing performance-sensitive code. We are constantly profiling our systems to seek out bottlenecks, especially in the packet path. If new code adds as few as 1,000 CPU cycles, we will notice. We also pay close attention to caching effects, both for dedicated per-core and shared on-die caches. This is not to say that other vendors’ engineering teams are not committed to performance, but simply that our focus on performance is one of the reasons why the ExtraHop platform performs real-time transaction analysis at a sustained 20Gbps.
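As an illustration of what cycle-level profiling looks like, the sketch below times a code path with the x86 time-stamp counter. This is a hypothetical example of the technique, not ExtraHop’s actual tooling; in practice you would pin the thread to a core, warm the caches, and take many samples.

    #include <cstdint>
    #include <cstdio>
    #include <x86intrin.h>   // __rdtsc() with GCC/Clang on x86

    // A stand-in for per-packet work on the hot path.
    static void hot_path() {
        volatile int sink = 0;
        for (int i = 0; i < 100; ++i) sink += i;
    }

    int main() {
        // Read the time-stamp counter before and after the code under test.
        uint64_t start = __rdtsc();
        hot_path();
        uint64_t cycles = __rdtsc() - start;
        // A regression of even ~1,000 cycles shows up clearly at this granularity.
        printf("hot_path: %llu cycles\n", (unsigned long long)cycles);
        return 0;
    }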

As an aside, if you are a software engineer looking to solve kernel-level, systems-engineering problems and enjoy working with an outstanding team of developers, we’re hiring.

Reason #3 – ExtraHop uses OS bypass for the data plane.

ExtraHop uses a custom Linux distribution for activities on the control plane, such as running the administration UI and configuring the system. For the data plane, ExtraHop uses a proprietary networking microkernel that runs on the metal for the fastest possible performance. Optimizing packet scheduling, performing memory management, and talking directly to I/O devices all help to speed up our packet processing considerably.
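The data-plane microkernel itself is proprietary, but the general kernel-bypass pattern it describes is well known: no interrupts, no system calls, no copies. The sketch below shows a generic poll-mode receive loop with hypothetical descriptor and ring names; it illustrates the pattern, not ExtraHop’s implementation.

    #include <cstddef>
    #include <cstdint>

    // Hypothetical receive descriptor: the NIC DMAs a packet into 'buffer'
    // and sets the DONE bit; the CPU never takes an interrupt or a syscall.
    struct RxDescriptor {
        volatile uint32_t status;   // written by the NIC
        uint32_t length;
        uint8_t* buffer;            // pre-allocated, DMA-mapped memory
    };
    constexpr uint32_t DESC_DONE = 1u << 0;
    constexpr size_t RING_SIZE = 4096;

    // Stand-in for the L2-L7 analysis stage.
    static void analyze_packet(const uint8_t*, uint32_t) {}

    // Busy-poll one ring on one dedicated core. A real loop runs forever;
    // max_packets keeps this demo bounded.
    void rx_poll_loop(RxDescriptor* ring, size_t max_packets) {
        size_t head = 0, done = 0;
        while (done < max_packets) {
            RxDescriptor& d = ring[head];
            if (!(d.status & DESC_DONE))
                continue;                        // nothing yet; keep polling
            analyze_packet(d.buffer, d.length);  // process in place, zero-copy
            d.status = 0;                        // return the buffer to the NIC
            head = (head + 1) % RING_SIZE;
            ++done;
        }
    }

    int main() {
        static uint8_t buf[64] = {0};
        static RxDescriptor ring[RING_SIZE] = {};
        ring[0].status = DESC_DONE;              // simulate one received packet
        ring[0].length = sizeof buf;
        ring[0].buffer = buf;
        rx_poll_loop(ring, 1);
        return 0;
    }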

In addition to packet processing, another challenge is recording the stream of health and performance metrics to persistent storage. When we were designing the ExtraHop platform, we considered many commercial and open-source databases. We ended up rejecting these options because they would have required continuous management and administrative tuning. Most importantly, these RDBMSes couldn’t handle the level of sustained reads and writes that the ExtraHop platform requires. We also tried pure file-based systems that didn’t scale and investigated less-structured datastores such as Berkeley DB and Tokyo Cabinet. We could have solved this problem by throwing money at it, such as by requiring our users to purchase an expensive SQL cluster, but we wanted to build an all-in-one appliance with a small footprint that required little care and feeding.

To keep our deployment simple and make real-time analysis available to users immediately, we built a proprietary, high-speed, real-time streaming datastore that is optimized for telemetry, or time-sequenced data. This datastore bypasses the operating system to read from and write to block devices directly, and it uses fast in-memory indexing so that data can be read as soon as it is written, similar to how Google uses Bigtable for web indexing.
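The datastore itself is proprietary, but the core idea, append-only sequential writes plus an in-memory index keyed by timestamp so data is readable the moment it lands, can be sketched in miniature. The class below is a toy illustration of that pattern; a real implementation would write O_DIRECT to raw block devices rather than to an in-memory log.

    #include <cstddef>
    #include <cstdint>
    #include <map>
    #include <vector>

    // Toy append-only, time-keyed store: records land sequentially in a log,
    // and an in-memory index (timestamp -> offset) makes each record readable
    // as soon as its write completes.
    class StreamingStore {
        std::vector<uint8_t> log_;           // stand-in for the raw block device
        std::map<uint64_t, size_t> index_;   // timestamp -> log offset

    public:
        void append(uint64_t ts, const void* rec, size_t len) {
            size_t off = log_.size();
            const uint8_t* p = static_cast<const uint8_t*>(rec);
            log_.insert(log_.end(), p, p + len);
            index_[ts] = off;                // immediately visible to readers
        }

        // Find the offset of the newest record at or before 'ts'.
        bool lookup(uint64_t ts, size_t& off) const {
            auto it = index_.upper_bound(ts);
            if (it == index_.begin()) return false;
            off = std::prev(it)->second;
            return true;
        }
    };

    int main() {
        StreamingStore store;
        uint32_t metric = 42;
        store.append(1700000000, &metric, sizeof metric);
        size_t off = 0;
        return store.lookup(1700000000, off) ? 0 : 1;
    }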

[Diagram: ExtraHop Platform Architecture]

You Are Right to Care About Scalability and Performance

ExtraHop cares as much about performance as you do. Performance affects how much value you get from the product, and it also determines data fidelity. If a load balancer, switch, firewall, or other in-line device is overloaded and drops packets, the sender will simply retransmit them (assuming a reliable transport protocol such as TCP). There is no such safety net for an out-of-band device fed by a SPAN port or network tap: if the device is overloaded, packets are dropped silently, and analysis suffers.

When choosing a real-time transaction-analysis solution, be sure to question the vendor on scalability. Ask them when their solution was first developed and whether it has been redesigned for multi-core chip architectures. If they claim a certain level of throughput, ask whether they can handle high packet rates as well—many monitoring products that do not scale in real-world environments only talk about one end of the performance curve. And, finally, be sure to contact us so we can show you the ExtraHop difference!

 


[1] It’s worthwhile to consider why parallelization is necessary. Since around 2005, increases in clock speed have plateaued while transistor counts have continued to grow according to Moore’s Law (see the graph below). During the same period, CPUs have gone from one to two to four to six to eight to sixteen cores; the dual-core Itanium 2, for example, shipped in 2006. To see maximum benefit from new processors, software developers must understand how to parallelize their systems. As Herb Sutter observed in the essay cited below, this shift means the free lunch is over: software no longer gets faster simply by running on newer hardware. As a recent Intel whitepaper put it, “The future of computing is parallel computing, and the future of programming is parallel programming.”

[Graph: CPU clock speed and transistor counts over time. Source: “The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software” by Herb Sutter]

 
