Profiling Python Performance Using line_profiler, statprof, and cProfile

Let’s look at profiling in Python

If you’re a regular here, you know how much we care about the full-stack view of applications. Today, let’s zoom in a bit, and talk about the performance of a single layer. In particular, let’s look at profiling in Python.

As an example, let’s take this bit of code for calculating confidence intervals of the mean of a set of data. If you’re not familiar with confidence intervals, they provide a set of bounds for a given statistic; with a 95% interval, the calculated range will contain the true mean 95% of the time. One way of calculating this is bootstrapping: generate a number of new data sets from the data you have (random selection, with replacement), and look at the statistic across that (meta?-)data set. In practice, you typically run this a couple of times, look at the results, and see if they’re converging. If not, you keep running until they do converge, or until you hit some max number of iterations.
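
A minimal sketch of that approach might look like this (the names and structure here are illustrative, not the original listing):

    import random

    def confidence_interval(data, iterations=1000, confidence=0.95):
        # Bootstrap: resample with replacement, collect each resample's
        # mean, then read the bounds off the sorted means.
        n = len(data)
        means = []
        for _ in range(iterations):
            sample = [random.choice(data) for _ in range(n)]
            means.append(sum(sample) / n)
        means.sort()
        k = int((1 - confidence) / 2 * iterations)
        return means[k], means[-k - 1]

    data = [random.random() for _ in range(100000)]
    print(confidence_interval(data))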

It turns out that doing 1,000 iterations on a reasonable data set (say, 100k points) is a fairly expensive operation: about 2 seconds (ish) on my laptop. That’s not too bad in isolation, but run every 15 minutes, on a variety of data sizes up to 1 million points, over all customers here at AppNeta? We can do much better.

Since this is Python, let’s see if there’s an obvious way to do it. As it turns out, there is: the standard library module cProfile. This is Python’s flexible, deterministic, C-implemented profiler. (There’s also profile, a pure-Python version, but unless you have good cause to avoid C extensions, cProfile is probably a better bet.) cProfile will give us a list of all function calls, sorted however we want. Let’s run it, and dump the result to a file:
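
Something like this does the job (a sketch, reusing the illustrative names from above):

    import cProfile
    import pstats

    # Run the function under the profiler and dump raw stats to a file.
    cProfile.run('confidence_interval(data)', 'ci.prof')

    # Load the dump and print the most expensive functions first.
    pstats.Stats('ci.prof').sort_stats('time').print_stats(10)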

[cProfile output: call counts and times for every function, sorted by time]

There are a few important things to take note of in these results. The first is that 3 of the top 4 functions are in the random module. We only use the random module in two places in our example, and the initial use (generating the data set) accounts for a constant 100,000 calls. That leaves 4 million calls coming from the inner loop of confidence_interval. Since that’s a core part of the algorithm, I wonder if there’s a better way to do it?

For a start, let’s try using a different random number generator. As a rule, if you have a slow function in Python and numpy has a replacement, the numpy version will be faster. Let’s replace random with numpy.random, and try again:
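
The change might look like this (again a sketch; note the two lines in the inner loop, the selection and the array creation, which will matter in a moment):

    import numpy as np

    def confidence_interval(data, iterations=1000, confidence=0.95):
        data = np.asarray(data)
        n = len(data)
        means = []
        for _ in range(iterations):
            # Selection: draw all n random indices in one vectorized call.
            idx = np.random.randint(0, n, n)
            # Array creation: materialize the resample, then take its mean.
            sample = data[idx]
            means.append(sample.mean())
        means.sort()
        k = int((1 - confidence) / 2 * iterations)
        return means[k], means[-k - 1]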

[cProfile output after switching to numpy.random]

Much better! Unfortunately, now we have a different problem: instead of a specific line or function call to blame, we only know that the time is being spent somewhere inside a single function.

One way to drill down on this is to use statprof. This is a different type of profiler: instead of instrumenting each function call, statprof wakes up at pre-defined intervals and records a stack trace, including line number. Let’s run that:
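
statprof lives outside the standard library (pip install statprof), and its API is a simple start/stop pair (a sketch, same illustrative names):

    import statprof

    # Sample the stack at regular intervals while the function runs.
    statprof.start()
    try:
        confidence_interval(data)
    finally:
        statprof.stop()

    # Print the collected samples, attributed to file and line.
    statprof.display()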

[statprof output: samples attributed to individual lines]

Cool! Looking back at the code, the two lines statprof points at (44 and 45 in my file) are the selection and the array creation; in the sketch above, those are the np.random.randint call and the data[idx] indexing.

Interestingly, the absolute time reported by statprof is a bit lower than cProfile’s. That difference is due to the statistical nature of statprof: even at the default setting of 1ms samples, it simply has to do less work than cProfile, which has to instrument all 4 million calls to the various parts of random. It’s worth remembering this difference in overhead, as some code will exhibit different behavior as the timing changes.

In any case, can we do better? It’s possible that the time here is no longer being spent on calculations, but instead on simply creating intermediate arrays. Let’s combine those lines, and re-run:
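
In sketch form, the two inner-loop lines collapse into a single expression:

    # Selection and array creation in one shot, with no named intermediates.
    means.append(data[np.random.randint(0, n, n)].mean())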

[statprof output after combining the lines]

That certainly seems better: 1.92s vs 2.39s (1.54 + 0.85) from above. At this point, though, be careful. Up until now, we’ve avoided directly comparing profiling runs. Looking at the percentage of time spent, we’re actually at about the same place (85% of total); this run just completed faster than the last run. This is a common pattern — as the low-hanging fruit is picked off, it becomes more and more important to validate smaller gains in more formal ways. How, exactly, to collect those stats is a topic for another post.

It’s also worth mentioning Robert Kern’s line_profiler. It’s a deterministic profiler (like cProfile), but it captures profiling data on a line-by-line basis. It also allows selectively profiling specific functions, which can be invaluable in a larger codebase for weeding out noise from other modules. Unfortunately, because it captures every execution of every line you ask it to, the instrumentation overhead can be substantial (a 4x slowdown on my laptop). For completeness, here’s what it looks like on our sample:
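
Usage is a decorator plus the kernprof driver script (a sketch; the file name is illustrative):

    # ci.py -- run with:  kernprof.py -l -v ci.py
    # (newer line_profiler releases install the driver as plain `kernprof`)

    @profile  # kernprof injects `profile` into builtins while it runs
    def confidence_interval(data, iterations=1000, confidence=0.95):
        ...  # body as before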

[line_profiler output: per-line timings for confidence_interval]

Finally, remember that profiling is just one component of a performance toolkit. Most performance problems aren’t complex; they’re just well-hidden. Before writing the above code, I hadn’t realized how expensive random number generation could be. Simply taking the time to look for a faster function saved over 90% of the time spent.

Make it work, then make it pretty, then make it fast. And with the right tools, it doesn’t even take that long.

And if you want to find other ways to optimize your code and make it fast, start using TraceView for free - sign up here.


More Stories By TR Jordan

A veteran of MIT’s Lincoln Labs, TR is a reformed physicist and full-stack hacker – for some limited definition of full stack. After a few years as Software Development Lead with Thermopylae Science and Technology, he left to join Tracelytics as its first engineer. Following Tracelytics’ merger with AppNeta, TR was tapped to run all of its developer and market evangelism efforts. TR still harbors a not-so-secret love for Matlab-esque graphs and half-baked statistics, as well as elegant and highly performant code. Read more of his articles at www.appneta.com/blog or visit www.appneta.com.
