Periodically I get back to profiling code, or at least some of it. Profiling is a tricky art: I do not claim expertise, but it is something that I come back to every so often.
In a previous project I used strace to follow the ins and outs of a very long-running process (about 8 hours) of Python wrapped around C++, before a set of Perl scripts aligned the timings and graphed the output. The overhead of strace on the underlying processes was a little heavy, but it served its purpose.
In a more recent project, memory_profiler was used to measure the memory usage of a long-running process. From this, we observed the same results as Evan Jones in his Improving Python’s Memory Allocator paper and his excellent notes on Python’s memory usage in the source code for lists and other objects.
Currently I am working on some Linked Data code and got interested in the memory usage of the scripts. I used memory_profiler to look at how the overall script’s memory was behaving, and it helped me to find a couple of less than efficient uses of memory in a back-end script that is being designed on its own, and to create new graphs. memory_profiler can also be used as a decorator (sketched below), but running

    mprof run /path/to/script

followed by

    mprof plot

is easier to remember.
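For function-level, line-by-line figures, the decorator route looks something like the sketch below, where the function name and the workload are my own illustrations rather than anything from my actual scripts:

    from memory_profiler import profile

    @profile
    def build_index():
        # Allocate a large list so there is something visible to measure
        squares = [n * n for n in range(10**6)]
        return sum(squares)

    if __name__ == "__main__":
        build_index()

Running the script as normal then prints the memory increment for each line of the decorated function.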
As the script calls remote services, I became more interested in the timings and used Python’s time module to capture them. I did have a look at the timeit module but could not see an obvious way of using it as a wrapper around a specific function.
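One way of using time as a wrapper is a small decorator along these lines; a minimal sketch, where the decorator name and the timed function are illustrative:

    import time

    def timed(func):
        # Print the wall-clock time taken by a single call to func
        def wrapper(*args, **kwargs):
            start = time.time()
            result = func(*args, **kwargs)
            print("%s took %.3fs" % (func.__name__, time.time() - start))
            return result
        return wrapper

    @timed
    def fetch_remote_data():
        time.sleep(0.5)  # stands in for a call to a remote service

    fetch_remote_data()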
My hunts brought me to the cProfile module, which provides not only the overall time for the complete run but a breakdown of each part, such as a method or class, and its time. For now the times are small and the code is in its early stages, but this is clearly a very useful module for tracking down some of the accesses. As part of the reading on this, I came across Julien Danjou’s blog post, which also mentions KCacheGrind and visualising the call tree. (It reminds me of tracing C and C++ dependencies using the Debian tool, debtree.)
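cProfile can also be driven from inside the code; a minimal sketch, where main() merely stands in for a script’s real entry point:

    import cProfile
    import pstats

    def main():
        sum(n * n for n in range(10**6))  # placeholder workload

    # Run the call under the profiler and dump the raw statistics to a file
    cProfile.run("main()", "profile.out")

    # Load the statistics and show the ten most expensive calls
    stats = pstats.Stats("profile.out")
    stats.sort_stats("cumulative").print_stats(10)

If I remember Julien’s post correctly, a dump like profile.out is also what pyprof2calltree can convert for viewing in KCacheGrind.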
That I can simply run it off the command line:

    python -m cProfile /path/to/script

is a definite winner for my purposes, as I can then chain it with memory_profiler.
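One plausible way of chaining the two, assuming mprof run accepts a full command (recent memory_profiler releases do), is:

    mprof run python -m cProfile -o profile.out /path/to/script
    mprof plot

with cProfile’s -o flag sending its statistics to a file so that the two sets of output stay separate.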
The goal is not to try to apply the optimisations yet but to keep these in mind whilst developing the initial code. Part of me does wonder if it is better to have a set of scripts that can run as part of the tests, rather than waiting until the end of the development cycle. Whilst I am clearly not a profiling expert, I am glad that there are these tools around to help look under the bonnet and discover what is really going on, so that I can find and correct any serious errors early on.