
Wed 13 January 2010

Python, Multiprocessing, Hyperthreading, and image resizing

I have the occasional need to resize a set of images. I used to use Photoshop batch actions, then I used some droplets, and recently I've been using a simple Python script with PIL (the Python Imaging Library).

We recently got an 8-core Mac Pro, and I wanted to see if I could take better advantage of all those cores when resizing images.

One of the things that confused me when I first opened Activity Monitor is that it listed 16 processors. A little digging turns up that this comes from Intel's Hyperthreading technology, which presents the OS with twice the number of physical cores and handles some extra concurrency on the chip. There is some debate out there as to whether or not it makes a difference.
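
Incidentally, Python sees the same thing: multiprocessing counts logical cores, so a quick check from the interpreter on this machine would report all 16 virtual processors:

```python
>>> import multiprocessing
>>> multiprocessing.cpu_count()  # logical (hyperthreaded) cores, not physical
16
```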

It's frustrating to have all that power and watch a CPU utilization tool look like this:

![Single](https://ptone.com/dablog/wp-content/uploads/2010/01/single.png)

But that is what you get when a tool isn't written to work in parallel across processing units. Much is made of Python's threading limitations and the GIL, but multiple threads never seem to take as much advantage of multicore horsepower as multiple processes do.

Thanks to Python's [multiprocessing library](http://docs.python.org/library/multiprocessing.html) it was very easy to create a worker [pool](http://docs.python.org/library/multiprocessing.html#module-multiprocessing.pool) to handle the resizing. The results are impressive. The test task was to resize a folder of 350 JPEG files by 50% and save them to another folder.
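
The full script is linked at the end of this post; here is a minimal sketch of the approach, with illustrative paths and a hard-coded pool size (classic PIL would use a bare `import Image`):

```python
import os
from multiprocessing import Pool

from PIL import Image

SRC = "/path/to/originals"  # illustrative paths, not the real test folders
DST = "/path/to/resized"

def resize(filename):
    # Open one source image, halve its dimensions, and save the copy.
    im = Image.open(os.path.join(SRC, filename))
    half = (im.size[0] // 2, im.size[1] // 2)
    im.resize(half, Image.ANTIALIAS).save(os.path.join(DST, filename))

if __name__ == "__main__":
    jpegs = [f for f in os.listdir(SRC) if f.lower().endswith(".jpg")]
    pool = Pool(processes=16)  # one worker per virtual core
    pool.map(resize, jpegs)    # farm the files out across the workers
    pool.close()
    pool.join()
```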

![Performance](https://ptone.com/dablog/wp-content/uploads/2010/01/performance.png)

This shows the performance gain from doing the image resizing in parallel: the job drops from roughly a 6-minute task to a 30-second task.

What is interesting is that even though there are only 8 physical cores, using all 16 virtual cores gives a 40% increase in speed, while going any higher than that yields almost no advantage. I'd say hyperthreading makes a difference in this case.
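
If you want to reproduce the comparison, a rough sketch of the timing loop might look like this (it reuses the hypothetical `resize()` and `jpegs` from the sketch above; the real measurements came from the script linked at the end):

```python
import time
from multiprocessing import Pool

def time_pool(workers, files):
    # Resize every file using the given pool size and return elapsed seconds.
    start = time.time()
    pool = Pool(processes=workers)
    pool.map(resize, files)
    pool.close()
    pool.join()
    return time.time() - start

for n in (1, 2, 4, 8, 16, 32):
    print(n, "workers:", round(time_pool(n, jpegs), 1), "seconds")
```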

Now this is more like it:

![Maxed](https://ptone.com/dablog/wp-content/uploads/2010/01/maxed.png)

A couple of assorted notes:

With 8 workers, there is roughly a 10% performance increase when hyperthreading is turned off (via the Processor preference pane that ships with the developer tools). It may be that the overhead of managing virtual cores that aren't doing much for you detracts from overall performance, though I'm not sure.

Photoshop's batch automation on the same task takes about 6 minutes, which seems to refute Adobe's [implication](http://blogs.adobe.com/jnack/2006/12/photoshop_and_multicore.html) that multiprocessing doesn't often gain you much.

There is one gotcha, however: there does seem to be some sort of memory leak in the multiprocessing module. With just one worker in the pool, you can see steadily increasing memory use that isn't present when the same PIL code is run without the multiprocessing module. This is probably a manifestation of [this bug](http://bugs.python.org/issue6653).
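
One possible mitigation, assuming a newer Python (the `maxtasksperchild` argument arrived in 2.7/3.2): have the pool retire each worker after a fixed number of tasks, so any leaked memory is reclaimed when the old process exits:

```python
from multiprocessing import Pool

# Replace each worker process after 50 tasks; the OS reclaims whatever
# memory the retired process leaked.
pool = Pool(processes=16, maxtasksperchild=50)
```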

Finally, the script that I used to do the tests is available [here](http://gist.github.com/276618).

