On the performance of Python longs being used as bitmaps
A comment on the previous entry got me interested in what exactly costs all the time when using Python longs as bitmaps. Some poking shows that the time & takes scales with the smaller long involved, and the time | takes scales with the larger long involved. While this is not particularly surprising, it means that setting and clearing any bit has a cost based on how big the bitmap is, i.e., what the largest set bit is. (Checking low bits in a large bitmap is relatively cheap, since one side of the & will be small.)
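This scaling is easy to see directly. Here's a rough sketch of my own (not the entry's benchmark) that times & and | with one very large operand and one tiny one; the sizes and iteration counts are arbitrary choices:

```python
import timeit

big = 1 << 100000    # a bitmap roughly 100,000 bits wide
small = 1 << 10      # a tiny one

# & only has to walk the digits of the smaller operand, while |
# must build a result as wide as the larger operand, so | should
# come out noticeably slower here.
and_time = timeit.timeit(lambda: big & small, number=100000)
or_time = timeit.timeit(lambda: big | small, number=100000)

print("&:", and_time)
print("|:", or_time)
```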
There are two core performance issues with longs as bitmaps: there is no bit addressing, and you cannot mutate a long in place, so any change to your bitmap requires allocating a new object. The first is the more important of the two, because it means that you are making longs left and right and then doing big, expensive operations merely to deal with one bit.
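The no-in-place-mutation point can be illustrated in a few lines (my example, not code from the entry): "setting a bit" actually copies the entire bitmap into a fresh object.

```python
bmap = 1 << 1000          # a 1001-bit bitmap
old = bmap

# "Set bit 3": this builds a whole new long; the original is
# untouched, and we paid to copy every word of it.
bmap = bmap | (1 << 3)

print(bmap is old)        # distinct objects
print(old == 1 << 1000)   # the original bitmap is unchanged
```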
While Python could in theory add bit addressing to longs (allowing you to get and set a specific bit), I doubt it will, since I suspect that the need is relatively esoteric and you can probably do about as well with a list of Python integers (or a C level module if you really need speed).
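One way to picture the list-of-integers alternative (a sketch of my own, with an arbitrary word size, not code from the entry): split the bitmap into fixed-width words so that changing a bit only rebuilds one small integer instead of the whole thing.

```python
WORD_BITS = 32  # word size is an arbitrary choice for this sketch

class ListBitmap(object):
    """A bitmap stored as a list of small Python ints, one word each."""

    def __init__(self, nbits):
        nwords = (nbits + WORD_BITS - 1) // WORD_BITS
        self.words = [0] * nwords

    def set(self, bit):
        # Only the one affected word is replaced, so the cost no
        # longer depends on how large the whole bitmap is.
        self.words[bit // WORD_BITS] |= 1 << (bit % WORD_BITS)

    def clear(self, bit):
        self.words[bit // WORD_BITS] &= ~(1 << (bit % WORD_BITS))

    def test(self, bit):
        return bool(self.words[bit // WORD_BITS] & (1 << (bit % WORD_BITS)))

b = ListBitmap(10000)
b.set(9999)
b.set(3)
b.clear(3)
print(b.test(9999), b.test(3))
```

This trades the convenience of plain long operations for set/clear costs that are independent of the bitmap's overall size.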
Sidebar: some numbers
Here's a performance chart for bmap & ~(1L << bit):

bit | time (µsec) | doing the << | doing the ~ | doing the &
The percentages are the relative proportions of the overall time taken by the left shift, the inversion, and the binary AND. Timings are from Python 2.4.4 on a 64-bit Fedora Core 6 machine with an AMD Athlon 4600+ CPU.
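The chart's columns can be reproduced roughly like this (a sketch of my own using modern int syntax, since the 1L form is Python 2 only; the bitmap size and bit position here are arbitrary):

```python
import timeit

bit = 1000
bmap = (1 << 5000) - 1   # an example bitmap with bits 0..4999 set

# Time the whole bit-clearing expression, then each of its three
# pieces separately, matching the chart's column breakdown.
whole = timeit.timeit(lambda: bmap & ~(1 << bit), number=10000)
shift = timeit.timeit(lambda: 1 << bit, number=10000)

mask = 1 << bit
invert = timeit.timeit(lambda: ~mask, number=10000)

inverted = ~mask
anding = timeit.timeit(lambda: bmap & inverted, number=10000)

print("<<:", shift, "~:", invert, "&:", anding, "total:", whole)
```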