== A nice illustration of the cost of creating and destroying objects in Python

Here are two functions ([[they may look familiar IdiomStrangenessII]]):

> def xiter(max):
>     for _ in xrange(max):
>         pass
>
> from itertools import repeat
> def riter(max):
>     for _ in repeat(None, max):
>         pass

The first thing that makes these functions interesting is that _xiter()_ takes about 1.25 times as long to run as _riter()_, once you test with a large enough _max_ to drown out startup effects. This is interesting because the two functions are almost identical, both in generated bytecode and in what happens at runtime. Both _xrange()_ and _repeat()_ create and return iterator objects, which the _for_ loop then traverses. Both iterator objects are implemented in C, as are _xrange()_ and _repeat()_ themselves, and the remaining operations are the same in both functions.

In short, the difference between these two functions comes down to how the two iterator objects behave. The big difference is object allocation and deallocation: _repeat()_'s iterator returns the same pre-existing object _max_ times, while _xrange()_'s iterator returns _max_ different integer objects, most of which are created and then destroyed.

While it's possible that the C code for one is much worse than the C code for the other, it's reasonable to assume that this 1.25x performance difference between the two functions is entirely due to the extra overhead of allocating and deallocating all of those integer objects. This is about as pure an illustration of the cost of object creation and deletion as you could ask for.

(Note that the performance difference is not due to the overhead of having a lot of integer objects around at once. Only one integer object is alive at any given time in the _xiter()_ version.)
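
If you want to see the difference yourself, here is a minimal timing sketch (my own, not part of the original comparison) using the standard _timeit_ module. The functions above are Python 2 code (_xrange()_), so the small compatibility shim lets the same sketch also run under Python 3, where _range()_ plays the same role; the loop count _N_ is an arbitrarily chosen value that is simply large enough to swamp startup effects.

> from __future__ import print_function
> import timeit
> from itertools import repeat
>
> try:
>     xrange
> except NameError:
>     xrange = range   # Python 3: range fills xrange's role here
>
> def xiter(max):
>     # every iteration hands back a fresh integer object
>     # (apart from the handful of small cached ints)
>     for _ in xrange(max):
>         pass
>
> def riter(max):
>     # every iteration hands back the same pre-existing None object
>     for _ in repeat(None, max):
>         pass
>
> N = 10 * 1000 * 1000   # assumed loop count; big enough to drown out startup costs
> for fn in (xiter, riter):
>     print(fn.__name__, timeit.timeit(lambda: fn(N), number=5))

On either interpreter you should see _riter()_ finish noticeably faster, although the exact ratio will depend on your Python version and build.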