Wandering Thoughts archives


Keeping up with new Python features

I have a new resolution: every so often, I'm going to read over the current builtins section of the Python documentation.

I've come to this because recently I was reading through a page on Python idioms to see if it had anything new, and stumbled over the mention of an enumerate() builtin, new in Python 2.3. Well, I'm using Python 2.3, and I hadn't remembered enumerate(), and I could have used it recently. Whoops.
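(For anyone else who missed it: a small sketch of what enumerate() saves you, using a made-up list as illustration. Before 2.3 you had to carry the index yourself.)

```python
colors = ["red", "green", "blue"]

# The old idiom: index into the list with an explicit counter.
pairs_old = []
for i in range(len(colors)):
    pairs_old.append((i, colors[i]))

# enumerate() (new in Python 2.3) yields (index, item) pairs directly.
pairs_new = list(enumerate(colors))

assert pairs_old == pairs_new  # both are [(0, 'red'), (1, 'green'), (2, 'blue')]
```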

I do try to keep up with release notes and other sources of Python news and discussion (eg, Planet Python). But it's easy to forget about smaller things (or only remember them vaguely) in the time between when I read about a new bit and when I can actually use it. Clearly I need to give myself a refresher every so often.

(If I were really ambitious I would periodically scan the entire Python Library Reference, at least reading the one-sentence description of each module. I don't think I'm that energetic, though.)

python/KeepingUp written at 17:45:28

The other dynamic linking tax

I've already talked about one dynamic linking tax, but here's another one. Presented in illustrated form:

; cat true.c
#include <stdlib.h>
int main(int argc, char **argv)
{
        exit(0);
}
; cc -o true true.c
; cc -o true-s -static true.c
; diet cc -o true-d true.c
; ls -s true true-d true-s
  8 true    4 true-d  388 true-s
; strace ./true >[2=1] | wc -l
; strace ./true-s >[2=1] | wc -l
; strace ./true-d >[2=1] | wc -l

This is on a Fedora Core 2 machine. On a Fedora Core 4 machine the dynamically linked version makes 22 syscalls and the statically linked glibc version makes nine.

strace's output always includes the initial execve() that starts the traced program, and we're explicitly calling exit(), so the dietlibc version is making the minimum number of system calls possible. Everyone else is adding overhead; in the case of dynamic linking, quite a lot of it.

This makes a difference in execution speed too. The dynamically linked glibc version runs 1.38 to 1.47 times slower than the dietlibc version, and the statically linked version 1.06 times slower. Admittedly this is sort of a micro-benchmark; most real programs do more work than this before exiting.

I ran into this while trying to measure the overhead of a program that I wanted to be as lightweight and fast as feasible. strace turned up rather alarming numbers for the overhead involved in glibc (although I believe strace itself enlarges the cost of system calls, so I'm not going to cite absolute numbers). So far I am being good and resisting the temptation to statically link it with dietlibc.

Sidebar: just what's going on with glibc?

The statically linked glibc version also calls uname() and brk() (twice). The dynamically linked version, well, let's let a table show the story:

calls  syscall
    5  old_mmap
    3  open
    2  mprotect
    1  munmap
    1  read
    2  fstat64
    1  uname
    2  close
    1  set_thread_area
    1  brk

This table does not count the initial execve() or the final exit_group() (which glibc calls instead of exit()).

(Again, this is on a Fedora Core 2 machine. Your mileage will differ with different glibc versions. On FC4 the statically linked glibc version does a uname, 4 brks, and a set_thread_area.)

linux/DynamicLinkingTaxII written at 02:19:14

