2015-07-15
Eating memory in Python the easy way
As a system administrator, every so often I need to put a machine under the stress of having a lot of its memory used. Sometimes this is for testing how things respond to this before it happens during live usage; sometimes this is because putting a system under memory stress can cause it to do important things it doesn't otherwise do (such as reclaim extra memory). The traditional way to do this is with a memory eater program, something that just allocates a controlled amount of memory and then (usually) puts actual data in it.
(If you merely allocate memory but don't use it, many systems don't consider themselves to be under memory stress. Generally you have to make them use up actual RAM.)
In the old days, memory eater programs tended to be one-off things written in C; you'd malloc() some amount of memory then carefully write data into it to force the system to give you RAM. People who needed this regularly might keep around a somewhat more general program for it. As it turns out, these days I don't need to go to all of that work because interactive Python will do just fine:
$ /usr/bin/amd64/python2.6
[...]
>>> GB = 1024*1024*1024
>>> a = "a" * (10 * GB)
Voila, 10 GB eaten. Doing this interactively gives me great flexibility; for instance, I can easily eat memory in smaller chunks, say 1 GB at a time, so that I have more control over exactly when the system gets pushed hard (instead of perhaps throwing it well out of its comfort zone all at once).
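(A sketch of the chunk-at-a-time version, with illustrative names and the chunks typed one at a time as you want to ratchet up the pressure:

>>> GB = 1024*1024*1024
>>> chunks = []
>>> chunks.append("a" * GB)
>>> chunks.append("a" * GB)

Each append eats another GB; dropping the list frees it all again.)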
There are some minor quibbles you can make here; for example, I'm not using exactly 10 GB of memory, since Python has some small overhead for objects and so on. And in Python 3 you probably want to specifically use bytestrings, not the default Unicode strings.
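(The Python 3 version I'd expect to use is just the bytestring variant of the same thing; b"a" gives you actual bytes, one per element:

>>> GB = 1024*1024*1024
>>> a = b"a" * (10 * GB)

)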
In practice I don't care about the quibbles because this is close enough for me and it's really convenient (and flexible), far more so than writing a C program or re-finding the last one I wrote for this.
(If CPython allocates much additional internal memory to create this 10 GB string, it's not enough to be visible on the scale of GBytes of RAM usage. I tried a smaller test and didn't see more than perhaps a megabyte or two of surprising memory usage, but in general if you need really fine control over memory eating you're not going to want to use Python for it.)
PS: It makes me unreasonably happy to be able to use Python interactively for things like this, especially when they're things I might have had to write a C program for in the past. It's just so neat to be able to just type this stuff out on the fly, whether it's eating memory or testing UDP behavior.
Mdb is so close to being a great tool for introspecting the kernel
The mdb debugger is the standard debugger on Solaris and Illumos systems (including OmniOS). One very important aspect of mdb is that it has a lot of support for kernel 'debugging', which for ordinary people actually means 'getting detailed status information out of the kernel'. For instance, if you want to know a great deal about where your kernel memory is going, you're going to want the '::kmastat' mdb command.
Mdb is capable of some very powerful tricks because it lets you compose its commands together in 'pipelines'. Mdb has a large selection of things to report information (like the aforementioned ::kmastat) and things to let you build your own pipelines (eg walkers and ::print). All of this is great, and far better than what most other systems have.
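(As a stock illustration of a pipeline, rather than anything specific to this entry, walking every kernel thread and printing a stack trace for each one looks like:

> ::walk thread | ::findstack

)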
Where mdb sadly falls down is that this is all it has; it has no scripting or programming language. This puts an unfortunate hard upper bound on what you can extract from the kernel via mdb without a huge amount of post-processing on your part. For instance, as far as I know a pipeline can't have conditions or filtering that would let you further process only selected items that one stage of a pipeline produces. In the case of listing file locks, you're out of luck if you want to work on only selected files instead of all of them.
I understand (I think) where this limitation comes from. Part of it is probably simply the era mdb was written in (which was not yet a time when people shoved extension languages into everything that moved), and part of it is likely that the code of mdb is also much of the code of the embedded kernel debugger kmdb. But from my perspective it's also a big missed opportunity. An mdb with scripting would let you filter pipelines and write your own powerful information dumping and object traversal commands, significantly extending the scope of what you could conveniently extract from the kernel. And the presence of pipelines in mdb shows that its creators were quite aware of the power of flexibly processing and recombining things in a debugger.
(Custom scripting also has obvious uses for debugging user level programs, where a complex program may be full of its own idioms and data structures that cry out for the equivalent of kernel dcmds and walkers.)
PS: Technically you can extend mdb by writing new mdb modules in C, since they're just .so's that are loaded dynamically; there's even a more or less documented module API. In practice my reaction is 'good luck with that'.