A fun problem: monitoring randomness reduces it
The Linux kernel exports a /proc entry that reports how many bits of entropy are in the kernel's randomness pool, /proc/sys/kernel/random/entropy_avail. Recently, Matt Simmons noticed that his server monitoring was watching this value, and that when he checked the /proc entry by hand the reported entropy went down. It turns out that this was not random fluctuation, and the explanation for it is rather interesting, as pointed out by @rejectreality on Twitter.
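For concreteness, the entry is just a small text file with a single number in it; a minimal Python sketch of checking it by hand:

    # Print the kernel's current estimate of the entropy in the
    # pool; the file holds a single integer, in bits.
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        print(f.read().strip(), "bits of entropy available")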
As a security measure, modern versions of Linux randomize various parts of a process's memory space; this goes by the general name of ASLR (address space layout randomization) and especially affects the stack. For this randomness to be really effective it has to be unpredictable, which means that it can't be based on any of the usual simple seeds for pseudo-random generators, like the time of day the process started or its PID. Instead, it turns out that the kernel exports some randomness to user space processes when it execs them.
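One concrete form this exported randomness takes is the AT_RANDOM entry in the new process's ELF auxiliary vector: sixteen random bytes that the C library uses for things like its stack canary. As an illustration, here's a minimal Python sketch of a Linux process fishing its own AT_RANDOM bytes out of /proc/self/auxv (the AT_RANDOM tag value of 25 comes from <elf.h>):

    import ctypes, struct

    AT_RANDOM = 25  # auxv tag from <elf.h>: address of 16 random bytes

    # /proc/self/auxv is a binary array of (type, value) pairs of
    # native unsigned longs, terminated by an AT_NULL pair.
    word = struct.calcsize("L")
    with open("/proc/self/auxv", "rb") as f:
        data = f.read()

    for off in range(0, len(data) - word, 2 * word):
        a_type, a_val = struct.unpack_from("LL", data, off)
        if a_type == AT_RANDOM:
            # a_val is the address of the sixteen random bytes the
            # kernel handed this process when it was exec'd.
            buf = (ctypes.c_char * 16).from_address(a_val)
            print(buf.raw.hex())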
You can see where this is going. This exported randomness comes from the kernel's general random number infrastructure, and so it winds up depleting some of the entropy from the entropy pool. So if you run a program to read out the value of entropy_avail, the very act of doing so will deplete some of that entropy. If you repeatedly run the program you'll repeatedly lower the entropy (although repeatedly reading the entropy value from within one program won't do it). Of course this isn't unique to looking at entropy_avail; running pretty much any program will deplete some entropy, no matter what it does.
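You can watch the difference directly. Here's a sketch that compares re-reading entropy_avail from within one process against paying for a fresh exec on every read by running cat each time; on kernels of this era you'd expect the second list of values to drift downward while the first stays roughly flat:

    import subprocess

    PATH = "/proc/sys/kernel/random/entropy_avail"

    def read_in_process():
        # No exec involved, so no fresh per-process randomness is drawn.
        with open(PATH) as f:
            return int(f.read())

    def read_via_exec():
        # Each cat is a new exec, so the kernel hands it fresh
        # randomness (AT_RANDOM and so on) out of the pool.
        return int(subprocess.check_output(["cat", PATH]))

    print("in-process:   ", [read_in_process() for _ in range(5)])
    print("exec per read:", [read_via_exec() for _ in range(5)])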
(It turns out that this has apparently led to some problems. Indeed, most of our active servers turn out to have very low entropy_avail values. Hopefully people aren't doing much on them that requires strong randomness.)