What the unified buffer cache is unifying
Pretty much all Unixes these days have what is called a unified buffer cache. If you don't know some Unix history, this name is a little puzzling, because what is it unifying?
The simple explanation is that originally Unix had the buffer cache, which cached blocks of recent disk IO (whether directly from user processes via read() and write(), or from internal kernel IO), and process virtual memory, used for the code and data of running processes. The buffer cache was statically sized, generally at 10% of the physical RAM in the machine; 'ordinary RAM', used for processes, was whatever you had left after the kernel was done allocating its data structures (including the buffer cache).
(This split made sense in a world without mmap() because there was a clear separation between 'process virtual memory', only directly used by processes, and 'disk IO buffers', which were only directly used by the kernel.)
The static sizing of the buffer cache didn't please people any more than simple swap space assignment did, since it was quite possible to have significant amounts of your system's RAM be inactive and thus more or less wasted. Researchers and Unix vendors promptly got to work on unifying the buffer cache and process virtual memory, so that they both shared the same pool of RAM and each could potentially use (nearly) all of it.
(I believe the first vendor to deliver a system with a unified buffer cache was Sun with SunOS 4, which also introduced mmap(). You could say that mmap() forced Sun's hand on this, since systems with mmap() more or less erase the clear distinction between process virtual memory and kernel level disk IO buffers by having some pages that are both.)