What 32-bit x86 Linux's odd 896 MB kernel memory boundary is about
July 27, 2012
Back in my entry on how the Linux kernel divides up your RAM I described the somewhat odd split the 32-bit x86 Linux kernel uses. In 32-bit x86 kernels, the first 896 MB of RAM is 'low' memory that the kernel can use directly, while everything above that boundary is 'high' memory that needs special handling. Today I want to explain where that odd 896 MB boundary comes from.
In the beginning, Linux only ran on (32-bit) x86 machines and the machines had a small amount of memory by modern standards. This led Linux to take some convenient shortcuts, including how it handled the problem of getting access to physical memory.
The entire Linux kernel has a 1 GB address space (embedded at the top of every process's 4 GB address space; see here and here for more discussion). Since even 512 MB of RAM was an exceptional amount of memory back in the early days, Linux took the simple approach of directly mapping physical RAM into the kernel address space. The kernel reserved 128 MB of address space for itself and for mapping PCI devices and the like, and used the rest of the 1 GB for RAM; this allowed it to directly map up to 896 MB of memory.
(I don't know why the specific split was chosen. Possibly it was felt that 128 MB was a good round number for the kernel's own usage.)
After a while it became obvious that direct mapping alone wasn't good
enough (partly because of increased PC memory and I think also partly
because Linux was being ported to non-x86 machines that couldn't do
this). On 32-bit x86, the solution was to create a second zone which
would use explicit mappings created on the fly; this is the 'highmem'
zone, while the directly mapped memory below 896 MB became 'lowmem'.
(More technical details and background are in linux-mm.org's HighMem page.)
If the Linux kernel people were redoing these decisions from scratch
I don't know if they'd keep the direct linear mapping of the 32-bit
kernel or just use explicit mappings for everything; explicit mappings
are more general, but the direct mapping is simpler and faster.
(To answer an obvious question: if I'm reading the kernel documentation correctly, the 64-bit x86_64 kernel directly maps all 64 TB of possible physical memory into the kernel address space. See Documentation/x86/x86_64/mm.txt. I suspect that this is a pretty safe decision.)