What 32-bit x86 Linux's odd 896 MB kernel memory boundary is about
Back in my entry on how the Linux kernel divides up your RAM I described the somewhat odd split the 32-bit x86 Linux kernel uses. In 32-bit x86 kernels, the Normal zone is only memory up to 896 MB and the HighMem zone is everything above it. The reasons for this are rooted in history and the 32-bit (kernel) memory map.
In the beginning, Linux only ran on (32-bit) x86 machines, and those machines had small amounts of memory by modern standards. This led Linux to take some convenient shortcuts, including in how it handled the problem of getting access to physical memory.
The entire Linux kernel has a 1 GB address space (embedded at the top of every process's 4 GB address space; see here and here for more discussion). Since even 512 MB of RAM was an exceptional amount of memory back in the early days, Linux took the simple approach of directly mapping physical RAM into the kernel address space. The kernel reserved 128 MB of address space for itself and for mapping PCI devices and the like, and used the rest of the 1 GB for RAM; this allowed it to directly map up to 896 MB of memory.
(I don't know why the specific split was chosen. Possibly it was felt that 128 MB was a good round number for the kernel's own usage.)
After a while it became obvious that direct mapping alone wasn't good
enough (partly because of increased PC memory and I think also partly
because Linux was being ported to non-x86 machines that couldn't do
this). On 32-bit x86, the solution was to create a second zone which
would use explicit mappings created on the fly; this is the HighMem
zone. Obviously the new zone starts at the point where RAM can't be
directly mapped any more, ie at 896 MB.
(More technical details and background are in linux-mm.org's HighMem page.)
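The 'explicit mappings created on the fly' work roughly like multiplexing a small window of kernel virtual addresses over many physical pages. Here is a deliberately toy model of that idea; every name, size, and address in it is illustrative only, and the real kernel's kmap() machinery is considerably more involved (it manipulates page tables and can sleep waiting for a free slot):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE   4096UL
#define NSLOTS      4             /* tiny mapping window for the sketch */
#define WINDOW_BASE 0xFF800000UL  /* pretend virtual base of the window */

static unsigned long slot_phys[NSLOTS];  /* 0 = slot free */

/* Map a high physical page into the window; returns the virtual
   address it is now visible at, or 0 if every slot is busy. */
unsigned long map_high_page(unsigned long phys) {
    for (size_t i = 0; i < NSLOTS; i++) {
        if (slot_phys[i] == 0) {
            slot_phys[i] = phys;  /* the real kernel writes a PTE here */
            return WINDOW_BASE + i * PAGE_SIZE;
        }
    }
    return 0;  /* caller must wait; the real kmap() sleeps instead */
}

void unmap_high_page(unsigned long virt) {
    slot_phys[(virt - WINDOW_BASE) / PAGE_SIZE] = 0;
}
```

The important property this captures is that, unlike the Normal zone's fixed linear mapping, a HighMem page has no permanent kernel virtual address: it only gets one for as long as someone has it mapped, and the window is a scarce shared resource.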
If the Linux kernel people were redoing these decisions from scratch I don't know if they'd keep the direct linear mapping of the 32-bit Normal zone, or if they'd simplify life by making all 32-bit memory be HighMem memory. These days many x86 machines that are still running in 32-bit mode have several GB of memory, so most of their RAM is already being mapped in and out of the kernel address space.
(To answer an obvious question: if I'm reading the kernel documentation correctly, the 64-bit x86_64 kernel directly maps all 64 TB of possible physical memory into the kernel address space. See Documentation/x86/x86_64/mm.txt. I suspect that this is a pretty safe decision.)