== Notes on what Linux's _/proc/<pid>/smaps_ fields mean

Because I was just digging around in the kernel source to determine this (it's a long story), here are some notes about what the fields of the _smaps_ file mean and how they're calculated. The factory for this particular sausage is ``fs/proc/task_mmu.c'' (at least as of the current git tree).

For each VMA mapping that gets listed in _smaps_, the kernel walks all of the PTEs associated with it and looks at all of the known pages. Each PTE is then counted up:

* the full PTE size is counted as Rss.
* if the page has been used recently, it's added to Referenced.
* if the page is mapped in only one process, it is labeled as private and its full size is added to Pss.
* if the page is mapped in more than one process, it is shared and the amount it adds to Pss is divided by the number of processes that have it mapped.

(If the PTE is for something in swap, it only adds to the Swap size.) There is a rough sketch of this accounting in code at the end of these notes.

Note that a 'private' page is not quite as private as you might think. Because processes map pages independently of each other, it's possible to have a shared page that is currently mapped only by a single process (eg, only one process may have called an obscure libc function recently); such pages are counted as 'private'.

The Size of a mapping is how much address space it covers. If the mapping has been locked into memory via _mlock()_ or the like, Locked is the same as Pss (ie, it is this process's fair share of the amount of locked memory for this mapping); otherwise it is 0 kB.

Given that looking at _smaps_ requires walking the pages of all of the VMAs, I suspect that it's a reasonably costly operation. It'd probably be a bad idea to build a tool that did it a lot, especially if the tool scanned all processes in the system. ([[Smem http://www.selenic.com/smem/]] uses _smaps_, but it doesn't normally run repeatedly in the way that, say, _top_ does.)
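
(As promised, here is the sketch.) To make the Pss arithmetic above concrete, this is a toy model of the per-PTE accounting in C. It is not the kernel's actual code (that lives in ``fs/proc/task_mmu.c'' and works on real page tables and page structures); the struct names, the fields, and the hard-coded 4 kB page size are all made up here purely for illustration.

    /*
     * Toy model of the per-PTE accounting described above. Purely
     * illustrative; the real walker is in fs/proc/task_mmu.c.
     */
    #include <stdio.h>

    #define PAGE_SIZE 4096UL

    /* What we pretend the kernel knows about one PTE's page. */
    struct pte_info {
        int present;        /* the page is actually in RAM */
        int in_swap;        /* the entry refers to swap instead */
        int recently_used;  /* the page's accessed ("young") bit is set */
        int mapcount;       /* how many processes currently map the page */
    };

    /* Running totals for one VMA, all in bytes. */
    struct mem_stats {
        unsigned long rss;
        unsigned long referenced;
        unsigned long pss;
        unsigned long swap;
    };

    static void account_pte(struct mem_stats *st, const struct pte_info *pte)
    {
        if (pte->in_swap) {
            /* swapped-out entries count only towards Swap */
            st->swap += PAGE_SIZE;
            return;
        }
        if (!pte->present)
            return;

        /* every present page counts fully towards Rss */
        st->rss += PAGE_SIZE;
        if (pte->recently_used)
            st->referenced += PAGE_SIZE;

        if (pte->mapcount > 1) {
            /* shared page: charge this process its proportional share
             * (the kernel tracks Pss with extra precision to limit
             * rounding loss; this toy version just divides) */
            st->pss += PAGE_SIZE / pte->mapcount;
        } else {
            /* 'private' (mapped only here at the moment): charge it all */
            st->pss += PAGE_SIZE;
        }
    }

    int main(void)
    {
        /* one private page, one page shared four ways, one swapped out */
        struct pte_info ptes[] = {
            { .present = 1, .recently_used = 1, .mapcount = 1 },
            { .present = 1, .recently_used = 0, .mapcount = 4 },
            { .in_swap = 1 },
        };
        struct mem_stats st = { 0 };
        int i, n = sizeof(ptes) / sizeof(ptes[0]);

        for (i = 0; i < n; i++)
            account_pte(&st, &ptes[i]);

        printf("Rss: %lu kB\n", st.rss / 1024);
        printf("Pss: %lu kB\n", st.pss / 1024);
        printf("Referenced: %lu kB\n", st.referenced / 1024);
        printf("Swap: %lu kB\n", st.swap / 1024);
        return 0;
    }

Running this prints Rss of 8 kB but Pss of only 5 kB, because the process is only charged a quarter of the page that four processes share.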
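On the consumer side, here is a small C sketch that reads one process's _smaps_ and adds up the Pss lines, which is roughly the per-process 'fair share' number that a tool like smem reports. The only thing it relies on is the ``Pss: <N> kB'' line format; everything else about it is my own quick hack, not how smem actually works.

    /* Sum the Pss lines of one process's smaps file. */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char path[64], line[256];
        unsigned long kb, total = 0;
        FILE *fp;

        /* default to ourselves; pass a pid to look at another process */
        snprintf(path, sizeof(path), "/proc/%s/smaps",
                 argc > 1 ? argv[1] : "self");
        fp = fopen(path, "r");
        if (!fp) {
            perror(path);
            return 1;
        }
        while (fgets(line, sizeof(line), fp)) {
            /* each mapping has a "Pss:   <n> kB" line; sum them all */
            if (sscanf(line, "Pss: %lu kB", &kb) == 1)
                total += kb;
        }
        fclose(fp);
        printf("total Pss: %lu kB\n", total);
        return 0;
    }

Doing this once for one process is cheap enough; it's doing it repeatedly for every process on the system that I'd be wary of, for the reasons above.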