Security capabilities and reading process memory
Today, reading Google's Project Zero explanation of an interesting IE sandbox escape taught me something that I probably should have known already but had never consciously thought through. So let's start at the beginning, with capability-based security. You can read about it in more detail in that entry, but the very short version is that capabilities are tokens that give you access rights and generally are subject to no further security checks. If you've got the token, you have access to whatever it is in whatever way the token authorizes. Modern Unix file descriptors are a form of capabilities; for example, a privileged program can open a file that a less privileged one doesn't have access to and pass the file descriptor to it to give it access to that file. The less privileged program only has as much access to the file (read, write, both, etc) as is encoded in the file descriptor; if the file descriptor is read-only, that's it.
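On a Unix system, this sort of file descriptor passing can be sketched concretely. The following uses Python 3.9+'s `socket.send_fds` and `socket.recv_fds`, which wrap the kernel's SCM_RIGHTS mechanism; for brevity both ends of the socket pair live in one process, where in real life they would be a privileged parent and a less privileged child:

```python
import errno
import os
import socket
import tempfile

# Create a file with some contents, then open it read-only.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"secret data")
    path = f.name
ro_fd = os.open(path, os.O_RDONLY)

# A privileged parent and a less privileged child would each hold one
# end of this socket pair; both ends live in one process here.
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
socket.send_fds(parent, [b"here"], [ro_fd])
msg, fds, _flags, _addr = socket.recv_fds(child, 1024, 1)
received_fd = fds[0]

# The receiver can read through the descriptor it was handed...
data = os.pread(received_fd, 1024, 0)

# ...but the access rights travel with the descriptor: it is
# read-only, so writing through it fails with EBADF no matter
# who holds it.
try:
    os.write(received_fd, b"overwrite")
    write_errno = None
except OSError as e:
    write_errno = e.errno

print(data, write_errno == errno.EBADF)

os.close(ro_fd)
os.close(received_fd)
parent.close()
child.close()
os.unlink(path)
```

The key point is that the receiver never names the file or passes an access check; holding the descriptor is the authorization.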
When you design a capability system, you have to answer some questions. First, do programs hold capabilities themselves in their own memory space (for example, as unforgeable or unguessable blobs of cryptographic data), or do they merely hold handles that reference capabilities, with the kernel holding the real thing? Unix file descriptors are an example of the second option, as the file descriptor number at user level is just a reference to the real kernel information. Second, are the capabilities or handles held by the process bound to the process, or are they process-independent? Unix file descriptors are bound to processes and so your FD 10 is not my FD 10.
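The handles-bound-to-processes design can be illustrated with a toy model (all names here are hypothetical, purely for illustration): processes hold small integers, and the real capability lives in a per-process kernel table, so the same number means different things to different processes:

```python
class ToyKernel:
    """Toy sketch of kernel-held, process-bound capability handles."""

    def __init__(self):
        # (pid -> {handle -> the real capability object})
        self._tables = {}

    def grant(self, pid, capability):
        table = self._tables.setdefault(pid, {})
        handle = len(table)  # next free slot, like the lowest free fd
        table[handle] = capability
        return handle

    def resolve(self, pid, handle):
        # Every use goes back through the kernel, which looks the
        # handle up in the *caller's* table only.
        try:
            return self._tables[pid][handle]
        except KeyError:
            raise PermissionError("no such handle for this process")


kernel = ToyKernel()
h_a = kernel.grant(pid=100, capability=("file.txt", "read"))
h_b = kernel.grant(pid=200, capability=("other.txt", "write"))

# Both processes got handle 0, but they refer to different things:
print(kernel.resolve(100, h_a))
print(kernel.resolve(200, h_b))
```

Here "your FD 10 is not my FD 10" falls straight out of the data structure: a handle is meaningless without the process identity that accompanies it.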
My impression is that many strong capability-based systems answer these questions by having user processes hold as much of the capability token as possible (up to all of it), with the token not bound to the process. This is an attractive idea from the system design perspective because it means that the kernel doesn't have to have much involvement in storing or passing around capability tokens. Again, contrast this with Unix, where the kernel has to do a whole pile of work when process A passes a file descriptor to process B. The sheer work involved might well bog down a Unix system that tried to make really heavy use of file descriptor passing to implement various sorts of security.
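One way such a system can avoid kernel-side bookkeeping is to make tokens self-describing and cryptographically unforgeable. This is a hedged sketch, not any particular system's design: the kernel mints a token containing the object, the rights, and an HMAC over them under a kernel-only secret, and can later verify any token it is shown without keeping per-process (or even per-token) state:

```python
import hashlib
import hmac
import secrets

# Known only to the kernel; anyone without it cannot mint valid tokens.
KERNEL_SECRET = secrets.token_bytes(32)

TAG_LEN = 32  # SHA-256 digest size


def mint(obj: str, rights: str) -> bytes:
    """Mint a self-contained capability token the process holds itself."""
    body = f"{obj}:{rights}".encode()
    tag = hmac.new(KERNEL_SECRET, body, hashlib.sha256).digest()
    return body + tag


def check(token: bytes) -> tuple[str, str]:
    """Verify a presented token; no per-process kernel state needed."""
    body, tag = token[:-TAG_LEN], token[-TAG_LEN:]
    good = hmac.new(KERNEL_SECRET, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, good):
        raise PermissionError("forged token")
    obj, _, rights = body.decode().rpartition(":")
    return obj, rights


token = mint("file.txt", "read")
print(check(token))

# A process cannot upgrade its own rights; the HMAC no longer matches:
forged = token.replace(b"file.txt:read", b"file.txt:write")
try:
    check(forged)
except PermissionError as e:
    print("rejected:", e)
```

Passing such a token to another process is just copying bytes, which is exactly the low-overhead property described above.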
What I had not realized until I read the Google article is that in systems where processes hold capability tokens that are not bound to the particular process, being able to read a process's memory is a privilege escalation attack. If you can read the memory of a process with a capability token, you can read the token's value and then use it yourself. The system's kernel doesn't know any better; in fact, the kernel not needing to know is a significant point of the whole design.
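The escalation can be made concrete with another toy model (hypothetical names again). Here tokens are unguessable random bytes that the kernel tracks in a simple table, with no notion of which process a token belongs to. An attacker who can read the victim's memory just copies the bytes, and the kernel honors them:

```python
import secrets


class ToyKernel:
    """Toy sketch of unguessable but process-independent tokens."""

    def __init__(self):
        self._live = {}  # token bytes -> (object, rights)

    def grant(self, obj, rights):
        token = secrets.token_bytes(16)  # unguessable, but copyable
        self._live[token] = (obj, rights)
        return token

    def access(self, token):
        # The check is purely on the token's value; there is no
        # notion of which process the token "belongs" to.
        if token not in self._live:
            raise PermissionError("unknown token")
        return self._live[token]


kernel = ToyKernel()

# The victim process holds a powerful capability in its memory.
victim_memory = {"db_cap": kernel.grant("payroll.db", "read+write")}

# An attacker with a memory-disclosure bug copies the raw bytes out...
stolen = bytes(victim_memory["db_cap"])

# ...and the kernel grants the same access to whoever presents them.
print(kernel.access(stolen))
```

Nothing in `access` could distinguish the thief from the rightful holder even if it wanted to; that is the design trade-off the article describes.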
This matters for two reasons. First, reading process memory is often an explicit feature of systems, to allow for debugging (it's hard to debug otherwise). Second, in practice there have been all sorts of attacks that cause processes to leak some amount of memory to the attacker, and often these disclosure leaks are considered relatively harmless (although not always; you may have heard about Heartbleed). In a capability-based system, guarding against both of these issues clearly has to be a major security concern (and I'm not sure how you handle debugging).
(In many systems you probably don't need to use the capabilities in the process that read them out of someone's memory. If they really are process-independent, you can read them out in one process, transfer them across the Internet, and present them from a completely separate process that you establish in some other manner. All that matters is that some process can give the kernel the magic token.)