Wandering Thoughts archives

2009-07-29

The shift-selection trick in X terminal programs

Seeing this today reminded me of a relatively obscure feature and counter-feature in xterm and its imitators such as gnome-terminal and konsole.

First, the feature (sometimes 'feature'): in order to let text-mode programs still have clickable objects, xterm lets programs steal left mouse (button-1) clicks; instead of selecting text, the program gets escape sequences that tell it about the click, and it can do whatever it wants with them. The most obvious application is text-mode web browsers such as links, which use this to let you click on links.

(The Xterm Control Sequences documentation calls this 'mouse tracking'.)

This spawned an immediate demand for a counter-feature, and so in xterm and its imitators shift plus left mouse button always selects text, even if a program has put xterm in this special mouse tracking mode. In xterm, all of the usual double and triple click selection tricks work when shifted; your mileage may vary in gnome-terminal et al.
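
If you want to poke at the underlying mechanism yourself without a program like links, you can turn mouse tracking on by hand. This is just a rough sketch: the ?1000 'normal tracking' mode and its report format come from the Xterm Control Sequences documentation, and cat -v is merely a convenient way to see the raw bytes.

# turn on normal mouse tracking (button press and release reporting)
printf '\033[?1000h'
# a plain left click now arrives on cat's input as ESC [ M plus three
# bytes encoding the button and position, instead of selecting text;
# shift plus left click still selects as usual.
cat -v
# when you're done (interrupt cat first), turn tracking back off:
printf '\033[?1000l'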

This doesn't seem to be documented in the xterm manpage, so I'm not sure where I learned it; it may have been ambient X knowledge at some point back in the day, and I just picked it up. It's fortunate that it seems to have been well enough known to be copied by the people writing gnome-terminal, konsole, and so on.

(xterm has a great many peculiar features that are at best half-known these days, or that usually take careful study of the manpage to spot.)

XtermShiftSelection written at 01:13:27

2009-07-13

Some stuff on NFS access restrictions

Roughly speaking, there are two sorts of access restrictions that an NFS server can put on a client: what filesystems the client can access, and what directories within those filesystems the client can access (the latter is necessary when you export a subdirectory of a filesystem instead of the whole filesystem).

(This ignores just firewalling off the client entirely. The NFS server code generally doesn't have any special handling for this, because from its perspective, not allowing someone to talk at all is functionally identical to not allowing them access to any filesystems.)
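
To make the two sorts concrete, here is roughly what they look like in a Linux-style /etc/exports file; the paths and client names are made up, and the exact option syntax varies from NFS server to NFS server.

# per-filesystem restriction: /scratch is a whole filesystem, and only
# these clients may touch it at all
/scratch            client1.example.com(rw) client2.example.com(ro)
# subdirectory restriction: export just one subdirectory of the
# filesystem that holds /home
/home/www/public    web.example.com(ro)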

Per-filesystem access restrictions are generally very solid, because it is easy for the NFS server to tell what filesystem a client is trying to access; the information is generally directly present in the NFS filehandle and cannot be usefully altered by the client. The same is not true of directory restrictions, because most NFS servers do not have any good way of knowing if an inode (and thus an NFS filehandle) falls under a directory hierarchy, so the only way they have of enforcing this limit is by never letting you have a filehandle for something outside your allowed directory hierarchy.

This has two problems: first, we've already seen how well filehandle secrecy works in practice, and second, there have traditionally been any number of creative ways to trick an NFS server into letting you escape from your 'jail' and get a filehandle for somewhere else in the filesystem. Hence the traditional advice that you should always export whole filesystems, or at least not count on subdirectory restrictions for much security.

That the NFS server doesn't really know what directory hierarchy a filehandle falls under is also why you generally can't export two separate bits of the same filesystem to the same client with different options (one read-write, one read-only, for example). If the server allowed you to specify different options, it would have no way of figuring out which set to apply to a given filehandle.
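
Concretely, this is the sort of export configuration that traditionally couldn't be supported (again with made-up paths and a made-up client; both directories live in the same /home filesystem):

# the server has no reliable way of telling which of these lines a
# given /home filehandle should fall under
/home/projects      client1.example.com(rw)
/home/archive       client1.example.com(ro)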

NFSAccessRestrictions written at 23:59:34

2009-07-12

A brief history of NFS server access restrictions

In the beginning, NFS servers had no access restrictions. No, really.

In the early versions of NFS, the kernel NFS code had no access checks; if you had a valid filehandle, the kernel was happy to talk to you, regardless of who you were. What NFS access restrictions existed were all done during the NFS mount process; if you were not authorized to mount the particular thing you were asking for, mountd would not give you a filehandle for it. This was, of course, secure only as long as you couldn't get a filehandle in some other way, and pretty soon it was painfully clear that you could.

(And once you had the filehandle for the top of a filesystem, that was pretty much it, because it wasn't as if that filehandle could easily be changed.)

This sparked a rush to put some sort of NFS access restrictions in the kernel itself. However, Sun had made NFS export permissions very flexible and full of user-level concepts like NIS netgroups; it was clear that you couldn't just push /etc/exports lines into the kernel and be done with it.

At first people tried having mountd add specific permissions (this IP address gets this sort of access to this filesystem) to the kernel, either when it started or when client machines made (successful) mount requests. There were at least two problems with this: first, for a sufficiently big and permissive NFS server, this could be too much information for the kernel to store; and second, there are situations where this sort of static permission adding isn't good enough and valid clients will get improper access denials.

As a result, all modern systems have moved to some sort of 'upcall' mechanism; when the kernel gets an NFS request that it doesn't already have permissions information for, it pokes mountd to find out if the client is allowed the specific access. The kernel's permission information is effectively only a cache (hopefully big enough to avoid upcalls under normal usage). This allows mountd to have whatever crazy permissions schemes it wants without complicating the kernel's life.

Of course, this adds a new NFS mount failure mode. At least some kernels cache negative permissions entries (this IP address is not allowed to access this filesystem) so that they don't upcall to mountd all the time for bad clients. In some situations valid clients can get stuck with such a negative entry, and these entries can be very persistent. Until the negative entry is cleared somehow, the client is not going to have access to the filesystem, although everything will swear up and down that it has all the necessary permissions, mountd will give it a valid filehandle, and so on.

(We had one client that couldn't mount a crucial filesystem from a Solaris 8 NFS server for months. Fortunately it was a test system.)
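
How you clear a stuck negative entry is unfortunately system-specific. As one hedged example, on a Linux NFS server with nfs-utils I believe something like the following flushes the kernel's cached export information and forces fresh upcalls to mountd (details vary by version, and it obviously doesn't help with, say, Solaris 8):

# flush everything from the kernel's export table; clients get
# re-checked via an upcall to mountd on their next request
exportfs -f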

NFSServerSecurity written at 02:25:12

2009-07-09

The high-level version of how mounting NFS filesystems works

For reasons that are going to become apparent soon, I need to explain how NFS mounting works.

NFS servers (in the code and protocol sense) just talk NFS, which is to say that you give them an NFS filehandle and tell them what operation you want to do with it and they give you a reply. One of those operations is to look up a name in a directory and give you its filehandle, which is the basic building block that lets a client traverse a filesystem's directory hierarchy.

Thus, at a conceptual level you mount an NFS filesystem by acquiring the filehandle of its root directory (and then remembering it locally). The NFS protocol has no mechanism for doing this, since all NFS operations start with the filehandle; instead, the job is delegated to an entirely separate protocol, the NFS mount protocol ('mountd'). Handling this protocol on the NFS server is traditionally done by a separate user-level program, the (NFS) mount daemon, and some associated daemons.

(It needs more than one daemon because of extra generality. NFS mounting is based on Sun RPC, and Sun RPC requires a bootstrapping process; first you ask the RPC portmapper on which port you can find whatever handles the mountd protocol, and then you go talk to that port to do the actual work. This avoids having to ask IANA for a bunch of port assignments, and back in 1986 or so no one had even started thinking about firewalls, so the idea that services might wind up on randomly chosen TCP ports did not fill sysadmins with screaming nightmares. But I digress.)
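
You can watch this bootstrapping happen from a client with the standard RPC tools (a sketch; 'nfsserver' is a made-up hostname):

# ask the server's RPC portmapper what services it has registered,
# including the port that mountd is currently listening on
rpcinfo -p nfsserver
# ask that mountd what it's willing to export (showmount finds
# mountd's port through the portmapper behind the scenes)
showmount -e nfsserver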

Traditionally, clients also implement the NFS mount protocol in a separate user-level program (sometimes directly including it in mount, sometimes making it a separate internal program that mount runs for you). That program talks RPC to the NFS mount daemon, basically giving it a path and getting back an NFS filehandle; it then takes that filehandle and passes it to the kernel somehow (along with all of the other information about the mount that the kernel needs), where the kernel NFS client code takes over.
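
On most systems all of this is hidden behind the ordinary mount command. As a sketch (this is the Linux syntax, with made-up names; other systems use mount_nfs or similar helpers):

# mount, or a helper it runs, talks the mount protocol to the server's
# mountd, gets back the filehandle for /export/home, and hands it to
# the kernel along with the rest of the mount information
mount -t nfs nfsserver:/export/home /mnt/home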

While this may seem peculiar, the advantage of this split is that neither the client kernel nor the server kernel needs to have all sorts of complicated code to do things like hostname lookups, RPC portmapping, parsing /etc/exports, and so on, all of which is much easier and more flexible when done in user-level code.

(Having a separate NFS mount protocol also keeps the NFS protocol itself simpler and more regular.)

NFSMounts written at 02:03:54

2009-07-02

Finding out when a command in a pipeline fails

Suppose you are in the situation from the last entry and want to know whether may-fail actually failed, and you don't want to just split the pipeline up and use a temporary file.

As a commentator on the last entry pointed out, this is simple in Bash: you can use the $PIPESTATUS array variable to see the exit status of any command in the pipeline. The same feature is available in zsh, but it uses $pipestatus (lower case), just to be different.
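
For example, in Bash (with may-fail and 'grep ...' standing in for your real commands, as in the last entry):

may-fail | grep ...
# save $PIPESTATUS right away, since the next command overwrites it
status=("${PIPESTATUS[@]}")
if [ "${status[0]}" -ne 0 ]; then
    echo "may-fail failed" 1>&2
fi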

If you want to do this in general Bourne shell, you need some way to communicate the exit status of the command out of the pipeline. You could use the very complicated mechanisms from the old comp.unix.shell FAQ, but if I had to do this I would just use a flag file:

rm -f $FAILFILE
(may-fail || touch $FAILFILE) | grep ...

If $FAILFILE exists after the pipeline has finished, may-fail failed.
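
So the check afterwards can be as simple as something like this (again, may-fail is just the placeholder name):

if [ -f "$FAILFILE" ]; then
    echo "may-fail failed" 1>&2
    # handle the failure however your script needs to; for instance:
    exit 1
fi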

If you need to distinguish between commands 'failing' due to SIGPIPE and other failures, your life is much more complicated. Fortunately I have never had to do that (or unfortunately, since it means that I have no code to share).

Some people would say that splitting a pipeline up and using temporary files is less elegant and thus less desirable than any of these techniques. I disagree; Bourne shell programming is already too complicated, so you should avoid tricky techniques unless they're absolutely necessary. Using a temporary file is almost never going to kill you and it makes your script easier to follow (especially if you add comments).

GettingPipelineStatus written at 00:28:50
