Wandering Thoughts

2018-12-13

Some new-to-me features in POSIX (or Single Unix Specification) Bourne shells

As I mentioned when I found Bourne shell arithmetic to be pretty pleasant, I haven't really paid attention to what things are now standard POSIX Bourne shell features. In fact, it's more than that; I don't think I've really appreciated that POSIX and then the Single Unix Specification actually added very much to the venerable Bourne shell. I knew that shell functions were standard, and then there was POSIX command substitution, but then I sort of stopped. In light of discovering that shell arithmetic is now POSIX standard, this view is clearly out of date, so I've decided to actually skim through the POSIX/SUS shell specification and see what new things I want to remember to use in the future. In the process I found an addition that surprises me.

First (and perhaps obviously), the various character classes such as [:alnum:] are officially supported in shell wildcard expansion and matching. I'm not a fan of writing, say, '[[:upper:]]' instead of '[A-Z]', but the latter has some dangerous traps in some shells, including shells that are commonly found as /bin/sh in some environments.
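As a quick sketch of the difference (the function name here is my own invention):

```shell
# A character class in a case pattern; unlike '[A-Z]', this doesn't
# depend on the locale's collation order, which in some shells can
# make '[A-Z]' match lower-case letters too. 'starts_upper' is a
# made-up name for illustration.
starts_upper() {
  case "$1" in
    [[:upper:]]*) return 0 ;;
    *) return 1 ;;
  esac
}
```

In a shell where '[A-Z]' is collation-based, this version keeps behaving the same way regardless of the LC_COLLATE setting.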

The big new feature that I should probably plan to make use of is the various prefix and suffix pattern substitutions, such as '${var%%word}'. To a fair extent these let you do in shell things that you previously had to turn to programs like basename and dirname. For instance, in a recent script I wanted the bare program name without any full path, so I used:

prog="${0##*/}"

This feels one part clever and half a part perhaps too clever, but I hope it's a recognized idiom. Another use of this is to perform inline pattern matching in an if statement, for example to check if a parameter is a decimal number:

if [ -z "${OPTARG##*[!0-9]*}" ]; then
  echo "$prog: argument not a number" 1>&2
  exit 1
fi

I previously would have turned to case statements for this, which is more awkward. Again, hopefully this is not too clever.
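The case statement version I would previously have written looks roughly like this (a sketch, recast here as a function; 'is_number' is a name I've made up):

```shell
# Succeeds only for non-empty, all-digit strings; the first pattern
# catches the empty string and anything containing a non-digit.
is_number() {
  case "$1" in
    ''|*[!0-9]*) return 1 ;;
    *) return 0 ;;
  esac
}
```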

(I learned this trick from Stackoverflow answers, perhaps this one or this one.)

The Single Unix Specification actually has some useful and interesting examples for the prefix and suffix pattern substitutions, along with some of the other substitutions.
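To give the flavour, here are rough equivalents of basename and dirname done with suffix and prefix removal; this is a sketch that ignores edge cases the real utilities handle, such as trailing slashes or paths with no '/' at all:

```shell
path=/usr/local/bin/prog
echo "${path##*/}"   # strip the longest prefix ending in '/': prog
echo "${path%/*}"    # strip the shortest suffix starting with '/': /usr/local/bin
```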

Next, as pointed out in a comment back in 2011 here, POSIX arithmetic supports hex numbers with a leading 0x, which means that it can be used as a quick hex to decimal converter in addition to doing hex math. I don't know if there's any way to do decimal to hex output with builtins alone; I suspect that the best way is with printf. The arithmetic operators available are actually pretty extensive, including 'a ? b : c' for straightforward conditionals.
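A quick illustration of all three of these points:

```shell
echo "$((0xff))"                  # hex to decimal: prints 255
printf '%x\n' 255                 # decimal to hex via printf: prints ff
echo "$(( 0xff > 200 ? 1 : 0 ))"  # the conditional operator: prints 1
```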

Unfortunately, while POSIX sh has string length (with '${#var}'), it doesn't seem to have either a way to count the number of $IFS-separated words in a variable or a way to trim an arbitrary number of leading or trailing spaces from one. You can get both through brute force with simple shell functions, but I'm probably better off avoiding situations where I need either.
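The brute force versions look roughly like this ('nwords' and 'trim' are names I've made up, and trim only deals with spaces, not tabs):

```shell
# Count the $IFS-separated words in a string by reusing the positional
# parameters; 'set -f' stops any of the words from glob-expanding.
nwords() {
  set -f
  set -- $1
  set +f
  echo $#
}

# Strip leading and trailing spaces one at a time with prefix and
# suffix removal.
trim() {
  t=$1
  while [ "${t# }" != "$t" ]; do t=${t# }; done
  while [ "${t% }" != "$t" ]; do t=${t% }; done
  printf '%s\n' "$t"
}
```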

The one feature in POSIX sh that genuinely surprises me is tilde username expansion. I knew this was popular for interactive use in shells but I would have expected POSIX to not care and to primarily focus on shell scripts, where I at least have the impression it's not very common. But there it is, and the description doesn't restrict it to interactive sessions either; you can use '~<someuser>' in your shell scripts if you want to. I probably won't, though, especially since we have our own local solution for this.
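For example, this works in a non-interactive script (the tilde has to be unquoted to expand, and I'm assuming here that a 'root' account exists):

```shell
# Prints root's home directory, typically /root on Linux.
printf '%s\n' ~root
```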

(The version of the Single Unix Specification that I'm looking at here is the 2017 version, which will no doubt go out of date just as the 2008, 2013, and 2016 versions did.)

PosixShellNewFeatures written at 00:12:16

2018-11-21

What I really miss when I don't have X across the network

For reasons beyond the scope of this entry, I spent a couple of days last week working from home. One of the big differences when I do this is that I don't have remote X; instead I wind up doing everything over SSH. At a nominal level the experience is much the same, partly because I've deliberately arranged it that way; using sshterm to start an SSH session to a host is very similar to using rxterm to start an xterm on it, for example. But at a deeper level there are two things I wound up really missing.

The obvious thing I missed was exmh, which is the core of how I efficiently deal with email at work. Exmh is text based so it works well within the limitations of modern X network transparency; at work I run it on one of our login servers, with direct access to my email, and it displays on my desktop. In theory the modern replacement for exmh and this style of working would be a local IMAP mail client, if I could find a Linux one that I liked.

(I mean, apart from the whole thing where I'm extremely attached to (N)MH and don't want to move to IMAP any sooner than I have to. An alternate approach would be to find and set up some good text-mode MH visual client, probably GNU Emacs' MH-E, which I used to use years ago.)

But the surprising subtle thing that I wound up missing was the ability to open up a new xterm on the remote machine from within my current session. While starting an xterm this way obviously skips logging in, the great advantage of doing this is that the new xterm completely inherits my current context, both my current directory and my current privileges (if I'm su'd to root, for example, which is when this is especially handy). It is in a way the Unix shell session equivalent of a browser's 'Open in New Tab/Window', and it's useful for much the same reasons; it gives you an additional view on what you're currently doing or about to do.

There is no good replacement for this that I can see outside of remote X or something very similar to it. You can't get it with job control and you can't really get it with screen or tmux, and a remote windowing protocol that deals with entire desktops instead of individual windows would create a completely different environment in general. This makes me sad that in the brave future world of Wayland, there still doesn't seem to be much prospect of remote windows.

(This entry is sort of prompted by reading The X Network Transparency Myth.)

PS: If you want, you can consider this the flipside of my entry X's network transparency has wound up mostly being a failure. X's network transparency is not anywhere near complete, but within the domain of mostly text-focused programs running over 1G LANs it can still deliver very nice benefits. I take advantage of them every day that I'm at work, and miss them when I'm not.

RemoteXWhatIMiss written at 00:14:25

2018-11-10

Character by character TTY input in Unix, then and now

In Unix, a read() from a terminal normally returns full lines, with the kernel taking care of things like people erasing characters and words (and typing control-D); if you run 'cat' by itself, for example, you get this line at a time input mode. However, Unix has an additional input mode, raw mode, where you read() every character as it's typed (or at least as it becomes available to the kernel). Programs that support readline-style line editing, such as shells like Bash and zsh, operate in this mode, as do editors like vi (and emacs if it's in non-windowed mode).

(Not all things that you might think operate in raw mode actually do; for example, passwd and sudo don't use raw mode when you enter your password, they just turn off echoing characters back to you.)

Unix has pretty much always had these two terminal input modes (kernel support for both goes back to at least Research Unix V4, which seems to be the oldest one that we have good kernel source for through tuhs.org). However, over time the impact on the system of using raw mode has changed significantly, and not just because CPUs have gotten faster. In practice, modern cooked (line at a time) terminal input is much closer to raw mode than it was in the days of V7, because over time we've moved from an environment where input came from real terminals over serial lines to one where input takes much more complicated and expensive paths into Unix.

In the early days of Unix, what you had was real, physical terminals (sometimes hardcopy ones, such as in famous photos of Bell Labs people working on Unix in machine rooms, and sometimes 'glass ttys' with CRT displays). These terminals were connected to Unix machines by serial lines. In cooked, line at a time mode, what happened when you hit a character on the terminal was that the character was sent over the serial line, the serial port hardware on the Unix machine read the character and raised an interrupt, and the low level Unix interrupt handler read the character from the hardware, perhaps echoed it back out, and immediately handled a few special characters like ^C and CR (which made it wake up the rest of the kernel) and perhaps the basic line editing characters. When you finally typed CR, the interrupt handler would wake up the kernel side of your process, which was waiting in the tty read() handler. This higher level would eventually get scheduled, process the input buffer to assemble the actual line, copy it to your user-space memory, and return from the read() to user space, at which point your program would actually wake up to handle the new line it got.

(Versions of Research Unix through V7 actually didn't really handle your erase or line-kill characters at interrupt level. Instead they pushed everything into a 'raw buffer', and only once a CR was typed was this buffer canonicalized by applying the effects of those characters to determine the final line that was returned to user level.)

The important thing here is that in line at a time tty input in V7, the only code that had to run for each character was the low level kernel interrupt handler, and it deliberately did very little work. However, if you turned on raw mode all of this changed and suddenly you had to run a lot more code. In raw mode, the interrupt handler had to wake the higher level kernel at each character, and the higher level kernel had to return to user level, and your user level code had to run. On the comparatively small and slow machines that early Unixes ran on, going all the way to user-level code for every character would have been and probably was a visible performance hit, especially if a bunch of people were doing it at the same time.

Things started changing in BSD Unix with the introduction of pseudo-ttys (ptys). BSD Unix needed ptys in order to support network logins over Telnet and rlogin, but network logins and ptys fundamentally change what the character input path looks like in practice. Programs reading from ptys still ran basically the same sort of code in the kernel as before, with a distinction between low level character processing and high level line processing, but now getting characters to the pty wasn't just a matter of a hardware interrupt. For a telnet or rlogin login, the path looked something like this:

  • the network card gets a packet and raises an interrupt.
  • the kernel interrupt handler reads the packet and passes it to the kernel's TCP state machine, which may not run entirely at interrupt level and is in any case a bunch of code.
  • the TCP state machine eventually hands the packet data to the user-level telnet or rlogin daemon, which must wake up to handle it.
  • the woken-up telnetd or rlogind injects the new character into the master side of the pty with a write() system call, which percolates down through various levels of kernel code.

In other words, with logins over the network, a bunch of code, including user-level code, had to run for every character even for line at a time input.

In this new world, having the shell or program that's reading input from the pty operate in line at a time mode remained somewhat more efficient than raw mode but it wasn't anywhere near the difference in the amount of code that it was (and is) for terminals connected over serial lines. You weren't moving from no user level wakeups to one; you were moving from one to two, and the additional wakeup was on a relatively simple code path (compared to TCP packet and state handling).

(It's a good thing Vaxes were more powerful than PDP-11s; they needed to be.)

Things in Unix have only gotten worse for the character input path since then. Modern input over the network is through SSH, which requires user-level decryption and de-multiplexing before you end up with characters that can be written to the master side of the pseudo-tty; the network input may also involve kernel level firewall checks or even another level of decryption from a VPN (either at kernel level or at user level, depending on the VPN technology). Windowing systems such as X or Wayland add at least two processes to the stack, as generally the window server has to read and process the keystroke and then pass it to the terminal window process (as a generalized event). Sometimes there are more processes, and keyboard event handling is complicated in general (which means that there's a lot of code that has to run).

I won't say that character at a time input has no extra overhead in Unix today, because that's not quite true. What is true is that the extra overhead it adds is now only a small percentage of the total cost (in time and CPU instructions) of getting a typed character from the keyboard to the program. And since readline-style line editing and other features that require character at a time input add real value, they've become more and more common as the relative expense of providing them has fallen, to the point where it's now a bit unusual to find a program that doesn't have readline editing.

The mirror image of this modern state is that back in the old days, avoiding raw mode as much as possible mattered a lot (to the point where it seems that almost nothing in V7 actually uses its raw mode). This persisted even into the days of 4.x BSD on Vaxes, if you wanted to support a lot of people connected to them (especially over serial terminals, which people used for a surprisingly long time). This very likely had a real influence on what sort of programs people developed for early Unix, especially Research Unix on PDP-11s.

PS: In V7, the only uses of RAW mode I could spot were in some UUCP and modem related programs, like the V7 version of cu.

PPS: Even when directly connected serial terminals started going out of style for Unix systems, with sysadmins and other local users switching to workstations, people often still cared about dial-in serial connections over modems. And generally people liked to put all of the dial-in users on one machine, rather than try to somehow distribute them over a bunch.

RawTtyInputThenAndNow written at 19:20:34

2018-10-20

The original Unix ed(1) didn't load files being edited into memory

These days almost all editors work by loading the entire file (or files) that you're editing into memory, either into a flat buffer or two or into some more sophisticated data structure, and then working on them there. This approach to editing files is simple, straightforward, and fast, but it has an obvious limitation; the file you want to edit has to fit into memory. These days this is generally not much of an issue.

V7 was routinely used on machines that are relatively small by modern standards, and those machines were shared by a fairly large number of people, so system memory was a limited resource. Earlier versions of Research Unix had to run on even smaller machines, too. On one of those machines, loading the entire file you wanted to edit into memory was somewhere between profligate and impossible, depending on the size of the file and the machine you were working on. As a result, the V7 ed does not edit files in memory.

The V7 ed manpage says this explicitly, although it's tempting to regard this as hand waving. Here's the quote:

Ed operates on a copy of any file it is editing; [...]. The copy of the text being edited resides in a temporary file called the buffer.

The manual page is being completely literal. If you started up V7 in a PDP-11 emulator and edited a file with ed, you would find a magic file in /tmp (called /tmp/e<something>, the name being created by the V7 mktemp()). That file is the buffer file, and you will find much or all of the file you're editing in it (in some internal format that seems to have null bytes sprinkled around through it).

(V6 is substantially the same, so you can explore this interactively here. I was surprised to discover that V6 doesn't seem to have sed.)

I've poked through the V7 ed.c to see if I could figure out what ed is doing here, but I admit the complete lack of comments has mostly defeated me. What I think it's doing is only allocating and storing some kind of index to where every line is located, then moving line text in and out of a few small 512-byte buffers as you work on them. As you add text or move things around, I believe that ed writes new copies of the line(s) you've touched to new locations in the buffer file, rather than try to overwrite the old versions in place. The buffer file has a limit of 256 512-byte blocks, so if you do enough editing of a large enough file I believe you can run into problems there.

(This agrees with the manpage's section on size limitations, where it says that ed has a 128 KB limit on the temporary file and the limit on the number of lines you can have is the amount of core, with each line taking up one PDP-11 'word' (in the code this is an int).)

Exploring the code also led me to discover how ed handled errors internally, which is by using longjmp() to jump back to main() and re-enter the main command loop from there. This is really sort of what I should have expected from a V7 program; it's straight, minimal, and it works. Perhaps it's not how we'd do that today, but V7 was a different and smaller place.

PS: If you're reading the V7 ed source and want to see where this is, I think it runs through getline(), putline(), getblock(), and blkio(). I believe that the tline variable is the offset that the next line will be written to by putline(), and it gets stored in the dynamically allocated line buffer array that is pointed to by 'zero'. The more I look at it, the more the whole thing seems pretty clever in an odd way.

(My looking into this was sparked by Leah Neukirchen's comment on my entry on why you should be willing to believe that ed is a good editor. Note that even if you don't hold files in memory, editing multiple files at once requires more memory. In ed's scheme, you would need to have multiple line-mapping arrays, one for each file, and probably you'd want multiple buffer files and some structures to keep track of them. You might also be more inclined to do more editing operations in a single session and so be more likely to run into the size limit of a buffer file, which I assume exists for a good reason.)

EdFileNotInMemory written at 01:17:05

2018-10-18

Why you should be willing to believe that ed(1) is a good editor

Among the reactions to my entry on how ed(1) is no longer a good editor today was people wondering out loud if ed was ever a good editor. My answer is that yes, ed is and was a good editor in the right situations, and I intend to write an entry about that. But before I write about why ed is a good editor, I need to write about why you should be willing to believe that it is. To put it simply, why you should believe that ed is a good editor has nothing to do with anything about its technical merits and everything to do with its history.

Ed was created and nurtured by the same core Bell Labs people who created Unix, people like Dennis Ritchie and Ken Thompson. Ed wasn't their first editor; instead, it was the end product of a whole series of iterations of the same fundamental idea, created in the computing environment of the late 1960s and early to mid 1970s. The Bell Labs Unix people behind ed were smart, knew what they were doing, had done this many times before, had good taste, were picky about their tools, used ed a great deal themselves, and were not afraid to completely redo Unix programs that they felt were not up to what they should be (the Unix shell was completely redesigned from the ground up between V6 and V7, for example). And what these people produced and used was ed, not anything else, even though it's clear that they could have had something else if they'd wanted it and they certainly knew that other options were possible. Ed is clearly not the product of limited knowledge, imagination, skill, taste, or indifference to how good the program was.

It's certainly possible to believe that the Bell Labs Research Unix people had no taste in general, if you dislike Unix as a whole; in that case, ed is one more brick in the wall. But if you like Unix and think that V7 Unix is well designed and full of good work, it seems a bit of a stretch to believe that all of the Bell Labs people were so uniquely blinded that they made a great Unix but a bad editor, one that they didn't recognize as such even though they used it to write the entire system.

Nor do I think that resource constraints are a convincing explanation. While the very limited hardware of the very earliest Unix machines might have forced early versions of ed to be more limited than prior editors like QED, by the time of V7, Bell Labs was running Unix on reasonably good hardware for the time.

The conclusion is inescapable. The people at Bell Labs who created Unix found ed to be a good editor. Since they got so much else right and saw so many things so clearly, perhaps we should consider that ed itself has merits that we don't see today, or don't see as acutely as they did back then.

EdBelieveGoodEditor written at 00:42:26

2018-09-22

The ed(1) command in the Single Unix Specification (and the SVID)

When I wrote my entry on some differences between various versions of ed(1), I forgot to check the Single Unix Specification to see if it has anything to say about ed. It turns out that it does, and ed is part of the 'Shell & Utilities' section.

SUS ed is mostly the same as FreeBSD ed, which is kind of what I think of as 'the baseline modern ed'. SUS ed requires that the s command support a bunch of flags for printing the results (only GNU ed documents supporting them all), but it doesn't require a z command (to print paginated output). Interestingly, SUS requires that ed support a prompt and extra help, and that it print warnings if you try to do something that would lose a modified buffer. SUS ed is only required to support SUS Basic Regular Expressions, while all modern eds go at least somewhat beyond this; FreeBSD ed supports '\<' and '\>', for example.

One area of ed's history that I don't know very much about is how it evolved in System III and System V. SCO's website (of all people) has a PDF version of the System V Interface Definition online here, and for as long as it lasts you want to look in volume 2 for the manpage for ed. The SVID ed is mostly the same as FreeBSD ed, and in particular it has '\<' and '\>' in its regular expressions, unlike SUS ed. Its s command doesn't support the l, n, or p flags (required by SUS but not in FreeBSD). It does have prompting and help. I think that ed is the only editor required by the SVID, which may explain why the System V people enhanced it over the V7 and BSD eds; in BSD, the development of vi (and before it ex) probably made ed a relatively secondary editor.

Conveniently, the SUS ed documentation includes a rationale section that includes a discussion of historical differences between BSD and SVID behavior, commands supported, and so on. Of course, 'BSD' in the SUS basically means 'UCB BSD through 4.4 or so', as I don't think the SUS looks at what modern FreeBSD and OpenBSD are doing any more than it looks at Linux or GNU programs.

(My previous entry and this one only look at obvious differences like commands supported. I'm sure that there are plenty of subtle behavior differences between various versions of ed, both old and modern ones; the SUS ed rationale points out some of them, and GNU ed's --traditional switch suggests other points of difference for it.)

EdInSingleUnixSpecAndSVID written at 22:33:42

Some differences between various versions of ed(1)

Today, the versions of ed(1) that people are most likely to encounter and experiment with are GNU ed on Linux, and FreeBSD ed and OpenBSD ed on their respective systems. GNU ed is a new implementation written from scratch, while I believe that the *BSD ed is a descendant from the original V7 ed by way of 4.x BSD ed, such as 4.4BSD ed. However, a great deal of what's said about ed(1), especially in stuff from the 1980s, is basically about the original V7 ed. This sort of matters because these various versions have added and changed some things, so the experience with the original V7 ed is not quite the same as what you'd have today.

Every modern version of ed offers a '-p' flag that provides a command prompt (and a P command to toggle the prompt on and off, with a default prompt of '*'), but this is not in either V7 ed or 4.4BSD ed. This idea appeared in some versions of ed very early; for example, in the early 1980s people at the University of Toronto modified the early versions of ed from V7 and 4.x BSD into 'UofT ed', and one of its standard features was that it basically always had a command prompt.

(As far as I know, UofT ed was strongly focused on interactive use, especially by undergraduate students.)

Line addressing is almost the same in all versions. All modern versions support a bare ';' to mean 'from here to the end of the file', and the *BSD versions support a bare '%' as an alias for a bare ',' (ie, 'the entire file'). Modern versions of ed support more regular expression features (such as character classes), but unsurprisingly haven't changed the basics. GNU ed has the most features here, but OpenBSD and FreeBSD turn out to differ slightly from each other in their choices for some syntax.

GNU ed has the most additions and changes in commands. It's the only ed that has a cut buffer for transferring content between files (the x and y commands), # as a comment command, and it has various options to print out lines that a s command has acted on.

Compared to modern versions of ed, V7 ed and 4.4BSD ed don't have a version of the e, r, or w commands that run a program instead of using a file (modern 'e !<command>' et al), G or V commands for interactively editing lines that match or don't match a regular expression, the h or H commands to actually explain errors, or the n command to print lines with line numbers. V7 ed doesn't have either z (to print the buffer in pages) or wq, but 4.4BSD ed has both. The V7 s command is quite minimal, with no support for printing out the lines that s operated on; 4.4 BSD may have some support for this, although it's not clearly documented. Both V7 and 4.4BSD only let you substitute the first match or all matches in the line; all modern eds let you substitute the Nth match if you want. According to its manpage, V7 ed would let you do a q or e with a modified buffer without any warning; everything else warns you and requires you to do it again. V7 ed also has a limited u command that only applies to the last line edited; from 4.4BSD onward u was extended to undo all effects of multi-line commands like d and g.

Update: V7 ed has the E command, which implicitly mentions that ed does warn you on e with a modified buffer; I suspect it also warns you on a q. It's just that V7 doesn't feel that this needs to be mentioned in the manpage. Everyone else documents it explicitly.

In general, modern versions of ed are more friendly and powerful than the original V7 ed and even than 4.4BSD ed, but they probably aren't substantially so. The experience of using ed today is thus pretty close to the original V7 experience, especially if you don't turn on a prompt or extended help information about errors.

I was actually a bit surprised by how relatively little ed has changed since V7, and also by how much of that change came after 4.4BSD. Apparently a number of people in both the *BSDs and GNU really cared about making ed actively friendlier for interactive editing, since prompts and more help only appear after 4.4BSD.

(I suspect that modern versions of ed can edit larger files and ones with longer lines than the original V7 ed, but you probably don't want to do that anyway. V7 ed appears to have had some support for editing files larger than could fit easily in memory, judging by some cautions in its manpage.)

Sidebar: UofT ed

Now that I've carefully read its manpage, UofT ed is an interesting departure from all of this. It has a number of additions to its regular expressions; '\!' matches control characters apart from tabs, '\_' matches a non-empty sequence of tabs and spaces, '\{' and '\}' match the start and end of 'identifiers', '\?' is a shortest-match version of '*', and it has '|' for regular expression alternation, which not even GNU ed has opted to add. In line addressing, UofT ed added '&', which means 'a page of lines'. For commands, it added b which is essentially the modern z, an h command to print help on the last error message or on various topics, an o command to set some options including whether regular expressions are case-independent, and a special z 'zap' command that allows 'interactive' modification of one line. It also supports ! in e, r, and w and allows specifying which match to use in s expressions.

The UofT ed zap command is especially interesting because it's an attempt to provide a visual way of modifying a line without departing from ed's fundamental approach to editing; in other words, it's an attempt to deal with ed's fundamental limitation. I'm going to quote the start of the manpage's documentation to give you the flavour:

The zap command allows one line at a time to be modified according to graphical requests. The line to be modified is typed out, and then the modify request is read from the terminal (even if the z command is in a global command); this is done repeatedly until the modify request is empty. Generally each character in the request specifies how to modify the character immediately above it, in the original line, as described in the following table.

Zap allows you to delete characters, replace them, overwrite or insert characters, or replace the entire rest of the line with something else, and you could iterate this until you were happy with the result. Because I can, here is an authentic example of z in action:

; ed
[...]
*p
abc def ghi
*z
abc def ghi
    ####
abc ghi
^def 
def abc ghi
        fred
def abc fred
    $afunc(a, b)
def afunc(a, b)

*

This is a kind of silly example, but you can see how we can successively modify the line while still being able to see what we're doing. This is an especially clever approach because ed (the program) doesn't have to switch to character at a time input to implement it; ed is still reading a line at a time, but it's created a way where you can use this visually.

(Interested parties are encouraged to implement this for GNU ed, and if you are one I can give you a complete description of the various characters for z. You'd have to pick another command letter, though, since modern eds use z for paginated output of the buffer.)

EdVersionsDifferences written at 20:51:30

2018-09-02

NFS directory reading and directory file type information

NFS has always had an operation to read directories (unsurprisingly called READDIR). In NFS v2, this operation simply returned a list of names (and 'fileids', ie inode numbers). One of the things that NFS v3 introduced was an extended version of this, called READDIRPLUS that returns some additional information along with the directory listing. This new operation was motivated by the observation that NFS clients often immediately followed a READDIR operation by a bunch of additional NFS calls to get additional information on many or all of the names in the directory. In light of the fact that file type information is available in Unix directories at least some of the time (on many Unixes), I found myself wondering if this file type information was sufficient for an NFS server to implement READDIRPLUS, so that such a Unix could satisfy READDIRPLUS requests purely from reading the directory itself.

As far as I can see, unfortunately the answer is that it isn't. Directory file type information only gives you the file type of each name, while NFS v3's READDIRPLUS operation is specified in RFC 1813 as returning full information on each name, what the standard calls a fattr3 (defined on page 22). This is basically the same as what you get from stat(), and this implies that the NFS server has to read each inode to pull up this information. That's kind of a pity, at least for NFS v3, and one of the consequences is that you can't get the same type of high-efficiency file type scanning over NFS v3 as you can locally.

We have historically used NFS v3 only and so I default to mostly looking at it. However, there's also NFS v4 (specified in RFC 7530), and once I looked at it, it turns out to be different in an important way. NFS v4 has only a READDIR operation, but it has been defined to allow the NFS client to specify what attributes of each name it wants to get back. An NFS v4 client can thus opt to ask for only fileids and file type information, which permits an NFS v4 server to satisfy the READDIR request purely from reading the directory, without having to stat() each file in the directory. Even in the possible case where file type information isn't known for all files, the NFS v4 server would only have to stat() some files, not all of them.

With that said, I don't know if NFS v4 clients actually make such limited READDIR requests or whether they ask NFS v4 servers for enough extra information that the server has to stat() everything anyway. Sadly, one thing clients could sensibly want to know to save time is the NFS filehandle of each name, and the filehandle generally requires information that needs a stat().

(Learning this about NFS v4 may make us more interested in trying to use it, assuming that we can make everything work in NFS v4 with traditional 'sec=sys' Unix style 'trust the client's claims about UIDs and GIDs' security.)

NFSReaddirAndDType written at 01:38:19

2018-08-25

The history of file type information being available in Unix directories

The two things that Unix directory entries absolutely have to have are the name of the directory entry and its 'inode', by which we generically mean some stable kernel identifier for the file that will persist if it gets renamed, linked to other directories, and so on. Unsurprisingly, directory entries have had these since the days when you read the raw bytes of directories with read(), and for a long time that was all they had; if you wanted more than the name and the inode number, you had to stat() the file, not just read the directory. Then, well, I'll quote myself from an old entry on a find optimization:

[...], Unix filesystem developers realized that it was very common for programs reading directories to need to know a bit more about directory entries than just their names, especially their file types (find is the obvious case, but also consider things like 'ls -F'). Given that the type of an active inode never changes, it's possible to embed this information straight in the directory entry and then return this to user level, and that's what developers did; on some systems, readdir(3) will now return directory entries with an additional d_type field that has the directory entry's type.

On Twitter, I recently grumbled about Illumos not having this d_type field. The ensuing conversation wound up with me curious about exactly where d_type came from and how far back it went. The answer turns out to be a bit surprising due to there being two sides of d_type.

On the kernel side, d_type appears to have shown up in 4.4 BSD. The 4.4 BSD /usr/src/sys/dirent.h has a struct dirent that has a d_type field, but the field isn't documented in either the comments in the file or in the getdirentries(2) manpage; both of those admit only to the traditional BSD dirent fields. This 4.4 BSD d_type was carried through to things that inherited from 4.4 BSD (Lite), specifically FreeBSD, but it continued to be undocumented for at least a while.

(In FreeBSD, the most convenient history I can find is here, and the d_type field is present in sys/dirent.h as far back as FreeBSD 2.0, which seems to be as far as the repo goes for releases.)

Documentation for d_type appeared in the getdirentries(2) manpage in FreeBSD 2.2.0, where the manpage itself claims to have been updated on May 3rd 1995 (cf). In FreeBSD, this appears to have been part of merging 4.4 BSD 'Lite2', which seems to have been done in 1997. I stumbled over a repo of UCB BSD commit history, and in it the documentation appears in this May 3rd 1995 change, which at least has the same date. It appears that FreeBSD 2.2.0 was released some time in 1997, which is when this would have appeared in an official release.

In Linux, it seems that a dirent structure with a d_type member appeared only just before 2.4.0, which was released at the start of 2001. Linux took this long because the d_type field only appeared in the 64-bit 'large file support' version of the dirent structure, and so was only returned by the new 64-bit getdents64() system call. This would have been a few years after FreeBSD officially documented d_type, and probably many years after it was actually available if you peeked at the structure definition.

(See here for an overview of where to get ancient Linux kernel history from.)

As far as I can tell, d_type is present on Linux, FreeBSD, OpenBSD, NetBSD, Dragonfly BSD, and Darwin (aka MacOS or OS X). It's not present on Solaris and thus Illumos. As far as other commercial Unixes go, you're on your own; all the links to manpages for things like AIX from my old entry on the remaining Unixes appear to have rotted away.

Sidebar: The filesystem also matters on modern Unixes

Even if your Unix supports d_type in directory entries, it doesn't mean that it's supported by the filesystem of any specific directory. As far as I know, every Unix with d_type support has support for it in their normal local filesystems, but it's not guaranteed to be in all filesystems, especially non-Unix ones like FAT32. Your code should always be prepared to deal with a file type of DT_UNKNOWN.

(Filesystems can implement support for file type information in directory entries in a number of different ways. The actual on disk format of directory entries is filesystem specific.)

It's also possible to have things the other way around, where you have a filesystem with support for file type information in directories that's on a Unix that doesn't support it. There are a number of plausible reasons for this to happen, but they're either obvious or beyond the scope of this entry.

DirectoryDTypeHistory written at 00:31:39

2018-08-22

Why ed(1) is not a good editor today

I'll start with my tweet:

Heretical Unix opinion time: ed(1) may be the 'standard Unix editor', but it is not a particularly good editor outside of a limited environment that almost never applies today.

There is a certain portion of Unixdom that really likes ed(1), the 'standard Unix editor'. Having actually used ed for a not insignificant amount of time (although it was the friendlier 'UofT ed' variant), I have some reactions to what I feel is sometimes overzealous praise of it. One of these is what I tweeted.

The fundamental limitation of ed is that it is what I call an indirect manipulation interface, in contrast to the explicit manipulation interfaces of screen editors like vi and graphical editors like sam (which are generally lumped together as 'visual' editors, so called because they actually show you the text you're editing). When you edit text in ed, you have some problems that you don't have in visual editors; you have to maintain in your head the context of what the text looks like (and where you are in it), you have to figure out how to address portions of that text in order to modify them, and finally you have to think about how your edit commands will change the context. Copious use of ed's p command can help with the first problem, but nothing really deals with the other two. In order to use ed, you basically have to simulate parts of ed in your head.

Ed is a great editor in situations where the editor explicitly presenting this context is a very expensive or outright impossible operation. Ed works great on real teletypes, for example, or over extremely slow links where you want to send and receive as little data as possible (and on real teletypes you have some amount of context in the form of an actual printout that you can look back at). Back in the old days of Unix, this described a fairly large number of situations; you had actual teletypes, you had slow dialup links (and later slow, high latency network links), and you had slow and heavily overloaded systems.

However, that's no longer the situation today (at least almost all of the time). Modern systems and links can easily support visual editors that continually show you the context of the text and generally let you more or less directly manipulate it (whether that is through cursoring around it or using a mouse). Such editors are easier and faster to use, and they leave you with more brainpower free to think about things like the program you're writing (which is the important thing).

If you can use a visual editor, ed is not a particularly good editor to use instead; you will probably spend a lot of effort (and some amount of time) on doing by hand something that the visual editor will do for you. If you are very practiced at ed, maybe this partly goes away, but I maintain that you are still working harder than you need to be.

The people who say that ed is a quite powerful editor are correct; ed is quite capable (although sadly limited by only editing a single file). It's just that it's also a pain to use.

(They're also correct that ed is the foundation of many other things in Unix, including sed and vi. But that doesn't mean that the best way to learn or understand those things is to learn and use ed.)

This doesn't make ed a useless, vestigial thing on modern Unix, though. There are uses for ed in non-interactive editing, for example. But on modern Unix, ed is a specialized tool, much like dc. It's worth knowing that ed is there and roughly what it can do, but it's probably not worth learning how to use it before you need it. And you're unlikely to ever be in a situation where it's the best choice for interactive editing (and if you are, something has generally gone wrong).

(But if you enjoy exploring the obscure corners of Unix, sure, go for it. Learn dc too, because it's interesting in its own way and, like ed, it's one of those classical old Unix programs.)

EdNoLongerGoodEditor written at 01:17:53
