Why non-linear directories have not been popular in filesystems
October 12, 2010
You might wonder why non-linear directories are so complicated that they haven't been used for most of Unix's history.
Computer science has a lot of data structures that are much better for both lookup and updates than unordered linear arrays. However, pretty much all of them are designed with the assumption that they're living in RAM, and even today RAM has very different characteristics from disks. Specifically, in RAM both fine-grained lookups and fine-grained shuffling of entries are relatively inexpensive. Neither is true on disk, so on-disk data structures need to be designed to minimize the amount of IO, especially the amount of random IO. You can do this, but for much of Unix's life it meant that the job was not as simple as implementing a simple and well-known data structure.
(One wrinkle here is that these data structures should keep working well even in a deliberately hostile environment, one where an attacker is trying to create a worst-case set of directory contents. This is a situation generally not faced by in-RAM algorithms.)
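To make the IO arithmetic concrete, here is a small Python sketch (not modeled on any real filesystem; the block size, entry size, and fanout are illustrative assumptions) that counts how many disk blocks must be read, in the worst case, to look up one name in an unordered linear directory versus an idealized B-tree:

```python
import math

BLOCK_SIZE = 4096   # bytes per disk block (assumption)
ENTRY_SIZE = 32     # average bytes per directory entry (assumption)
FANOUT = 100        # names that fit in one B-tree node (assumption)

def linear_lookup_blocks(n_entries):
    """Worst case: scan every block of an unordered linear directory."""
    return max(1, math.ceil(n_entries * ENTRY_SIZE / BLOCK_SIZE))

def btree_lookup_blocks(n_entries):
    """Read one node per tree level, from the root down to a leaf."""
    levels, capacity = 1, FANOUT
    while capacity < n_entries:
        levels += 1
        capacity *= FANOUT
    return levels

for n in (100, 10_000, 1_000_000):
    print(n, linear_lookup_blocks(n), btree_lookup_blocks(n))
```

At 100 entries both structures cost a single block read, which is the point of the argument later in this entry; the tree only pulls ahead once directories get large.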
Next come the complications. Pretty much any efficient disk-based fast lookup data structure is going to have rebalancing requirements, where adding or removing an entry may require shuffling other entries around. The first complication is that filesystems have to worry about untimely interruptions when they are partway through such rebalancing operations. In fact, they have to stay broadly consistent if such interruptions happen; after all, users will be very unhappy if files vanish from a directory just because the system crashed while another file was being added to it.
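As a toy illustration of why interrupted rebalancing is dangerous, here is a purely hypothetical Python simulation (not any real filesystem's layout) of a node split that must update three on-disk blocks. If the writes are done in a naive order and power is lost partway through, the parent ends up pointing at child blocks that were never written, and existing entries become unreachable:

```python
def split_node(disk, crash_after=None):
    """Split full node 'A' into 'A1'/'A2' and repoint the parent.
    Blocks are written one at a time; crash_after simulates losing
    power after that many block writes have completed."""
    entries = disk["A"]
    mid = len(entries) // 2
    writes = [
        ("parent", ["A1", "A2"]),  # naive ordering: parent updated first
        ("A1", entries[:mid]),
        ("A2", entries[mid:]),
    ]
    for done, (block, data) in enumerate(writes):
        if crash_after is not None and done >= crash_after:
            return  # power failure mid-operation
        disk[block] = data

def lookup(disk, name):
    """Follow the parent's child pointers, ignoring missing blocks."""
    for child in disk["parent"]:
        if child in disk and name in disk[child]:
            return True
    return False
```

Run to completion, the split is harmless; crash after the first write and `lookup` can no longer find entries that are still physically on the "disk" in the orphaned old node. Real filesystems avoid this by careful write ordering, journaling, or copy-on-write, so that a crash always leaves either the whole old state or the whole new state visible.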
Second, a filesystem probably wants to allow concurrent updates to directories while a program such as ls is reading through them, without readers seeing entries duplicated, skipped, or half-updated.
(This is somewhat challenging even for copy-on-write filesystems, because you don't want a program that is slowly reading through a directory to force the filesystem to keep old copies of it around indefinitely.)
The next question is whether non-linear directories are worth it in general on typical systems. A contiguous linear directory of even semi-modest size can often be completely read in a single disk IO, and once it's in memory a linear scan usually takes trivial time compared to the disk IO; the degenerate case is a single-block directory. This means that there is a minimum directory size (in number of entries and on-disk size) below which non-linear directories get you no benefit, because searching them takes just as much disk IO as searching a linear directory. How many directories on a typical system are over this size, and how much are they over? In addition, Unix systems have usually had very high cache hit ratios for 'hot' directories, which reduces the IO advantage that non-linear directories have.
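The single-block degenerate case can be sketched in a few lines of Python (a simulation of IO counting, not real directory code): the lookup reads whole blocks and scans them in memory, so for a directory that fits in one block the cost is one IO no matter how the entries inside it are arranged.

```python
def read_block(disk, blockno):
    """One simulated disk IO: fetch an entire block."""
    return disk[blockno]

def linear_dir_lookup(disk, dir_blocks, name):
    """Read a linear directory's blocks in order, scanning each in
    memory. Returns (number of block IOs, whether the name was found)."""
    ios = 0
    for blockno in dir_blocks:
        entries = read_block(disk, blockno)
        ios += 1
        if name in entries:  # the in-memory scan is cheap next to the IO
            return ios, True
    return ios, False
```

A 50-entry directory stored in one block costs exactly one IO whether the name is first, last, or absent; a fancier structure cannot do better than that.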
(There are certainly situations where non-linear directories win big, but these situations tend to be uncommon precisely because those situations have been big performance losers for much of Unix's life.)
Linear directories are not as efficient as better data structures, but they are much, much simpler to implement in a filesystem. Many of these issues have straightforward answers and often simple code will do the right thing. Non-linear directories are generally going to be much more complicated, with much more code and many more issues, and they don't get you very much except in uncommon situations. It should not be a surprise that most Unix filesystem implementors have had higher priorities and most filesystems have used some form of linear directories.
(Honesty in blogging time: I'm not a filesystem designer and I haven't gone looking to find out what the state of the art is here. It's possible that modern designs have found simple and effective solutions to these issues, so the problem isn't as hard as I think it is.)