Filesystems versus general tree structures
A while back I read Felix Kohlgrüber's The Tree Structure of File Systems (via). As I read it, it argues that filesystem directories should be a richer and more general data structure, one that could contain both data and children (which would make web paths map better onto the 'filesystem API'). As it happens, I think there is a relatively good reason that directories are organized this way, one that goes beyond simple history.
To put it simply, filesystem directories are optimized for out-of-memory traversal in order to look up names; that is, for being searched from disk. Generally their actual representation is some form of data structure filled with <name, reference> pairs, where the reference tells you where (on disk) to find almost all of the substantive information about the name. In the old days and in basic filesystems, the containing data structure was a list; in modern filesystems, it's usually some form of balanced tree (sometimes only once the directory is big enough). Directories are optimized to be searched with as few disk reads as possible, because disk reads are assumed to be expensive.
(Modern filesystems often put some sufficiently commonly used metadata about each name into the directory as well, because that way you can tell programs about it without them having to do a separate disk read, possibly a separate disk read for each name in the directory.)
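To make that concrete, here is a minimal sketch in C of the sort of <name, reference> entry this implies, plus the classic linear search through one directory block. This is not any real filesystem's on-disk format (it's loosely in the style of ext2's directory entries), and the struct fields and lookup_in_block() are illustrative names I've made up, not real kernel APIs:

    #include <stdint.h>
    #include <string.h>

    /* An illustrative directory entry: the reference (an inode number)
     * says where on disk the rest of the metadata about this name lives.
     * The 'type' field is the kind of commonly wanted metadata that some
     * modern filesystems cache in the entry so that callers don't need an
     * extra disk read per name just to learn it. */
    struct dir_entry {
        uint64_t inode;     /* reference: where the real metadata lives */
        uint16_t rec_len;   /* distance to the next entry in this block */
        uint8_t  name_len;
        uint8_t  type;      /* cached: regular file, directory, symlink, ... */
        char     name[];    /* the name itself, variable length */
    };

    /* Classic linear search of one directory block that has already been
     * read into memory: one disk read covers many names at once. */
    static uint64_t lookup_in_block(const char *block, size_t block_size,
                                    const char *want)
    {
        size_t off = 0;
        size_t want_len = strlen(want);

        while (off + sizeof(struct dir_entry) <= block_size) {
            const struct dir_entry *de =
                (const struct dir_entry *)(block + off);
            if (de->rec_len == 0)
                break;
            if (de->name_len == want_len &&
                memcmp(de->name, want, want_len) == 0)
                return de->inode;
            off += de->rec_len;
        }
        return 0;   /* not found */
    }

The important property is that a single disk read of a directory block hands you many <name, reference> pairs to check; the (hashed) b-tree indexing in modern filesystems is aimed at keeping the number of such block reads small even for very large directories.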
If you augment this basic data structure with data content associated with the directory (well, the directory name) itself, you don't want to put this new data inline with the lookup data structure, where it will force you to read more data from disk in order to do the same traversals. Instead it's most likely going to be referred to separately. There will be one set of disk storage associated with the inventory of the directory's children and a second set of disk storage for the data content. Entities without children will have a 'no such thing' marker for the first sort of disk storage and ones without data will have a 'no such thing' marker for the second. If a lot of filesystem entities only have one or the other (ie, if they're conventional files or directories), then much of the time one of the two slots is a waste of (limited) space in an area that filesystems have traditionally cared about (the metadata kept for most or all filesystem entities).
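As a hedged illustration of that waste, here's a hypothetical per-entity metadata record (made-up names, not any real filesystem's format) where every entity carries both references, with 0 standing in for 'no such thing':

    #include <stdint.h>

    #define NO_SUCH_THING 0

    /* A hypothetical per-entity record for a filesystem where everything
     * can have both children and data content. Each slot is a disk
     * address; a plain file wastes the children slot and a plain
     * directory wastes the content slot. */
    struct fs_entity {
        uint64_t children_ref;  /* where the <name, reference> inventory lives */
        uint64_t content_ref;   /* where the data content lives */
        /* ... ownership, permissions, timestamps, sizes, and so on ... */
    };

    /* A conventional file:
     *   { .children_ref = NO_SUCH_THING, .content_ref = <data blocks> }
     * A conventional directory:
     *   { .children_ref = <entry blocks>, .content_ref = NO_SUCH_THING } */

If most entities are conventional files or conventional directories, one of the two slots is dead weight in every such record.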
More broadly, I think it's a mistake to look at filesystems through the eyes of general tree structures. Filesystems originated in a very constrained situation and continue to be focused on a fairly constrained one, one where any indirect reference to something is very slow and the less you need to access, the better.
(It's true that modern memory references have gotten slower and slower relative to raw CPU speed, but they're still faster than even NVMe speeds. Plus, a lot of in-memory data structures are still not being designed by programmers to minimize references and pack things as densely as possible, for various reasons.)