A broad overview of how ZFS is structured on disk
When I wrote yesterday's entry, it became clear that I didn't understand as much about how ZFS is structured on disk (and that this matters, since I thought that ZFS copy on write updates updated a lot more than they do). So today I want to write down my new broad understanding of how this works.
(All of this can be dug out of the old, draft ZFS on-disk format specification, but that spec is written in a very detailed way and things aren't always immediately clear from it.)
Almost everything in ZFS is a DMU object. All objects are defined by a dnode, and object dnodes are almost always grouped together in an object set. Object sets are themselves DMU objects; they store dnodes as basically a giant array in a 'file', which uses data blocks and indirect blocks and so on, just like anything else. Within a single object set, dnodes have an object number, which is the index of their position in the object set's array of dnodes.
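As an illustration of the 'object number is just an array index' idea, here's a small Python sketch of where a given object number's dnode would land in the object set's dnode 'file'. The 512-byte dnode size is the classic on-disk dnode size; the 16 KB data block size is purely an assumption for the example, not something the real layout guarantees.

```python
# Sketch: locating a dnode within an object set's array of dnodes.
# Assumes the classic 512-byte on-disk dnode size and a 16 KB data
# block size for the dnode 'file'; real pools can differ.
DNODE_SIZE = 512
BLOCK_SIZE = 16 * 1024
DNODES_PER_BLOCK = BLOCK_SIZE // DNODE_SIZE  # 32 dnodes per block

def dnode_location(object_number):
    """Return (data block index, byte offset in block) for a dnode."""
    block = object_number // DNODES_PER_BLOCK
    offset = (object_number % DNODES_PER_BLOCK) * DNODE_SIZE
    return block, offset

print(dnode_location(10))  # -> (0, 5120): early in the first block
```

Under these assumptions, object number ten sits 5120 bytes into the object set's first data block, and larger object numbers simply land in later blocks of the same 'file'.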
(Because an object number is just the index of the object's dnode in its object set's array of dnodes, object numbers are basically always going to be duplicated between object sets (and they're always relative to an object set). For instance, pretty much every object set is going to have an object number ten, although not all object sets may have enough objects to reach much higher object numbers. One corollary of this is that if you ask zdb to tell you about a given object number, you have to tell zdb what object set you're talking about. Usually you do this by telling zdb which ZFS filesystem or dataset you mean.)
Each ZFS filesystem has its own object set for objects (and thus dnodes) used in the filesystem. As I discovered yesterday, every ZFS filesystem has a directory hierarchy and it may go many levels deep, but all of this directory hierarchy refers to directories and files using their object number.
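One way to picture 'directories refer to files by object number' is that each directory is essentially a table mapping names to object numbers, and resolving a path means looking up a name, fetching the dnode for the resulting object number, and repeating. Here's a toy Python model of that; all of the object numbers and names here are invented for illustration (in particular, the real root directory is not object number 1).

```python
# Toy model of a ZFS filesystem's object set: directories are just
# name -> object number tables, so everything is found by number.
# All object numbers and names here are invented for illustration.
object_set = {
    1: {"type": "directory", "entries": {"home": 7}},        # root dir
    7: {"type": "directory", "entries": {"notes.txt": 42}},
    42: {"type": "file", "contents": b"hello"},
}

def resolve(path):
    """Walk a path from the root, one object number lookup at a time."""
    objnum = 1  # the root directory's object number in this toy model
    for name in path.strip("/").split("/"):
        objnum = object_set[objnum]["entries"][name]
    return objnum

print(resolve("/home/notes.txt"))  # -> 42
```

The point of the indirection is visible here: renaming or rewriting `notes.txt`'s contents changes object 42 (and its dnode), but the directory entry in object 7 still just says '42' and doesn't have to change.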
ZFS organizes and keeps track of filesystems, clones, and snapshots through the DSL (Dataset and Snapshot Layer). The DSL has all sorts of things: DSL directories, DSL datasets, and so on, all of which are objects and many of which refer to object sets (for example, every ZFS filesystem must refer to its current object set somehow). All of these DSL objects are themselves stored as dnodes in another object set, the Meta Object Set, which the uberblock points to. To my surprise, object sets are not stored in the MOS (and as a result do not have 'object numbers'). Object sets are always referred to directly, without indirection, using a block pointer to the object set's dnode.
(I think object sets are referred to directly so that snapshots can freeze their object set very simply.)
The DSL directories and datasets for your pool's set of filesystems form a tree themselves (each filesystem has a DSL directory and at least one DSL dataset). However, just like in ZFS filesystems, all of the objects in this second tree refer to each other indirectly, by their MOS object number. Just as with files in ZFS filesystems, this level of indirection limits the amount of copy on write updates that ZFS has to do when something changes.
PS: If you want to examine MOS objects with zdb, I think you do it with something like 'zdb -vvv -d ssddata 1', which will get you object number 1 of the MOS, which is the MOS object directory. If you want to ask zdb about an object in the pool's root filesystem, it's 'zdb -vvv -d ssddata/ 1'. You can tell which one you're getting depending on what zdb prints out. If it says 'Dataset mos [META]' you're looking at objects from the MOS; if it says 'Dataset ssddata [ZPL]', you're looking at the pool's root filesystem (where object number 1 is the ZFS master node).
PPS: I was going to write up what changes on a filesystem write, but then I realized that I didn't know how blocks being allocated and freed are reflected in pool structures. So I'll just say that I think that, ignoring free space management, only four DMU objects get updated: the file itself, the filesystem's object set, the filesystem's DSL dataset object, and the MOS.
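That four-object update chain can be sketched as a copy on write propagation: rewriting the file's data rewrites its dnode, which changes the filesystem's object set, which changes the block pointer held by the filesystem's DSL dataset, which is itself a dnode in the MOS. This is only a cartoon of my understanding, not anything resembling real ZFS code:

```python
# Cartoon of the copy on write update chain for one file write,
# ignoring free space management. Each object here is just a
# version counter; anything that points at an updated object must
# itself be rewritten, so the update propagates upward.
objects = {"file": 1, "objset": 1, "dsl_dataset": 1, "mos": 1}

def write_to_file():
    objects["file"] += 1         # the file's dnode is rewritten...
    objects["objset"] += 1       # ...so part of its object set changes...
    objects["dsl_dataset"] += 1  # ...so the DSL dataset's block pointer
                                 # to that object set changes...
    objects["mos"] += 1          # ...so the MOS changes (and then the
                                 # uberblock, which is not a DMU object).

write_to_file()
print(objects)  # all four objects have picked up a new version
```

Everything that refers to these objects only by object number (directories, DSL directories, and so on) stays untouched, which is the whole point of the indirection.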
(As usual, doing the research to write this up taught me things that I didn't know about ZFS.)