Wandering Thoughts

2024-02-19

The flow of activity in the ZFS Intent Log (as I understand it)

The ZFS Intent Log (ZIL) is a confusing thing once you get into the details, and for reasons beyond the scope of this entry I recently needed to sort out the details of some aspects of how it works. So here is what I know about how things flow into the ZIL, both in memory and then on to disk.

(As always, there is no single 'ZFS Intent Log' in a ZFS pool. Each dataset (a filesystem or a zvol) has its own logically separate ZIL. We talk about 'the ZIL' as a convenience.)

When you perform activities that modify a ZFS dataset, each activity creates its own ZIL log record (a transaction in ZIL jargon, sometimes called an 'itx', probably short for 'intent transaction') that is put into that dataset's in-memory ZIL log. This includes both straightforward data writes and metadata activity like creating or renaming files. You can see a big list of all of the possible transaction types in zil.h as all of the TX_* definitions (which have brief useful comments). In-memory ZIL transactions aren't necessarily immediately flushed to disk, especially for things like simply doing a write() to a file. The reason that plain write()s to a file are (still) given ZIL transactions is that you may call fsync() on the file later. If you don't call fsync() and the regular ZFS transaction group commits with your write()s, those ZIL transactions will be quietly cleaned out of the in-memory ZIL log (along with all of the other now unneeded ZIL transactions).

(All of this assumes that your dataset doesn't have 'sync=disabled' set, which turns off the in-memory ZIL as one of its effects.)

When you perform an action such as fsync() or sync() that requests that in-memory ZFS state be made durable on disk, ZFS gathers up some or all of those in-memory ZIL transactions and writes them to disk in one go, as a sequence of log (write) blocks ('lwb' or 'lwbs' in ZFS source code), which pack together those ZIL transaction records. This is called a ZIL commit. Depending on various factors, the flushed out data you write() may or may not be included in the log (write) blocks committed to the (dataset's) ZIL. Sometimes your file data will be written directly into its future permanent location in the pool's free space (which is safe) and the ZIL commit will have only a pointer to this location (its DVA).

(For a discussion of this, see the comments about the WR_* constants in zil.h. Also, while in memory, ZIL transactions are classified as either 'synchronous' or 'asynchronous'. Sync transactions are always part of a ZIL commit, but async transactions are only included as necessary. See zil_impl.h and also my entry discussing this.)

It's possible for several processes (or threads) to all call sync() or fsync() at once (well, before the first one finishes committing the ZIL). In this case, their requests can all be merged together into one ZIL commit that covers all of them. This means that fsync() and sync() calls don't necessarily match up one to one with ZIL commits. I believe it's also possible for a fsync() or sync() to not result in a ZIL commit if all of the relevant data has already been written out as part of a regular ZFS transaction group (or a previous request).

Because of all of this, there are various different ZIL related metrics that you may be interested in, sometimes with picky but important differences between them. For example, there is a difference between 'the number of bytes written to the ZIL' and 'the number of bytes written as part of ZIL commits', since the latter would include data written directly to its final space in the main pool. You might care about the latter when you're investigating the overall IO impact of ZIL commits but the former if you're looking at sizing a separate log device (a 'slog' in ZFS terminology).
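If you want to poke at some of these numbers yourself, Linux OpenZFS exposes global ZIL counters in /proc/spl/kstat/zfs/zil. Here's a rough sketch of reading them; the counter names are the ones I believe OpenZFS 2.1 uses (newer versions also grow per-dataset ZIL kstats), so treat this as illustrative rather than definitive.

	#!/usr/bin/env python3
	# Rough sketch: summarize ZIL write volume from the Linux OpenZFS kstats.
	# Assumes the global /proc/spl/kstat/zfs/zil file and OpenZFS 2.1-ish
	# counter names; both may differ in other versions.

	def read_kstats(path="/proc/spl/kstat/zfs/zil"):
	    stats = {}
	    with open(path) as f:
	        for line in f.readlines()[2:]:      # skip the two kstat header lines
	            name, _ktype, value = line.split()
	            stats[name] = int(value)
	    return stats

	z = read_kstats()

	# Bytes that went into ZIL log blocks, whether on the main pool or a slog.
	lwb_bytes = (z.get("zil_itx_metaslab_normal_bytes", 0) +
	             z.get("zil_itx_metaslab_slog_bytes", 0))
	# Bytes of file data written directly to their final location in the pool,
	# with only a pointer recorded in the ZIL ('indirect' writes).
	indirect_bytes = z.get("zil_itx_indirect_bytes", 0)

	print("ZIL commits:                         ", z.get("zil_commit_count", 0))
	print("bytes written to the ZIL:            ", lwb_bytes)
	print("bytes written as part of ZIL commits:", lwb_bytes + indirect_bytes)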

ZFSZILActivityFlow written at 21:58:13; Add Comment

2023-10-29

One reason that ZFS can't turn a directory into a filesystem

One of the wishes that I and other people frequently have for ZFS is the ability to take an existing directory (and everything underneath it) in a ZFS filesystem and turn it into a sub-filesystem of its own. One reason for wanting this is that a number of things are set and controlled on a per-filesystem basis in ZFS, instead of on a per-directory basis; if you have a (sub)directory where you want any special value for those, you need to make it a filesystem of its own. Often you don't realize this until the directory already exists and has been populated, and only then discover the need for the special setting values. Today I realized that one reason ZFS doesn't have this feature is how ZFS filesystems are put together.

ZFS is often described as tree structured, and this is generally true; a lot of things in a ZFS pool are organized into a tree of objects. However, while filesystems are a tree at the logical level of directories and subdirectories, they aren't a tree as represented on disk. Directories in ZFS filesystems don't directly point to the disk addresses of their contents; instead, ZFS filesystems have a flat, global table of object numbers (effectively inode numbers) and all directory entries refer to things by object number. Since ZFS is a copy on write filesystem, this level of indirection is quite important in reducing how much has to be updated when a file or a directory is changed.

If ZFS filesystems used tree structured references at the level of directory entries (and we ignored hardlinks), it would make conceptual sense that you could take a directory object, pull it into a new filesystem, and patch its reference in its parent directory. All of the object references in the tree under the directory would stay the same; they would just be in a new container, the new filesystem. Filesystems would essentially be cut points in the overall object tree.

However, you can't make this model work when filesystems have a single global space of object numbers that are used in directory entries. A new filesystem has its own new table of object numbers, and you would have to move all of the objects referred to by the directory hierarchy into this new table, which means you'd have to walk the directory tree to find them all and then possibly update all of the directories if you changed their object numbers as part of putting them in a new object (number) table. This isn't the sort of work that you should be asking a filesystem to do in the kernel; it's much more suited for a user level tool.
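To make the object-number issue concrete, here's a toy model (plain Python dicts; real ZFS directories are ZAP objects and the object table is an array of dnodes, so this is only illustrative). It shows that cutting a directory out into a new filesystem means walking everything under it, renumbering it into a fresh object table, and rewriting every directory entry along the way.

	# Directories map names to object numbers, not to the objects themselves.
	fs_objects = {
	    1: {"type": "dir",  "entries": {"projects": 2}},              # root
	    2: {"type": "dir",  "entries": {"report.txt": 3, "data": 4}},
	    3: {"type": "file"},
	    4: {"type": "dir",  "entries": {"raw.csv": 5}},
	    5: {"type": "file"},
	}

	def split_into_new_fs(old_fs, dir_objnum):
	    """Turning object 2 ('projects') into its own filesystem means walking
	    everything under it, renumbering it into a brand new object table, and
	    rewriting every directory entry that used an old object number."""
	    new_fs, mapping, next_num = {}, {}, 1
	    todo = [dir_objnum]
	    while todo:
	        objnum = todo.pop()
	        obj = old_fs.pop(objnum)
	        mapping[objnum] = next_num
	        new_fs[next_num] = obj
	        next_num += 1
	        if obj["type"] == "dir":
	            todo.extend(obj["entries"].values())
	    # Second pass: rewrite every directory entry with the new object numbers.
	    for obj in new_fs.values():
	        if obj["type"] == "dir":
	            obj["entries"] = {name: mapping[o] for name, o in obj["entries"].items()}
	    # (The parent directory in the old filesystem would also need patching
	    # to refer to the new filesystem instead of old object 2.)
	    return new_fs

	print(split_into_new_fs(fs_objects, 2))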

Now that I've thought of this, it's even more understandable why ZFS doesn't have this feature, however convenient it would be for me, and why it never will.

(Hardlinks by themselves probably cause enough heartburn to sink a feature to turn a directory into a filesystem, although I can see ways to deal with them if you try hard enough.)

ZFSWhyNotDirectoryToFilesystem written at 22:43:07; Add Comment

2023-09-08

How changing a ZFS filesystem's recordsize affects existing files

The ZFS 'recordsize' property (on filesystems) is a famously confusing ZFS property that is more or less the maximum logical block size of files on the filesystem. However, you're allowed to change it even after the filesystem has been created, so this raises the question of what happens with existing files when you do so. The simple answer is that existing files are unaffected and continue to use the old recordsize. The technical answer can be a lot more complicated, and to understand it I'm going to start from the beginning.

Simplifying slightly, ZFS files (in fact all filesystem objects) are made up of zero or more logical blocks, which are all the same logical size (they may be different physical sizes on disk, for example because of ZFS compression). How big these blocks are is the file's (current) (logical) block size; all files have a logical block size. Normally there are two cases for the block size: either there is a single block, and it (and thus the file's logical block size) is still growing toward the filesystem's recordsize, or there is more than one logical block and the file's logical block size is frozen; all additional logical blocks added to the file will use that frozen logical block size, whatever it is.

Under normal circumstances, a file will get its second logical block (and freeze its logical block size) at the point where its size goes above the filesystem's recordsize, which makes the file's final logical block size be the filesystem's recordsize. Sometimes this will be right away, when the file is created, because you wrote enough data to it on the spot. Sometimes this might be months after it's created, if you're appending a little bit of data to a log file once a day and your filesystem has the default 128 Kbyte recordsize (or a larger one).

So now we have some cases when you change recordsize. The simple one is that all files that have more than one logical block stick at the old recordsize, because that is their final logical block size (assuming you've only changed recordsize once; if you've changed it more than once, you could have a mixture of these sizes).

If a file has only a single logical block, its size (and logical block size) might be either below or above the new recordsize. If the file's single logical block is smaller than the new recordsize, the single block will grow up to the new recordsize, and only after that will the file add its second logical block and freeze its logical block size. This is just the same as if the file was created from scratch under your new recordsize. This means that if you raise the recordsize, all files that currently have only one block will wind up using your new recordsize.

If the file's single block is larger than your new recordsize, the file will continue growing the logical block size of this single block up until it hits the next power of two size; after that it adds a second logical block and freezes the file's logical block size. So if you started out with a 512k recordsize, wrote a 200k file, set recordsize down to 128k, and continue writing to the file, it will wind up with a 256k logical block size. If you had written exactly 256k before lowering recordsize, the file would not grow its logical block size to 512k, because it would already be at a power of two.

In other words, there is an invariant that ZFS files with more than one block always have a logical block size that is a power of two. This likely exists because it makes it much easier to calculate which logical block a given offset falls into.

This means that if you lower the recordsize, all current files that have only a single block may wind up with an assortment of logical block sizes, depending on their current size and your new recordsize. If you drop recordsize from 128k to 16k, you could wind up with a collection of files that variously have 32k, 64k, and 128k logical block sizes.
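Here's a small sketch (my own Python, not ZFS code) of the rules as I've described them, applied to the 200k-file example above.

	def next_power_of_two(n):
	    """Smallest power of two that is >= n."""
	    p = 1
	    while p < n:
	        p *= 2
	    return p

	def final_block_size(num_blocks, current_block_size, new_recordsize):
	    # Files with more than one block keep their existing logical block size.
	    if num_blocks > 1:
	        return current_block_size
	    # A single block no bigger than the new recordsize grows up to it.
	    if current_block_size <= new_recordsize:
	        return new_recordsize
	    # A single block bigger than the new recordsize grows to the next power
	    # of two (or stays put if it is already one).
	    return next_power_of_two(current_block_size)

	# The example above: a 200k single-block file, recordsize dropped to 128k.
	print(final_block_size(1, 200 * 1024, 128 * 1024) // 1024, "KiB")   # 256 KiB
	# Written as exactly 256k before the change: it stays at 256k.
	print(final_block_size(1, 256 * 1024, 128 * 1024) // 1024, "KiB")   # 256 KiB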

(This is another version of an email I wrote to the ZFS mailing list today.)

ZFSRecordsizeChangeEffects written at 23:24:52; Add Comment

2023-05-13

The paradox of ZFS ARC non-growth and ARC hit rates

We have one ZFS fileserver that sometimes spends quite a while (many hours) with a shrunken ARC size, one that is tens of gigabytes below its (shrunken) ARC target size. Despite that, its ARC hit rate is still really high. Well, actually, that's not surprising; that's kind of a paradox of ARC growth (for both actual size and target size). This is because of the combination of two obvious things: the ARC only grows when it needs to, and a high ARC hit rate means that the ARC isn't seeing much need to grow. More specifically, for reads the ARC only grows when there is a read ARC miss. If your ARC target size is 90 GB, your current ARC size is 40 GB, and your ARC hit rate is 100%, it doesn't matter that you have 50 GB of spare RAM, because the ARC has pretty much nothing to put in it.
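As a concrete illustration, here's a quick sketch that reads the Linux OpenZFS arcstats and reports the overall hit rate next to how far the ARC's actual size is below its target; the kstat names ('hits', 'misses', 'size', 'c') are as I understand them for OpenZFS 2.1.

	def arcstats(path="/proc/spl/kstat/zfs/arcstats"):
	    stats = {}
	    with open(path) as f:
	        for line in f.readlines()[2:]:      # skip the kstat header lines
	            name, _ktype, value = line.split()
	            stats[name] = int(value)
	    return stats

	a = arcstats()
	hits, misses = a["hits"], a["misses"]
	hit_rate = 100.0 * hits / (hits + misses) if hits + misses else 0.0
	GiB = 2 ** 30

	print(f"ARC hit rate: {hit_rate:.2f}%  ({hits} hits, {misses} misses)")
	print(f"ARC size:     {a['size'] / GiB:.1f} GiB")
	print(f"ARC target c: {a['c'] / GiB:.1f} GiB")
	print(f"headroom:     {(a['c'] - a['size']) / GiB:.1f} GiB below target")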

This means that your ARC growth rate will usually be correlated with your ARC miss rate, or rather your ARC miss volume (which unfortunately I don't think there are kstats for). The other thing the ARC growth rate can be correlated with is your write volume (because many writes go into the ARC on their way to disk, although I'm not certain all of them do). However, ARC growth from write volume can be a transient thing; if you write something and then delete it, ZFS will first put it in the ARC and then drop it from the ARC.

(Deleting large amounts of data that was in the ARC is one way to rapidly drop the ARC size. If your ARC size shrinks rapidly without the target size shrinking, this is probably what's happened. This data may have been recently written, or it might have been read and then deleted.)

This is in a sense both obvious and general. All disk caches only increase their size while reading if there are cache misses; if they don't have cache misses, nothing happens. ZFS is only unusual in that we worry and obsess over the size of the ARC and how it fluctuates, rather than assuming that it will all just work (for good reasons, especially on Linux, but even on Solaris and later Illumos, the ZFS ARC size was by default constrained to much less than the regular disk cache might have grown to without ZFS).

ZFSARCGrowthParadox written at 21:48:38; Add Comment

2023-04-25

Understanding ZFS ARC hit (and miss) kstat statistics

The ZFS ARC exposes a number of kstat statistics about its hit and miss performance, which are obviously quite relevant for understanding if your ARC size and possibly its failure to grow are badly affecting you, or if your ARC hit rate is fine even with a smaller than expected ARC size. Complicating the picture are things like 'MFU hits' and 'MFU ghost hits', where it may not be clear how they relate to plain 'ARC hits'.

There are a number of different things that live in the ZFS ARC, each of which has its own size. Further, the disk blocks in the ARC (both 'data' and 'metadata') are divided between a Most Recently Used (MRU) portion and a Most Frequently Used (MFU) portion (I believe other things like headers aren't in either the MRU or MFU). As covered in eg ELI5: ZFS Caching, the MFU and MRU also have 'ghost' versions of themselves; to simplify, these track what would be in memory if the MFU (or MRU) portion used all of memory.

The MRU, MFU, and the ghost versions of themselves give us our first set of four hit statistics: 'mru_hits', 'mfu_hits', 'mru_ghost_hits', and 'mfu_ghost_hits'. These track blocks that were found in the real MRU or found in the real MFU, in which case they are actually in RAM, or found in the ghost MRU and MFU, in which case they weren't in RAM but theoretically could have been. As covered in ELI5: ZFS Caching, ZFS tracks the hit rates of the ghost MRU and MFU as signs for when to change the balance between the size of the MRU and MFU. If a block wasn't even in the ghost MFU or MRU, there is no specific kstat for it and we have to deduce that from comparing MRU and MFU ghost hits with general misses.

However, what we really care about for ARC hits and misses is whether the block actually was in the ARC (in RAM) or whether it had to be read off disk. This is what the general 'hits' and 'misses' kstats track, and they do this independently of the MRU and MFU hits (and ghost 'hits'). At this level, all hits and misses can be broken down into one of four categories: demand data, demand metadata, prefetch data, and prefetch metadata (more on this breakdown is in my entry on ARC prefetch stats). Each of these four has hit and miss kstats associated with them, named things like 'demand_data_misses'. As far as I understand it, a 'prefetch' hit or miss means that ZFS was trying to prefetch something and either already found it in the ARC or didn't. A 'demand' read is from ZFS needing it right away.

(This implies that the same ZFS disk block can be a prefetch miss, which reads it into the ARC from disk, and then later a demand hit, when the prefetching paid off and the actual read found it in the ARC.)
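Here's a sketch of pulling out this four-way breakdown from the Linux arcstats; the counter names ('demand_data_hits' and so on) are the OpenZFS 2.1 ones as I understand them.

	def arcstats(path="/proc/spl/kstat/zfs/arcstats"):
	    stats = {}
	    with open(path) as f:
	        for line in f.readlines()[2:]:      # skip the kstat header lines
	            name, _ktype, value = line.split()
	            stats[name] = int(value)
	    return stats

	a = arcstats()
	for kind in ("demand_data", "demand_metadata",
	             "prefetch_data", "prefetch_metadata"):
	    h, m = a[f"{kind}_hits"], a[f"{kind}_misses"]
	    rate = 100.0 * h / (h + m) if h + m else 0.0
	    print(f"{kind:18s} {h:>14d} hits {m:>14d} misses  ({rate:.1f}% hit rate)")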

In the latest development version of OpenZFS, which will eventually become 2.2, there is an additional category of 'iohits'. An 'iohit' happens when ZFS wants a disk block that already has active IO issued to read it into the ARC, perhaps because there is active prefetching on it. Like 'hits' and 'misses', this has the four demand vs prefetch and data vs metadata counters associated with it. I'm not quite sure how these iohits are counted in OpenZFS 2.1, and some of them may slip through the cracks depending on the exact properties associated with the read (although the change that introduced iohits suggests that they may previously have been counted as 'hits').

If you want to see how your ARC is doing, you want to look at the overall hits and misses. The MRU and MFU hits, especially the 'ghost' hits (which are really misses), strike me as less interesting. If you have ARC misses happening (which leads to actual read IO) and you want to know roughly why, you want to look at the breakdown of the demand vs prefetch and data vs metadata 'misses' kstats.

It's tempting to look at MRU and MFU ghost 'hits' as a percentage of misses, but I'm not sure this tells you much; it's certainly not very high on our fileservers. Somewhat to my surprise, the sum of MFU and MRU hits is just slightly under the overall number of ARC 'hits' on all of our fileservers (which use ZoL 2.1). However, they're exactly the same on my desktops, which run the development version of ZFS on Linux and so have an 'iohits'. So possibly in 2.1, you can infer the number of 'iohits' from the difference between overall hits and MRU + MFU hits.
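Here's a sketch of that comparison, again using the 2.1-era arcstats names; the 'difference' line is the number that I suspect roughly corresponds to what 2.2 counts as 'iohits'.

	def arcstats(path="/proc/spl/kstat/zfs/arcstats"):
	    stats = {}
	    with open(path) as f:
	        for line in f.readlines()[2:]:      # skip the kstat header lines
	            name, _ktype, value = line.split()
	            stats[name] = int(value)
	    return stats

	a = arcstats()
	mru_mfu = a["mru_hits"] + a["mfu_hits"]
	ghost = a["mru_ghost_hits"] + a["mfu_ghost_hits"]

	print("overall hits:   ", a["hits"])
	print("mru + mfu hits: ", mru_mfu)
	print("difference:     ", a["hits"] - mru_mfu)
	if a["misses"]:
	    print(f"ghost 'hits' as a fraction of misses: {100.0 * ghost / a['misses']:.1f}%")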

(I evidently worked much of this out years ago since our ZFS ARC stats displays in our Grafana ZFS dashboards work this way, but I clearly didn't write it down back then. This time around, I'm fixing that for future me.)

ZFSUnderstandingARCHits written at 23:15:30; Add Comment

2023-04-14

The various sizes of the ZFS ARC (as of OpenZFS 2.1)

The ZFS ARC is ZFS's version of a disk cache. Further general information on it can be found in two highly recommended sources, Brendan Gregg's 2012 Activity of the ZFS ARC and Allan Jude's FOSDEM 2019 ELI5: ZFS Caching (also, via). ZFS exposes a lot of information about the state of the ARC through kstats, but there isn't much documentation about what a lot of them mean. Today we're going to talk about some of the kstats related to size of the ARC. I'll generally be using the Linux OpenZFS kstat names exposed in /proc/spl/kstat/zfs/arcstats.

The current ARC total size in bytes is size. The ARC is split into a Most Recently Used (MRU) portion and a Most Frequently Used (MFU) portion; the two sizes of these are mru_size and mfu_size. Note that the ARC may contain more than MRU and MFU data; it also holds other things, so size is not necessarily the same as the sum of mru_size and mfu_size.

The ARC caches both ZFS data (which includes not just file contents but also the data blocks of directories) and metadata (ZFS dnodes and other things). All space used by the ARC falls into one of a number of categories, which are accounted for in the following kstats:

data_size, metadata_size, bonus_size, dnode_size, dbuf_size, hdr_size, l2_hdr_size, and abd_chunk_waste_size

('abd' is short for 'ARC buffered data'. In Linux you can see kstats related to it in /proc/spl/kstat/zfs/abdstats.)

Generally data_size and metadata_size will be the largest two components of the ARC size; I believe they cover data actually read off disk, with the other sizes being ZFS in-RAM data structures that are still included in the ARC. The l2_hdr_size will be zero if you have no L2ARC. There is also an arc_meta_used kstat; this rolls up everything except data_size and abd_chunk_waste_size as one number that is basically 'metadata in some sense'. This combined number is important because it's limited by arc_meta_limit.

(There is also an arc_dnode_limit, which I believe effectively limits dnode_size specifically, although dnode_size can go substantially over it under some circumstances.)
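As a sketch, here's one way to add the components up yourself and compare the totals against 'size' and 'arc_meta_used'; the kstat names are the OpenZFS 2.1 arcstats ones as I understand them, and some of them change or disappear in later versions.

	def arcstats(path="/proc/spl/kstat/zfs/arcstats"):
	    stats = {}
	    with open(path) as f:
	        for line in f.readlines()[2:]:      # skip the kstat header lines
	            name, _ktype, value = line.split()
	            stats[name] = int(value)
	    return stats

	a = arcstats()
	parts = ["data_size", "metadata_size", "bonus_size", "dnode_size",
	         "dbuf_size", "hdr_size", "l2_hdr_size", "abd_chunk_waste_size"]

	total = 0
	for p in parts:
	    val = a.get(p, 0)
	    total += val
	    print(f"{p:22s} {val:>15d}")

	print(f"{'sum of components':22s} {total:>15d}")
	print(f"{'size':22s} {a['size']:>15d}")

	# 'Metadata' in the broad sense: everything except data and ABD chunk waste.
	broad_meta = total - a.get("data_size", 0) - a.get("abd_chunk_waste_size", 0)
	print(f"{'computed broad meta':22s} {broad_meta:>15d}")
	print(f"{'arc_meta_used':22s} {a.get('arc_meta_used', 0):>15d}")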

When ZFS reads data from disk, in the normal configuration it stores it straight into the ARC in its on-disk form. This means that it may be compressed; even if you haven't turned on ZFS on disk compression for your data, ZFS uses it for metadata. The ARC has two additional sizes to reflect this; compressed_size is the size in RAM, and uncompressed_size is how much this would expand to if it was all uncompressed. There is also overhead_size, which, well, let's quote include/sys/arc_impl.h:

Number of bytes stored in all the arc_buf_t's. This is classified as "overhead" since this data is typically short-lived and will be evicted from the arc when it becomes unreferenced unless the zfs_keep_uncompressed_metadata or zfs_keep_uncompressed_level values have been set (see comment in dbuf.c for more information).

Things counted in overhead_size are not counted in the compressed and uncompressed size; they move back and forth in the code as their state changes. I believe that the compressed size plus the overhead size will generally be equal to data_size + metadata_size, ie both cover 'what is in RAM that has been pulled off disk', but in different forms.

Finally we get to the ARC's target size, the famous (or infamous) 'arc_c' or just 'c'. If the target size is larger than size, the ARC will grow as you read (or write) things that aren't in it, and if it's smaller than size the ARC will shrink. The ARC's actual size can shrink for other reasons, but the target size shrinking is a slower and more involved thing to recover from.

In OpenZFS 2.1 and before, there is a second target size statistic, 'arc_p' or 'p' (in arcstats); this is apparently short for 'partition', and is the target size for the Most Recently Used (MRU) portion of the ARC. The target size for the MFU portion is 'c - p' and isn't explicitly put into kstats. How 'c' (and 'p') get changed is a complicated topic that is going in another entry.

(In the current development version of OpenZFS, there's a new and different approach to MFU/MRU balancing (via); this will likely be in OpenZFS 2.2, whenever that is released, and may appear in a system near you before then, depending. The new system is apparently better, but its kstats are more opaque.)

Appendix: The short form version

size: Current ARC size in bytes. It is composed of data_size + metadata_size + bonus_size + dnode_size + dbuf_size + hdr_size + l2_hdr_size + abd_chunk_waste_size.
arc_meta_used: All of size other than data_size + abd_chunk_waste_size; 'metadata' in a broad sense, as opposed to the narrow sense of metadata_size.
mru_size: Size of the MRU portion of the ARC.
mfu_size: Size of the MFU portion of the ARC.
arc_meta_limit: Theoretical limit on arc_meta_used.
arc_dnode_limit: Theoretical limit on dnode_size.
c (aka arc_c): The target for size.
p (aka arc_p): The target for mru_size.
c - p: The target for mfu_size.

I believe that generally the following holds:

compressed_size + overhead_size = data_size + metadata_size

In OpenZFS 2.1 and earlier, there is no explicit target for MRU data as separate from MRU metadata. In OpenZFS 2.2, there will be.
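Here's a sketch that checks these relationships on a live system, using the OpenZFS 2.1-era arcstats names; these are rough identities as I understand them, not guarantees.

	def arcstats(path="/proc/spl/kstat/zfs/arcstats"):
	    stats = {}
	    with open(path) as f:
	        for line in f.readlines()[2:]:      # skip the kstat header lines
	            name, _ktype, value = line.split()
	            stats[name] = int(value)
	    return stats

	GiB = 2 ** 30
	a = arcstats()

	print(f"size {a['size'] / GiB:.2f} GiB   target c {a['c'] / GiB:.2f} GiB")
	print(f"MRU target p {a['p'] / GiB:.2f} GiB   "
	      f"MFU target c - p {(a['c'] - a['p']) / GiB:.2f} GiB")
	print(f"mru_size {a['mru_size'] / GiB:.2f} GiB   "
	      f"mfu_size {a['mfu_size'] / GiB:.2f} GiB")

	lhs = a["compressed_size"] + a["overhead_size"]
	rhs = a["data_size"] + a["metadata_size"]
	print(f"compressed + overhead: {lhs / GiB:.2f} GiB   "
	      f"data + metadata: {rhs / GiB:.2f} GiB")
	print(f"uncompressed would be: {a['uncompressed_size'] / GiB:.2f} GiB")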

ZFSARCItsVariousSizes written at 23:37:55; Add Comment

2023-03-28

An interesting yet ordinary consequence of ZFS using the ZIL

On the Fediverse, Alan Coopersmith recently shared this:

@bsmaalders @cks writing a temp file and renaming it also avoids the failure-to-truncate issues found in screenshot cropping tools recently (#aCropalypse), but as some folks at work recently discovered, you need to be sure to fsync() before the rename, or a failure at the wrong moment can leave you with a zero-length file instead of the old one as the directory metadata can get written before the file contents data on ZFS.

On the one hand, this is perfectly ordinary behavior for a modern filesystem; often renames are synchronous and durable, but if you create a file, write it, and then rename it to something else, you haven't ensured that the data you wrote is on disk, just that the renaming is. On the other hand, as someone who's somewhat immersed in ZFS this initially felt surprising to me, because ZFS is one of the rare filesystems that enforces a strict temporal order on all IO operations in its core IO model of ZFS transaction groups.

How this works is that everything that happens in a ZFS filesystem goes into a transaction group (TXG). At any given time there's only one open TXG and TXGs commit in order, so if B is issued after A, either it's in the same TXG as A and the two happen together, or it's in a TXG after A and so A has already happened. With transaction groups, you can never have B happen but A not happen. In the TXG mental model of ZFS IO, this data loss is impossible, since the rename happened after the data write.

However, all of this strict TXG ordering goes out the window once you introduce the ZFS Intent Log (ZIL), because the ZIL's entire purpose is to persist selected operations to disk before they're committed as part of a transaction group. Renames and file creations always go in the ZIL (along with various other metadata operations), but file data only goes in the ZIL if you fsync() it (this is a slight simplification, and file data isn't necessarily directly in the ZIL).

So once the ZIL was in my mental model I could understand what had happened. In effect the presence of the ZIL had changed ZFS from a filesystem with very strong data ordering properties to one with more ordinary ones, and in such a more ordinary filesystem you do need to fsync() your newly written file data to make it durable.
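In concrete terms, the durable version of the write-then-rename pattern looks something like this (a Python sketch; the temporary file name choice and the directory fsync at the end are my own additions, not anything ZFS specific).

	import os

	def replace_file_safely(path, data):
	    tmp = path + ".tmp"                    # illustrative temporary name choice
	    with open(tmp, "wb") as f:
	        f.write(data)
	        f.flush()
	        os.fsync(f.fileno())               # the data is durable before the rename
	    os.rename(tmp, path)                   # atomically swap in the new version
	    # Optionally fsync the directory as well, so the rename itself is durable
	    # even on filesystems that don't make renames synchronous on their own.
	    dfd = os.open(os.path.dirname(os.path.abspath(path)), os.O_RDONLY)
	    try:
	        os.fsync(dfd)
	    finally:
	        os.close(dfd)

	replace_file_safely("settings.conf", b"option = value\n")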

(And under normal circumstances ZFS always has the ZIL, so I was engaging in a bit of skewed system programmer thinking.)

ZFSNaturalZILConsequence written at 22:48:43; Add Comment

2022-11-13

I wouldn't use ZFS for swap (either for swapfiles or with a zvol)

As part of broadly charting how Linux finds where to write and read swap blocks, I recently noted that ZFS on Linux can't be used to hold a swapfile. David Magda noted that you could get around this by creating a zvol and using it for swap. While the Linux kernel will accept this and it works, at least to some extent, I wouldn't rely on it and I wouldn't do it unless I was desperate and had no other choice. Fundamentally, swapping to ZFS is not in accordance with what people (and often Unix kernels) expect from writing pages out to swap.

(On Linux, the Arch wiki has some definite cautions.)

Both the Unix kernel and Unix system administrators expect swapping pages out (and reading them back in) to be low overhead operations, things that are very close to writing a block of memory to some disk blocks or reading disk blocks into (preallocated) memory. This is not how ZFS works, even for writes to zvols. Due to ZFS's fundamental decision to never overwrite data in place, writing out blocks to a zvol requires allocating new space for them in the ZFS pool, collecting all of the relevant changes together, and then writing out a transaction group (or perhaps writing the blocks to a ZFS Intent Log). This more or less intrinsically requires allocating memory for various internal ZFS book-keeping, as well as obtaining various ZFS-related locks (for example, ones that protect the data structures that track free blocks). And it winds up doing a lot more IO than just the direct pages of memory being written to swap.

A lot of the time this will all work. Often you aren't swapping under heavy memory pressure; you're just pushing some unused pages out to swap, and it's okay for this to take a while and allocate some extra memory and so on. But, regardless of whether it works, it's much more complicated than swapping is normally supposed to be, and that makes it more chancy and less predictable. All of this leaves me feeling that swap to ZFS (and to anything similar to it) is for very unusual situations, not normal operation. If I didn't expect to really ever need swap on a system, I think I'd rather have no swap than swap on a zvol.

(ZFS isn't the only swap environment that has this problem. Swapping to a file on NFS has many of the same issues, and is also something that I can't recommend.)

ZFS is good for many things, but not everything, and one of the things it's not good at is very low overhead direct write IO. This is by design, since you can't combine it with copy on write and ZFS decided the latter was more important (and I agree with it).

ZFSForSwapMyViews written at 21:52:35; Add Comment

2022-10-31

I wish ZFS supported per-user reservations, not just per-user quotas

ZFS supports a variety of ways to control space usage in filesystems. You can set a quota or a reservation on a filesystem, and you can set disk space and object count quotas on users, groups, and 'projects' in a filesystem. However, if you look at this list you'll notice an omission; you can't set a reservation for users, groups, or projects in a filesystem. There are some situations (at least in our world) where this would be convenient to have.

The most common case that comes up is that we have a bunch of people in a single filesystem, some of whom may fill up the filesystem by accident in the course of their work and others (such as professors) who we always want to be able to use some additional space so they can keep working. This is the ideal situation for a positive reservation instead of a negative quota, since what we want to put a limit on is the pool of space used by a group of people.

(The real ZFS answer is to put people who need reservations in their own filesystems because filesystems are cheap. But moving people from one filesystem to another is often rather disruptive and not trivial to coordinate, so often it doesn't get seriously contemplated until actual problems happen.)

OpenZFS has supported 'project quotas' since version 0.8.0, as covered in zfs-project(8) and zfs-projectspace(8). Project quotas can be used to give a single person (or group of people) a reservation in a filesystem, by putting their directories into a new project and then putting a project quota limit on the default project. However, you can't use this to give two people each a reservation of their own without putting quotas on each of them too, which is potentially (very) undesirable.

(ZFS project quotas appear to be in the current version of Illumos but I'm not sure when they appeared. They may have been added to the tree in August of 2019, per issue #11479.)

I don't have any personal experience with project quotas. Our Ubuntu ZFS fileservers are still running Ubuntu 18.04, which is too old to support them, and even once we upgrade to 22.04 we probably won't try it because of the various challenges of administering and managing them.

PS: Since ZFS supports project quotas, it also supports tracking space usage by 'project'. Here 'project' is basically 'whatever you want to tag with some unique identifier', which means that you could go through and tag every top level directory in a filesystem with a separate project ID so you could easily get reports on how much space is in use in each of them. Ordinary people probably just use 'du -hs'.

PPS: I think it would be reasonable to require the filesystem to have a reservation that was at least as big as the sum of all of the user reservations in it (or the user, group, and project ones if you wanted to support all of those).

ZFSPerUserReservationWish written at 19:14:10; Add Comment

2022-09-20

Why the ZFS ZIL's "in-place" direct writes of large data are safe

I recently read ZFS sync/async + ZIL/SLOG, explained (via), which reminded me that there's a clever but unsafe seeming thing that ZFS does here, that's actually safe because of how ZFS works. Today, I'm going to talk about why ZFS's "in-place" direct writes to main storage for large synchronous writes are safe, despite that perhaps sounding dangerous.

ZFS periodically flushes writes to disk as part of a ZFS transaction group; these days a transaction group commit happens every five seconds by default. However, sometimes programs want data to be sent to disk sooner than that (for example, your editor saving a file; it will sync the file in one way or another at the end, so that you don't lose it if there's a system crash immediately afterward). To do this, ZFS has the ZFS Intent Log (ZIL), which is a log of all write operations since the last transaction group where ZFS promised programs that the writes were durably on disk (to simplify a bit). If the system crashes before the writes can be sent to disk normally as part of a transaction group, ZFS can replay the ZIL to recreate them.

Taken by itself, this means that ZFS does synchronous writes twice, once to the ZIL as part of making them durable and then a second time as part of a regular transaction group. As an optimization, under the right circumstances (which are complicated, especially with a separate log device) ZFS will send those synchronous writes directly to their final destination in your ZFS pool, instead of to the ZIL, and then simply record a pointer to the destination in the ZIL. This sounds dangerous, since you're writing data directly into the filesystem (well, the pool) instead of into a separate log, and in a different filesystem it might be. What makes it safe in ZFS is that in ZFS, all writes go to unused (free) disk space because ZFS is what we generally call a copy-on-write system. Even if you're rewriting bits of an existing file, ZFS writes the new data to free space, not over the existing file contents (and it does this whether or not you're doing a synchronous write).

(ZFS does have to update some metadata in place, but it's a small amount of metadata and it's carefully ordered to make transaction group commits atomic. When doing these direct writes, ZFS also flushes your data to disk before it writes the ZIL that points to your data.)
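To make the two paths concrete, here's a purely conceptual sketch; none of this is OpenZFS code, and the classes and the size threshold are invented for illustration. The real decision involves things like logbias, zfs_immediate_write_sz, and whether you have a separate log device.

	class ToyPool:
	    def __init__(self):
	        self.blocks = []                     # stands in for the pool's free space
	    def allocate_and_write(self, data):      # COW: always writes into free space
	        self.blocks.append(data)
	        return len(self.blocks) - 1          # a stand-in for a block pointer/DVA
	    def flush_cache(self):
	        pass                                 # stand-in for a disk cache flush

	class ToyZIL:
	    def __init__(self):
	        self.records = []
	    def append(self, **record):
	        self.records.append(record)
	    def flush(self):
	        pass                                 # the ZIL log block reaches stable storage

	def zil_commit_data_write(data, pool, zil, indirect_threshold=32 * 1024):
	    if len(data) >= indirect_threshold:
	        # 'Indirect' path: write the data straight to its final (free space)
	        # location in the pool, flush it, and record only a pointer in the ZIL.
	        bp = pool.allocate_and_write(data)
	        pool.flush_cache()                   # data is on disk before its ZIL entry
	        zil.append(kind="WR_INDIRECT", pointer=bp)
	    else:
	        # 'Copied' path: embed the data in the ZIL record itself; it will be
	        # written again later as part of a regular transaction group.
	        zil.append(kind="WR_COPIED", payload=data)
	    zil.flush()                              # only now is the write reported durable

	pool, zil = ToyPool(), ToyZIL()
	zil_commit_data_write(b"x" * (128 * 1024), pool, zil)    # takes the indirect path
	zil_commit_data_write(b"small update", pool, zil)        # gets copied into the ZIL
	print([r["kind"] for r in zil.records])                  # ['WR_INDIRECT', 'WR_COPIED']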

Obviously, ZFS makes no assumptions about the contents of free disk space. This means that if your system crashes after ZFS has written your synchronous data into its final destination in what was free space until ZFS used it just now, but before it writes out a ZIL entry for it (and tells your editor or database that the data is safely on disk), no harm is done. No live data has been overwritten, and the change to what's in free space is unimportant (well, to ZFS, you may care a lot about the contents of the file that you were just a little bit late to save as power died).

Similarly, if your system crashes after the ZIL is written out but before the regular transaction group commits, the space your new data is written to is still marked as free at the regular ZFS level but the ZIL knows better. When the ZIL is replayed to apply all of the changes it records, your new data will be correctly connected to the overall ZFS pool (meta)data structures, the space will be marked as used, and so on.

(I've mentioned this area in the past when I wrote about the ZIL's optimizations for data writes, but at the time I explained its safety more concisely and somewhat in passing. And the ZFS settings and specific behavior I mentioned in that entry may now be out of date, since it's from almost a decade ago.)

ZFSZILSafeDirectWrites written at 22:27:38; Add Comment
