Why I'm interested in converting my ext3 filesystems to ext4
My home machine has a sufficiently old set of filesystems that many of my actively used filesystems are still ext3, not ext4, including both my home directory and where I keep code. Normally this isn't something that I particularly think about or worry about; it's not like ext4 is a particularly radical advance over ext3 (certainly not the same sort of jump as ext2 to ext3, where you got fast crash recovery). As a sysadmin I'm generally cautious with filesystem choices anyways (at least when I'm not being radical); I used ext2 over ext3 for years after the latter came out, for example, on the principle that I'd let other people find the problems.
It turns out that there is one important thing that ext4 has and ext3 does not: ext4 has sub-second file timestamps, while ext3 only does timestamps to the nearest second. Modern machines are fast enough that nearest second timestamps are increasingly not really good enough when building software or otherwise doing things that care about relative file timestamps and 'is X more recent than Y'. Oh, sure, it works most of the time, but every so often things go wrong or you find assumptions buried in other people's software.
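You can see the difference in timestamp granularity directly. Here's a minimal probe (assuming GNU coreutils): set a file's mtime to a value with a full nanosecond fraction and see how much of the fraction the filesystem keeps.

```shell
# Create a scratch file and give it a timestamp with nanoseconds.
f="$(mktemp)"
touch -d '2020-01-02 03:04:05.678901234' "$f"
# Print the mtime back, including the fractional seconds.
stat -c '%y' "$f"
# On ext4 and most other modern Linux filesystems, the .678901234
# fraction survives; on ext3 it would come back as .000000000.
```

Two files created in the same second on ext3 compare as 'equally old' to anything using mtime, which is exactly the case that trips up build systems.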
Most people don't notice these things because most people are now using filesystems that support sub-second file timestamps (which is almost all modern Linux filesystems). What this tells me is that I'm increasingly operating in an unusual and effectively unsupported environment by continuing to use ext3. As time goes by, more and more software is likely to assume sub-second file timestamps basically by default (because the authors have never run it on a system without them) and not work quite right in various ways. I can fight a slow battle against what is effectively a new standard of sub-second file timestamps, or I can give in and convert my ext3 filesystems to ext4. It's not like ext4 is exactly a new filesystem these days, after all (Wikipedia dates it to 2008).
The mechanics of this conversion raise a few issues, but that's something for another entry.
A bit more on the ZFS delete queue and snapshots
In my entry on ZFS delete queues, I mentioned that a filesystem's delete queue is captured in snapshots and so the space used by pending deletes is held by snapshots. A commentator then asked:
So in case someone uses zfs send/receive for backup he accidentially stores items in the delete queue?
This is important enough to say explicitly: YES. Absolutely.
Since it's part of a snapshot, the delete queue and all of the space
it holds will be transferred if you use
zfs send to move a
filesystem snapshot elsewhere for whatever reason. Full backups,
incremental backups, migrating a filesystem, they all copy all of
the space held by the delete queue (and then keep it allocated on
the received side).
This has two important consequences. The first is that if you
transfer a filesystem with a heavy space loss due to things being
held in the delete queue for whatever reason,
you can get a very head-scratching result. If you don't actually
mount the received dataset, you'll wind up with a dataset that claims
to have all of its space consumed by the dataset itself, not by
snapshots, but if you 'zfs destroy' the transfer snapshot the dataset
promptly shrinks. Having gone through this experience myself, I can
say it's a very puzzling thing to watch.
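One way to see where ZFS thinks the space sits is its space accounting breakdown (the dataset name here is made up for illustration):

```shell
# 'zfs list -o space' breaks the 'used' figure out into its
# components: space held by the dataset itself (USEDDS), by its
# snapshots (USEDSNAP), by children, and by refreservation.
zfs list -o space tank/backup/fs
```

In the head-scratching case above, the delete queue's space shows up under USEDDS rather than USEDSNAP until the transfer snapshot is destroyed.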
The second important consequence is that apparently the moment you
mount the received dataset, the current live version will immediately
diverge from the snapshot (because ZFS wakes up, says 'ah, a delete
queue with no live references', and applies all of those pending
deletes). This is a problem if you're doing repeated incremental
receives, because the next incremental receive will tell you
'filesystem has diverged from snapshot, you'll have to tell me to
force a rollback'. On the other hand, if ZFS space accounting is
working right, this divergence should transfer a bunch of the space
the filesystem is consuming from the dataset itself into the
snapshot's space usage.
Still, this must be another head-scratching moment, as just mounting
a filesystem suddenly causes a (potentially big) swing in space
usage and a divergence from the snapshot.
(I have not verified this mounting behavior myself, but in retrospect
it may be the cause of some unexpected divergences we've experienced
while migrating filesystems. Our approach was always just to use
'zfs recv -F ...', which is perfectly viable if you're really
sure that you're not blowing your own foot off.)
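The full cycle that runs into this divergence can be sketched roughly as follows (the pool, dataset, and host names are all made up):

```shell
# Initial full transfer of a snapshot to a backup machine.
zfs snapshot tank/fs@backup-1
zfs send tank/fs@backup-1 | ssh backuphost zfs recv backup/fs

# If backup/fs is then mounted on the backup machine, the pending
# deletes in the transferred delete queue get applied and the live
# dataset diverges from @backup-1.

# A later incremental receive then needs a forced rollback to the
# previous snapshot before it will apply.
zfs snapshot tank/fs@backup-2
zfs send -i tank/fs@backup-1 tank/fs@backup-2 | \
    ssh backuphost zfs recv -F backup/fs
```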