== A bit more on the ZFS delete queue and snapshots

In [[my entry on ZFS delete queues ZFSDeleteQueue]], I mentioned that a filesystem's delete queue is captured in snapshots and so the space used by pending deletes is held by snapshots. A commentator then asked:

> So in case someone uses zfs send/receive for backup he accidentially
> stores items in the delete queue?

This is important enough to say explicitly: ~~YES~~. Absolutely. Since it's part of a snapshot, the delete queue and all of the space it holds will be transferred if you use _zfs send_ to move a filesystem snapshot elsewhere for whatever reason. Full backups, incremental backups, migrating a filesystem, they all copy all of the space held by the delete queue (and then keep it allocated on the received side).

This has two important consequences. The first is that if you transfer a filesystem with a heavy space loss due to [[things being held in the delete queue for whatever reason ZFSDeleteQueueNLMLeak]], you can get a very head-scratching result. If you don't actually mount the received dataset, you'll wind up with a dataset that claims to have all of its space consumed by the dataset itself, not by snapshots, but if you '_zfs destroy_' the transfer snapshot the dataset promptly shrinks. Having gone through this experience myself, this is a very WAT moment.

The second important consequence is that apparently the moment you mount the received dataset, the current live version will immediately diverge from the snapshot (because ZFS wakes up, says 'ah, a delete queue with no live references', and applies all of those pending deletes). This is a problem if you're doing repeated incremental receives, because the next incremental receive will tell you 'filesystem has diverged from snapshot, you'll have to tell me to force a rollback'. On the other hand, if ZFS space accounting is working right this divergence should transfer a bunch of the space the filesystem is consuming into the _usedbysnapshots_ category.
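You can see the first consequence in ZFS's space accounting properties. Here is a hypothetical sketch (the pool, dataset, and snapshot names are all made up for illustration):

```shell
# After 'zfs send | zfs receive' of a filesystem with a large delete
# queue, the pending-delete space is charged to the dataset itself,
# not to its snapshots:
zfs get used,usedbydataset,usedbysnapshots tank/received-fs

# Destroying the transfer snapshot releases the delete queue's space,
# and the dataset promptly shrinks:
zfs destroy tank/received-fs@transfer
zfs get used,usedbydataset tank/received-fs
```

(The surprise is that you expect destroying a snapshot to reduce _usedbysnapshots_, not _usedbydataset_.)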
Still, this must be another head-scratching moment, as just mounting a filesystem suddenly causes a (potentially big) swing in space usage and a divergence from the snapshot. (I have not verified this mounting behavior myself, but in retrospect it may be the cause of some unexpected divergences we've experienced while migrating filesystems. Our approach was always just to use '_zfs recv -F ..._', which is perfectly viable if you're really sure that you're not blowing your own foot off.)
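For concreteness, a sketch of the incremental receive situation (again with made-up pool and snapshot names):

```shell
# The received dataset was mounted, so ZFS applied its pending deletes
# and the live filesystem diverged from the received snapshot. A plain
# incremental receive will now refuse to proceed:
zfs send -i tank/fs@snap1 tank/fs@snap2 | zfs receive backup/fs
# (fails: destination has diverged from the most recent snapshot)

# -F forces a rollback of backup/fs to @snap1 first, discarding the
# divergence, and then applies the incremental stream:
zfs send -i tank/fs@snap1 tank/fs@snap2 | zfs receive -F backup/fs
```

The foot-gun with _-F_ is that the rollback discards any changes made on the receive side, wanted or not, which is why you need to be sure nothing there matters before using it routinely.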