== The drawback of setting an explicit mount point for ZFS filesystems

ZFS has three ways of getting filesystems mounted and deciding where they go in the filesystem hierarchy. As covered in the _zfs_ manpage, you have a choice of automatically putting the filesystem below the pool (so that _tank/example_ is mounted as _/tank/example_), setting an explicit mount point with _mountpoint=/some/where_, or marking the filesystem as 'legacy' so that you mount it yourself through whatever means you want (usually _/etc/vfstab_, the legacy approach to filesystem mounts). With either of the first two options, ZFS will automatically mount and unmount filesystems as you import and export pools or do various other things (and will also automatically share them over NFS if set to do so); with the third, you're on your own to manage things.

The first approach is ZFS's default scheme and what many people follow. However, for what are in large part historical reasons we haven't used it; instead we've explicitly specified our mount points with _mountpoint=/some/where_ on [[our fileservers ZFSFileserverSetupII]]. When I set up [[ZFS on Linux http://zfsonlinux.org/]] on my office workstation I also set the mount points explicitly, because I was migrating existing filesystems into ZFS and I didn't feel like trying to change their mount points (or add another layer of bind mounts).

For both our fileservers and my workstation, this has sometimes turned out to be awkward. The largest problem comes if you're in the process of moving a filesystem from one pool to another on the same server using _zfs send_ and _zfs recv_. If _mountpoint_ was unset, both versions of the filesystem could coexist, with one as _/oldpool/fsys_ and the other as _/newpool/fsys_. But with _mountpoint_ set, they both want to be mounted on the same spot and only one can win. This means we have to be careful to use '_zfs recv -u_' and even then we have to worry a bit about reboots.

(You can set '_canmount=off_' or clear the '_mountpoint_' property on the new-pool version of the filesystem for the time when the filesystem is only part-moved, but then you have a divergence between your received snapshot and the current state of the filesystem and you'll have to force further incremental receives with '_zfs recv -F_'. This is less than ideal, although such a divergence can happen anyway for [[other reasons ZFSDeleteQueue]]. The whole dance is sketched below.)

On the other hand, there are definite advantages to not having the mount point change and to having mount points be independent of the pool the filesystem is in. There's no particular reason that either users or your backup system need to care which pool a particular filesystem is in (such as whether it's in a HD-based pool or a SSD-based one, or a mirrored pool instead of a slower but more space-efficient RAIDZ one); in this world, the filesystem name is basically an abstract identifier, instead of the 'physical location' that normal ZFS provides.

(ZFS does not quite do 'physical location' as such, but the pool plus the position within the pool's filesystem hierarchy may determine a lot about things like what storage the data is on and what quotas are enforced. I call this the physical location for lack of a better phrase, because users usually don't care about these details, or at least about how they're implemented.)

On the third hand, arguably the right way to provide an 'abstract identifier' version of filesystems (if you need it) is to build another layer on top of ZFS.
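To make the part-way-through-a-move dance concrete, here is a rough sketch of it. The pool, filesystem, and snapshot names are made up for illustration, and this isn't a copy of anything we actually run:

  # initial full copy, keeping properties; -u leaves the new copy unmounted
  zfs snapshot oldpool/fsys@move1
  zfs send -p oldpool/fsys@move1 | zfs recv -u newpool/fsys
  # keep the new copy from fighting over the mount point
  # (this makes it diverge from its received snapshot)
  zfs set canmount=off newpool/fsys

  # later: a catch-up incremental; -F is needed because of that divergence
  zfs snapshot oldpool/fsys@move2
  zfs send -i @move1 oldpool/fsys@move2 | zfs recv -u -F newpool/fsys

  # final cutover: retire the old copy, then let the new one mount
  zfs destroy -r oldpool/fsys
  zfs inherit canmount newpool/fsys
  zfs mount newpool/fsys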
On Solaris, you'd probably build that extra layer through the automounter, with some tool to automatically generate the mappings between logical filesystem identifiers and their current physical locations.

PS: some versions of '_zfs receive_' allow you to set properties on the received filesystem; unfortunately, neither OmniOS nor ZFS on Linux currently supports that. I also suspect that doing this creates the same divergence between the received snapshot and the received filesystem that setting the properties by hand does, and then you're back to forcing incremental receives with '_zfs recv -F_' (and re-setting the properties and so on).

(It's sort of a pity that _canmount_ is not inherited, because otherwise you could receive filesystems into a special '_newpool/nomount_' hierarchy that blocked mounts and then activate them later by using '_zfs rename_' to move them out to their final place. But alas, no.)
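For completeness, on a version of ZFS whose _zfs receive_ does let you set properties at receive time, I believe it would look something like this (again with made-up names, and untested here since our versions don't support it):

  # hypothetical: force canmount off on the new copy as part of the receive
  zfs send -p oldpool/fsys@move1 | zfs recv -u -o canmount=off newpool/fsys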