The drawback of setting an explicit mount point for ZFS filesystems
ZFS has three ways of getting filesystems mounted and deciding where
they go in the filesystem hierarchy. As covered in the zfs manpage,
you have a choice of automatically putting the filesystem below the
pool (so that tank/example is mounted as /tank/example), setting
an explicit mount point with mountpoint=/some/where, or marking the
filesystem as 'legacy' so that you mount it yourself through whatever
means you want (usually /etc/vfstab, the legacy approach to filesystem
mounts). With either of the first two options, ZFS will automatically
mount and unmount filesystems as you import and export pools or do
various other things (and will also automatically share them over NFS if
set to do so); with the third, you're on your own to manage things.
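As a quick sketch of what each of these looks like in command form (using a made-up tank/example filesystem and the /some/where mount point from above):

    # default: inherits from the pool, winds up mounted as /tank/example
    zfs create tank/example

    # an explicit mount point
    zfs set mountpoint=/some/where tank/example

    # legacy: ZFS leaves mounting it entirely up to you
    zfs set mountpoint=legacy tank/example
    mount -F zfs tank/example /some/where

(On Linux that last mount would be 'mount -t zfs' instead of 'mount -F zfs'.)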
The first approach is ZFS's default scheme and what many people
follow. However, for reasons that are in large part historical, we
haven't used it; instead we've explicitly specified our mount points
with mountpoint=/some/where on our fileservers.
When I set up ZFS on Linux on my office
workstation I also set the mount points explicitly, because I was
migrating existing filesystems into ZFS and I didn't feel like
trying to change their mount points (or add another layer of bind
mounts).
For both our fileservers and my workstation, this has turned out to
sometimes be awkward. The largest problem comes if you're in the
process of moving a filesystem from one pool to another on the same
server using zfs send and zfs recv. If mountpoint were unset,
both versions of the filesystem could coexist, with one mounted as
/oldpool/fsys and the other as /newpool/fsys. But with mountpoint
set, they both want to be mounted on the same spot and only one can
win. This means we have to be careful to use 'zfs recv -u' and even
then we have to worry a bit about reboots.
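A rough sketch of the move involved, using the oldpool/fsys and newpool/fsys names from above (the snapshot name is made up, and I'm assuming the properties travel with the stream via 'zfs send -p' or get set by hand on the new copy):

    zfs snapshot oldpool/fsys@move
    zfs send -p oldpool/fsys@move | zfs recv -u newpool/fsys

The -u keeps the new newpool/fsys from being mounted right away, but its mountpoint property still points at the same place as oldpool/fsys's, so anything that remounts everything (a reboot, a 'zfs mount -a') can still produce a collision.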
(You can set 'canmount=off' or clear the 'mountpoint' property
on the new-pool version of the filesystem for the time when the
filesystem is only part-moved, but then you have a divergence between
your received snapshot and the current state of the filesystem and
you'll have to force further incremental receives with 'zfs recv
-F'. This is less than ideal, although such a divergence can happen
anyways for other reasons.)
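A sketch of that workaround, again with made-up snapshot names:

    # keep the new copy from being mounted while the move is in progress
    zfs set canmount=off newpool/fsys

    # further incremental receives now diverge from the received snapshot
    # and so have to be forced
    zfs snapshot oldpool/fsys@move2
    zfs send -i @move oldpool/fsys@move2 | zfs recv -F newpool/fsys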
On the other hand, there are definite advantages to not having the mount point change and to having mount points be independent of the pool the filesystem is in. There's no particular reason that either users or your backup system need to care which pool a particular filesystem is in (such as whether it's in a HD-based pool or a SSD-based one, or a mirrored pool instead of a slower but more space-efficient RAIDZ one); in this world, the filesystem name is basically an abstract identifier, instead of the 'physical location' that normal ZFS provides.
(ZFS does not quite do 'physical location' as such, but the pool plus the position within the pool's filesystem hierarchy may determine a lot about stuff like what storage the data is on and what quotas are enforced. I call this the physical location for lack of a better phrase, because users usually don't care about these details or at least how they're implemented.)
On the third hand, arguably the right way to provide an 'abstract identifier' version of filesystems (if you need it) is to build another layer on top of ZFS. On Solaris, you'd probably do this through the automounter with some tool to automatically generate the mappings between logical filesystem identifiers and their current physical locations.
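(To illustrate what I mean, here's a sketch with entirely made-up names: a Solaris direct automounter map could hide the pool behind a stable path, so that moving the filesystem to another pool only means regenerating the map entry.

    # hypothetical /etc/auto_direct entry: stable path -> current location
    /vol/fsys    -rw    fileserver:/newpool/fsys

Users and backups would keep using /vol/fsys no matter which pool the filesystem currently lives in.)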
PS: some versions of 'zfs receive' allow you to set properties
on the received filesystem; unfortunately, neither OmniOS nor ZFS
on Linux currently support that. I also suspect that doing this
creates the same divergence between received snapshot and received
filesystem that setting the properties by hand does, and you're
back to forcing incremental receives with 'zfs recv -F' (and
re-setting the properties and so on).
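(For the record, on ZFS versions that do support receive-time property overrides, such as current OpenZFS with its 'zfs receive -o property=value' option, I believe the whole move would look roughly like:

    zfs send -p oldpool/fsys@move | zfs recv -u -o canmount=off newpool/fsys

I haven't been able to test this on our systems, so take it as a sketch rather than a recipe.)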
(It's sort of a pity that canmount is not inherited, because
otherwise you could receive filesystems into a special 'newpool/nomount'
hierarchy that blocked mounts and then activate them later by using
'zfs rename' to move them out to their final place. But alas,
no.)