What I know about boot time ZFS pool activation (part I)
In response to my entry on the boot time ZFS and iSCSI sequencing bug, a commentator asked if SMF dependencies could be used to work around the issue. As it happens, this is not a simple question to answer because how ZFS pools are activated at boot time is at best an obscure thing (at least as far as I can tell). Here's what I think is going on, which has to come with a lot of disclaimers.
ZFS pool information for pools that will be imported during boot is stored in /etc/zfs/zpool.cache; this is a serialized nvlist of pool information. zpool.cache is read in by the kernel very early during boot; as far as I can disentangle the OpenSolaris code, it's loaded when the ZFS module is first loaded (or as the root filesystem is being brought up, if the root filesystem is a ZFS one). However, this doesn't seem to actually activate the ZFS pools, just set up the (potential) pool configuration in the kernel.
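(If you want to see what's in zpool.cache for yourself, zdb can dump it. A sketch, with the caution that zdb's exact options vary between Solaris and ZFS versions:)

```shell
# Dump the cached nvlist configuration for all pools known to
# /etc/zfs/zpool.cache (the default cache file):
zdb -C

# Some versions let you point zdb at an alternate cache file
# explicitly with -U:
#   zdb -C -U /etc/zfs/zpool.cache
```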
(ZFS pool activation is, or at least seems to be, when the kernel tries to find all of the pool's devices and either finds enough of them to start the pool up or marks it as failed. Thus ZFS pool activation is the point at which all devices need to have been brought up.)
It's not clear to me when and how ZFS pools are actually activated. At
a low level pools seem to be activated on demand when they are
looked at. However there is no high level SMF service that says
'activate ZFS pools'; instead, they seem to get activated as a side
effect of other SMF services. I suspect that the primary path to
ZFS pool activation is the 'zfs mount -a' that is done in the SMF svc:/system/filesystem/local service (this is what prints the 'Reading ZFS config:' message that you see during Solaris boot).
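(If you're pondering the SMF-dependency workaround, the standard SMF commands will show you where filesystem/local sits in the dependency graph; a sketch using the stock service name:)

```shell
# Services that svc:/system/filesystem/local depends on
# (these must come up before it runs 'zfs mount -a'):
svcs -d svc:/system/filesystem/local

# Services that in turn depend on filesystem/local:
svcs -D svc:/system/filesystem/local

# The raw dependency property groups, if you want the details:
svccfg -s svc:/system/filesystem/local listprop | grep dependency
```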
There is also some special magic for activating ZFS swap volumes
(exactly where the magic is depends on which Solaris 10 update you're
on), which may activate pools that have swap volumes.
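(For reference, ZFS swap volumes are just zvols, so they show up as block devices under /dev/zvol/dsk. The names below are merely the conventional ones, not anything from my systems:)

```shell
# Illustrative: a ZFS swap zvol appears as
#   /dev/zvol/dsk/<pool>/<volume>
# and the matching /etc/vfstab line conventionally looks like:
#   /dev/zvol/dsk/rpool/swap  -  -  swap  -  no  -

# List the currently active swap devices to see what got set up:
swap -l
```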
How iSCSI comes into this picture is sufficiently complicated that it needs another entry.