2007-09-25
How to clear Solaris Volume Manager metadb replicas on Solaris 10 x86
It is possible to get a DiskSuite metadb replica into a sufficiently damaged state that it panics the system in early boot, which is especially irritating when you aren't actually using DiskSuite for anything. When this happens, you need to clear the damaged metadb replicas.
(If you get past boot by turning off svc:/system/metainit, the system instead panics when you run metadb or metainit, which is not too helpful for actually dealing with the problem.)
If you don't care about leaving the actual bits intact for potential analysis, I believe you can just dd /dev/zero over the appropriate slice. (Do not do this, however, if you have been tempted into using that conveniently spare slice 8 as a metadb replica.)
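As a concrete illustration, the brute force version might look like this; the disk and slice names are hypothetical, so substitute whatever slice actually holds the bad replica:

    # WARNING: this irreversibly erases the slice's contents.
    # c0t1d0s7 is a stand-in for the slice with the damaged replica.
    dd if=/dev/zero of=/dev/rdsk/c0t1d0s7 bs=512k

(dd will quit on its own, possibly with a complaint, when it runs off the end of the slice; that's harmless here.)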
The less brutal way out is to boot into the rescue environment, edit your /kernel/drv/md.conf and /etc/lvm/mddb.cf to remove the slice (you must edit both), rebuild the boot archive with bootadm update-archive -R /a, and reboot.
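Put together, a sketch of the rescue environment steps, assuming the damaged replica lives on the hypothetical slice c0t1d0s7 and that the rescue environment has mounted your root filesystem on /a:

    # in the install media's rescue shell, with root mounted on /a
    vi /a/kernel/drv/md.conf    # remove the mddb bootlist entry naming c0t1d0s7
    vi /a/etc/lvm/mddb.cf       # remove the line for c0t1d0s7
    bootadm update-archive -R /a
    reboot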
(If you are masochistic, you can go through the dance necessary to turn off the metainit service, bring the system up in single user, make these edits, and then turn metainit back on. But the rescue environment way is simpler.)
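For the record, my best guess at that dance is something like the following; I haven't tested it, and it assumes that booting to SMF's 'none' milestone gets you a shell before metainit can panic the system:

    # add '-m milestone=none' to the GRUB kernel line so no services start, then:
    svcadm disable svc:/system/metainit
    # reboot single user (add '-s' to the kernel line this time), then
    # edit /kernel/drv/md.conf and /etc/lvm/mddb.cf as described above
    bootadm update-archive
    svcadm enable svc:/system/metainit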
Disclaimer: recovering from dropping below metadb replica quorum is beyond the scope of this entry. Besides, I haven't had to do it yet.
2007-09-05
Features that I wish ZFS had
This is not counting features that Sun has already said they are going to put in someday, like the ability to remove vdevs from a ZFS pool. The motivation for many of these wishes is good long term storage management, something that ZFS is currently weak at.
- the ability to migrate a filesystem from storage pool to storage pool (on the same system) without user-visible downtime or lockups. In theory this ought to be doable, since ZFS is abstracting everything anyways.
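For contrast, the closest current approach that I know of is a zfs send/receive copy followed by a by-hand cutover, which involves exactly the user-visible downtime I want to avoid. With hypothetical pool and filesystem names:

    zfs snapshot oldpool/fs@migrate
    zfs send oldpool/fs@migrate | zfs recv newpool/fs
    # ... stop all users of the filesystem, send a final incremental
    # with 'zfs send -i', then rename and remount everything by hand ...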
- the ability to control and change what vdevs a filesystem will use (or occupy), or at least what sort of vdevs they will or won't use. This would make it easier to have filesystems with different levels of reliability needs in the same general storage pool, especially when those needs change.
(To a certain extent this isn't needed if there is transparent storage pool to storage pool migration of filesystems.)
- the ability to turn an existing directory into a sub-filesystem, or an existing sub-filesystem back into an ordinary directory, without losing the contents in either case.
An example may help illustrate why I want this. The natural grouping of people's home directories around here is by group; everyone in one group gets clumped into one top-level directory. However, every so often a professor will want to do something like NFS-export their home directory to their personal workstation.
The theoretical ZFS answer is to make everyone's home directory into a ZFS filesystem right up front. However, this leads to a profusion of NFS mounts; it would be nicer if we could defer turning someone's home directory into a filesystem until we really needed to.
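Right now the conversion has to be done by hand with a copy and a swap, something like this (all dataset names are hypothetical, it assumes the per-group directory is itself a ZFS filesystem, and you want to do the rm only after verifying the copy):

    # turn the existing directory /tank/home/grp/prof into a filesystem
    zfs create tank/home/scratch-prof
    (cd /tank/home/grp/prof && tar cf - .) | (cd /tank/home/scratch-prof && tar xpf -)
    rm -rf /tank/home/grp/prof
    zfs rename tank/home/scratch-prof tank/home/grp/prof

This is both disruptive and risky, which is exactly why I want ZFS to do it natively.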
- an option so that if you NFS mount a given ZFS filesystem, you automatically get all of the sub-filesystems that you have NFS access permission for, without having to mount them explicitly.
The problem with using ZFS the way that Sun wants you to in an NFS world is that you wind up with thousands of NFS mounts in even a modest environment. But this is hard to manage, especially when you create and delete filesystem-worthy entities all the time. Many of these entities will have the same NFS export permissions; they are being created for other things, like quota control or separate snapshots and so on. It would be nice to be able to treat them as a unit, so that you could mount the top level filesystem and not have to care about the details of all of the sub-filesystems.
(For example, we should really have one ZFS filesystem per user home directory, and another ZFS filesystem for their public_html subdirectory, so that we can export it to the web server but deny the web server general home directory permissions, and things proliferate from there.)
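A sketch of how this adds up with the current sharenfs property (hostnames and dataset names hypothetical):

    # every filesystem needs its own export settings ...
    zfs create tank/home/grp/prof
    zfs create tank/home/grp/prof/public_html
    zfs set sharenfs='rw=profws' tank/home/grp/prof
    zfs set sharenfs='ro=webserver' tank/home/grp/prof/public_html
    # ... and every NFS client has to mount each one separately:
    mount fileserver:/tank/home/grp/prof /home/grp/prof

Multiply that by every user and every special subdirectory and you can see where the thousands of mounts come from.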
Disclaimer: to the best of my knowledge, ZFS doesn't have and isn't planned to have these features; however, I would be happy to be wrong.