An issue with quotas on ZFS pools
For peculiar local reasons, we have some ZFS pools that have overall pool quotas (on Solaris 10 U5, so these are real full quotas). We just had the first such pool fill up and it turns out that when this happens, ZFS has a somewhat undesirable bit of behavior:
$ rm tankdata
rm: tankdata not removed: Disc quota exceeded
(You can't truncate anything either. Not even root can remove or truncate files.)
This does not happen if the pool has no quota and fills up, and it
also does not happen if the quota is on anything but the pool itself.
For example, you can put all of your filesystems under a 'quota'
pseudo-filesystem and put what would otherwise be a pool quota on this
'quota' filesystem, and everything works (users run out of space but
can fix it themselves).
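A minimal sketch of that layout; the pool name 'tank', the filesystem names, and the 500G figure are all hypothetical:

```shell
# Create a 'quota' pseudo-filesystem directly under the pool and put
# the overall space limit on it instead of on the pool itself.
zfs create tank/quota
zfs set quota=500G tank/quota

# All real filesystems live under the 'quota' filesystem, so they
# share its quota while the pool itself stays unrestricted.
zfs create tank/quota/home
zfs create tank/quota/data
```

Because the pool's own dataset has no quota, ZFS still has room to account for the transient metadata of a file removal even when the 'quota' filesystem is full.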
(Note that there are no snapshots involved here; neither the pool that this happened to nor the test pool that I used to explore what was going on had any snapshots at all.)
I assume that what is going on here is that ZFS counts the temporary extra space needed for the internal metadata of a file removal against the pool quota, and since there is no space left under the pool quota, it disallows the action. This is consistent with ZFS's snapshot behavior, although it is even less useful here.
I suspect (and hope) that this behavior will go away with Solaris 10 update 6's new 'refquota' ZFS feature, which makes this yet another reason to upgrade to Solaris 10 U6 as soon as we can (now that it's finally out).
(By the way, the way to fix a pool with this problem is of course to temporarily increase or remove the ZFS pool quota.)
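The recovery steps can be sketched as follows; the pool name 'tank' and the quota values are hypothetical:

```shell
# Temporarily raise (or clear) the pool quota so removals and
# truncations can proceed again.
zfs set quota=none tank        # or, e.g., 'zfs set quota=510G tank'

# ... remove or truncate files to free up space ...

# Restore the original quota once there is headroom again.
zfs set quota=500G tank
```
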