Wandering Thoughts archives

2008-11-05

How many root passwords should you have?

There's a simple answer to the question of how many root passwords you should have; clearly, you should have a separate root password for each system. This answer is, shall we say, naive in most situations.

We can see why by asking the traditional security question: what are the actual risks of using the same root password on different systems? The answer is that an attacker who gets your root password from one system can immediately use it to compromise the others. So the first situation where it is mostly or entirely pointless to have separate root passwords is where an attacker could compromise the other machine even without the root password.

The next situation is where blocking the attacker from getting root on other machines isn't actually protecting anything meaningful, for example if you use ordinary NFS and the attacker gets root on a machine with enough NFS mount permissions. The attacker hardly needs root on any other machine, because they already have full access to every user file visible from their machine, which in many cases means all of them.

(Sure, NFS doesn't give them access as root, but this is hardly an obstacle; they can use root powers to become the user's UID and then go to town.)
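To make this concrete, here is a minimal sketch of the attack from a compromised NFS client; the user 'jdoe' and the mount point are hypothetical:

$ id
uid=0(root) gid=0(root)
$ su - jdoe        # root needs no password to become any local user
$ cat /nfs/homes/jdoe/.ssh/id_rsa   # the NFS server sees jdoe's UID, not root

Root squashing only maps UID 0 to 'nobody'; requests from any other UID go through as that user, so root on the client can impersonate anyone with files on the mount.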

I could go on, but there's a more general principle here: you don't want to think about machines, you want to think about security domains. There is very little point in using different root passwords on machines in the same security domain, and even if you have multiple security domains you may still want to use the same root password across them, because having lots of passwords carries its own risks (for instance, they get written down, forgotten, or fumbled in emergencies).

(And you want to think realistically about what is and isn't in each of your security domains. You may conclude that things are intertwined enough that you only really have one security domain, although you could technically argue that you have several.)

sysadmin/HowManyRootPasswords written at 23:27:38

An issue with quotas on ZFS pools

For peculiar local reasons, we have some ZFS pools that have overall pool quotas (on Solaris 10 U5, so these are real full quotas). We just had the first such pool fill up, and it turns out that when this happens, ZFS has a somewhat undesirable bit of behavior:

$ rm tankdata
rm: tankdata not removed: Disc quota exceeded

(You can't truncate anything either. Not even root can remove or truncate files.)

This does not happen if the pool has no quota and fills up, and it also does not happen if the quota is on anything but the pool itself. For example, you can put all of your filesystems under a 'quota' pseudo-filesystem and put what would otherwise be the pool quota on that 'quota' filesystem; then everything works (users run out of space, but they can fix it themselves).
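
As an illustration, the layout looks something like this (a sketch; the pool name 'tank' and the 400G figure are made up):

$ zfs create tank/quota
$ zfs set quota=400G tank/quota
$ zfs create tank/quota/homes
$ zfs create tank/quota/work
# tank itself has no quota, so there is always scratch space for removals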

(Note that there are no snapshots involved here; neither the pool that this happened to nor the test pool that I used to explore what was going on had any snapshots at all.)

I assume that what is going on here is that removing a file requires writing new internal metadata (ZFS is copy-on-write, so even deletions temporarily consume space), that ZFS counts this very temporary extra space against the pool quota, and that since there is no space left under the quota, it disallows the action. This is consistent with ZFS's snapshot behavior, although it's even less useful here.

I suspect (and hope) that this behavior will go away with Solaris 10 update 6's new 'refquota' ZFS feature, which makes this yet another reason to upgrade to Solaris 10 U6 as soon as we can (now that it's finally out).
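
If this pans out, the fix would presumably be as simple as switching the pool's quota over to a refquota (a sketch; I haven't been able to test this yet, and the pool name and size are made up):

$ zfs set quota=none tank
$ zfs set refquota=400G tank
# refquota limits the space tank itself references; it does not
# count snapshots or descendant filesystems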

(By the way, the way to fix a pool with this problem is of course to temporarily increase or remove the ZFS pool quota.)
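
In concrete terms, the recovery looks something like this (with a hypothetical pool name and quota size):

$ zfs set quota=none tank      # or just raise it: zfs set quota=450G tank
$ rm /tank/data/some-big-file
$ zfs set quota=400G tank      # put the original quota back afterwards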

solaris/ZFSPoolQuotaIssue written at 02:14:40

