Wandering Thoughts archives

2020-03-23

How we set up our ZFS filesystem hierarchy in our ZFS pools

Our long-standing practice here, predating even the first generation of our ZFS fileservers, is that we have two main sorts of filesystems: home directories ('homedir' filesystems) and what we call 'work directory' (workdir) filesystems. Homedir filesystems are called /h/NNN (for some NNN) and workdir filesystems are called /w/NNN; the NNN is unique across all of the different sorts of filesystems. Users are encouraged to put as much stuff as possible in workdirs and can have as many of them as they want, which mattered a lot more in the days when we used Solaris DiskSuite and had fixed-size filesystems.

(This creates filesystems called things like /h/281 and /w/24.)

When we moved from DiskSuite to ZFS, we made the obvious decision to keep these user-visible filesystem names and the not entirely obvious decision that these filesystem names should work even on the fileservers themselves. This meant using the ZFS mountpoint property to set the mount point of all ZFS homedir and workdir filesystems, which works (and has worked fine all along). However, this raised another question, that of what the actual filesystem name inside the ZFS pool should look like (since it no longer has to reflect the mount point).
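As an illustration (this is a hypothetical command with placeholder pool and filesystem names, not our exact setup), explicitly setting the mount point on a ZFS filesystem looks like this:

zfs set mountpoint=/h/281 somepool/somefs

ZFS then mounts the filesystem at that path instead of at a path derived from its name in the pool.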

There are a number of plausible answers here. For example, because our 'NNN' numbers are unique, we could have made all filesystems be simply '<pool>/NNN'. However, for various reasons we decided that the filesystem's name inside the ZFS pool should reflect the full user-visible name, so /h/281 is '<pool>/h/281' instead of '<pool>/281' (among other things, we felt that this was easier to manage and work with). This created the next problem, which is that if you have a ZFS filesystem named <pool>/h/281, <pool>/h has to exist in some form. I suppose that we could have made these just be subdirectories in the root of the pool, but instead we decided to make them be empty and unmounted ZFS filesystems that are used only as containers:

zfs create -o mountpoint=none fs11-demo-01/h
zfs create -o mountpoint=none fs11-demo-01/w

We create these in every pool as part of our pool setup automation, and then we can make, for example, fs11-demo-01/h/281, which will be mounted everywhere as /h/281.
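For illustration, creating one of these child filesystems would look something like the following (a sketch of the idea, not our exact automation; the explicit mountpoint is needed because the child would otherwise inherit 'none' from its container filesystem):

zfs create -o mountpoint=/h/281 fs11-demo-01/h/281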

(Making these be real ZFS filesystems means that they can have properties that will be inherited by their children; this theoretically enables us to apply some ZFS properties only to a pool's homedir or workdir filesystems. Probably the only useful one here is quotas.)
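For example, if we wanted to cap the total space used by all of a pool's homedir filesystems, we could set a quota on the container filesystem, something like this (the value here is purely hypothetical):

zfs set quota=500G fs11-demo-01/h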

solaris/ZFSOurContainerFilesystems written at 23:47:32

Why we use 1U servers, and the two sides of them

Every so often I talk about '1U servers' and sort of assume that people know both what '1U' means here and what sort of server I mean by this. The latter is somewhat of a leap, since there are two sorts of server that 1U servers can be, and the former requires some hardware knowledge that may be getting less and less common in this age of the cloud.

In this context, the 'U' in 1U (or 2U, 3U, 4U, 5U, and so on) stands for a rack unit, a measure of server height in a standard server rack (one rack unit is 1.75 inches). Because racks have a standard width and a standard maximum depth, height is the only important variation in size for rack-mounted servers. A 1U server is thus the smallest practical standalone server that you can get.

(Some 1U servers are shorter than others, and sometimes these short servers cause problems with physical access. They don't really save you any space because you generally can't put things behind them.)

In practice, there are two sorts of 1U servers, each with a separate audience. The first sort of 1U server is for people who have a limited amount of rack space and so want to pack as much computing into it as they can; these are high powered servers, densely packed with CPUs, memory, and storage, and are correspondingly expensive. The second sort of 1U server is for people who have a limited amount of money and want to get as many physical servers for it as possible; these servers have relatively sparse features and are generally not powerful, but they are the most inexpensive decently made rack mount servers you can buy.

(I believe that the cheapest servers are 1U because that minimizes the amount of sheet metal and so on involved. The motherboard, RAM, and a few 3.5" HDs can easily fit in the 1U height, and apparently it's not a problem for the power supply either. The CPUs tend to be cooled using heatsinks with forced fan airflow over them, and they're often not very power hungry to start with. You generally get space for one or two PCIe cards mounted sideways on special risers, which is important if you want to add, say, 10G-T networking to your inexpensive 1U servers.)

We aren't rack space constrained, so our 1U servers are the inexpensive sort. We've had various generations of these servers, mostly from Dell; our 'current' generation are Dell R230s. That we buy 1U servers based on price, to be inexpensive, is part of why our servers aren't as resilient for remote operation as I'd now like.

(We have a few 1U servers that are more the 'dense and powerful' style than the 'inexpensive' style; they were generally bought for special purposes. I believe that some of them are from Supermicro.)

sysadmin/WhyWeUse1UServers written at 00:10:33

