== One reason why we have to do a major storage migration

We're currently in the process of migrating from our old fileservers to [[our new fileservers ../solaris/ZFSFileserverSetupII]]. We're doing this by manually copying data around instead of anything less user visible and sysadmin intensive. You might wonder why, since the old and new environments share the same architecture (ZFS, iSCSI backends, mirroring, and so on). One reason is that [[the ZFS 4K sector issue ../solaris/ZFS4KSectorDisks]] has forced our hand. But even if that issue wasn't there, it turns out that we probably would have had to do a big user visible migration anyway because of a collision of technology and politics (and ZFS limitations).

The simple version of the organizational politics is that ~~we can't just give people free space~~.

Most storage migrations will increase the size of the disks involved, and ours is no exception; we're going from a mixture of 750 GB and 1 TB drives to 2 TB drives. If you just swap drives out for bigger ones, pretty much any decent RAID system or filesystem can expand into the newly available space without any problems at all, and ZFS is no exception here. If you went from 1 TB to 2 TB disks in this straightforward way, all of your ZFS pools would (or could) immediately double in size.

(Even if you can still get your original-sized disks during an equipment turnover, you generally want to move up in size so that you can reduce the overhead per GB or TB. Plus, smaller drives may actually have a worse cost per GB than larger ones.)

Except that we can't give space away for free, so if we did this we would have had to persuade people to buy all of this extra space. There was very little evidence that we could do that, since people weren't exactly breaking down our doors to buy more space as it stood.

If you can't expand disks in place and give away (or sell) the extra space, what you want (and need) to do is reshape your existing storage blobs. If you were using N disks (or mirror pairs) for a storage blob before the disk change, you want to move to using N/2 disks afterwards; you get the same space, just on fewer disks. Unfortunately ZFS simply doesn't support this sort of reshaping of existing pools; once a pool has so many disks (or mirrors), you're stuck with that many. This left us with having to do it by hand, in other words a manual migration from old pools with N disks to new pools with N/2 disks. (There's a sketch of this arithmetic at the end of this entry.)

(If you've decided to go from smaller disks to larger disks, you've already implicitly decided to sacrifice overall IOPS per TB in the name of storage efficiency.)

=== Sidebar: Some hacky workarounds and their downsides

The first workaround would be to double the raw underlying space but then somehow limit people's ability to use it (ZFS can do this via quotas, for example, as sketched below). The problem is that this will generally leave you with a bit less than half of your space allocated but unused on a long term basis, unless people wake up and suddenly start buying space. It's also awkward if the wrong people wake up and want a bunch more space, since your initial big users (who may not need all that much more space) are holding down most of the new space.

The second workaround is to break your new disks up into multiple chunks, each roughly the size of one of your old disks. This has various potential drawbacks, but in our case it would simply have put too many individual chunks on one physical disk: our old 1 TB disks were already split into four (old) chunks, which was about as many ways as we wanted to split up a single disk.
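To make the first workaround a bit more concrete, here's a minimal sketch of roughly what it would look like if you capped people at the space they've actually bought by putting ZFS quotas on each pool's top-level dataset. The pool names and sizes here are made up, and in real life this would just be a 'zfs set' per pool rather than a script:

  import subprocess

  # Hypothetical pools and the amount of space their owners have actually
  # bought; everything above this sits allocated but unusable (and unsold).
  paid_for = {
      "fs1-demo-01": "1T",
      "fs1-demo-02": "2T",
  }

  for pool, size in paid_for.items():
      # A quota on the pool's top-level dataset caps that dataset and
      # everything under it, so the doubled raw space stays off limits.
      subprocess.run(["zfs", "set", "quota=" + size, pool], check=True)
      # Show what we ended up with.
      subprocess.run(["zfs", "list", "-o", "name,quota,avail", pool], check=True)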
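As for the reshaping arithmetic I mentioned sketching: here's a toy Python version with deliberately round, made-up numbers (a hypothetical pool of four mirrored pairs of 1 TB disks and a generic 100 IOPS per spindle), which is not our actual pool layout but shows the shape of the tradeoff:

  OLD_DISK_TB = 1.0
  NEW_DISK_TB = 2.0
  OLD_PAIRS = 4                 # hypothetical pool: 4 mirrored pairs of 1 TB disks

  old_tb = OLD_PAIRS * OLD_DISK_TB

  # Straight swap plus expansion: same number of pairs, bigger disks,
  # which doubles the pool and leaves us with space we can't sell.
  expanded_tb = OLD_PAIRS * NEW_DISK_TB

  # Reshaping: keep the usable space the same but halve the number of pairs.
  new_pairs = OLD_PAIRS // 2
  reshaped_tb = new_pairs * NEW_DISK_TB

  print("old pool:      %d pairs, %.0f TB usable" % (OLD_PAIRS, old_tb))
  print("after expand:  %d pairs, %.0f TB usable" % (OLD_PAIRS, expanded_tb))
  print("after reshape: %d pairs, %.0f TB usable" % (new_pairs, reshaped_tb))

  # The cost of reshaping: the same space now sits on half the spindles.
  SPINDLE_IOPS = 100            # a generic figure for 7200 RPM SATA drives
  old_iops_per_tb = (OLD_PAIRS * 2 * SPINDLE_IOPS) / old_tb
  new_iops_per_tb = (new_pairs * 2 * SPINDLE_IOPS) / reshaped_tb
  print("IOPS per TB:   %.0f before, %.0f after" % (old_iops_per_tb, new_iops_per_tb))

Whatever the real numbers are, putting the same amount of space on half the spindles roughly halves the IOPS per TB, which is the tradeoff I mentioned above.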