How and why we sell storage to people here
As a university department with a centralized fileserver environment plus a wide variety of professors and research groups, we have a space allocation problem. Namely, we need some way to answer the question of who gets how much space, especially in the face of uneven grant funding levels. Our historical and current answer is that we allocate space by selling it to people for a fixed one-time cost (for various reasons we can't give people free space). People can have as much space as they're willing to pay for; if they want so much space that we run out of currently available but not allocated space, we'll buy more hardware to meet the demand (and we'll be very happy about it, because we've historically had plenty of unsold space).
In the very old days that were mostly before my time here, our fileserver environment used Solaris DiskSuite on fixed-size partitions carved out from 'hardware RAID' FibreChannel controllers in a SAN setup. In this environment, one partition was one filesystem, and that was the unit we sold; if you wanted more storage space, my memory is that you had to put it in another filesystem whether you liked that or not, and obviously this meant that you had to effectively pre-allocate your space among your filesystems.
Our first generation ZFS fileserver environment followed this basic pattern but with some ZFS flexibility added on top. Our iSCSI backends exported standard-sized partitions as individual LUNs, which we called chunks, and some number of mirrored pairs of chunks were put together as a ZFS pool that belonged to one professor or group (which led to us needing many ZFS pools). We had to split up disks into multiple chunks partly because not doing so would have been far too wasteful; we started out with 750 GB Seagate disks and many professors or groups had bought less total space than that. We also wanted people to be able to buy more space without facing a very large bill, which meant that the chunk size had to be relatively modest (since we only sold whole chunks). We carried this basic chunk-based space allocation model forward into our second generation of ZFS fileservers, which was part of why we had to do a major storage migration for this shift.
Then, well, we changed our minds, where I actually mean that our director worked out how to do things better. Rather than forcing people to buy an entire chunk's worth of space at once, we've moved to simply selling them space in 1 GB units; professors can buy 10 GB, 100 GB, 300 GB, 1000 GB, or whatever they need or want. ZFS pools are still put together from standard-sized chunks of storage, but that's now an implementation detail that only we care about; when you buy some amount of space, we make sure your pool has enough chunks to cover that space. We use ZFS quotas (on the root of each pool) to limit how much space in the pool can actually be used, which was actually something we'd done from the very beginning (our ZFS pool chunk size was much larger than our standard partition size on the FC SAN, so some people got limited in the conversion).
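In concrete terms, the bookkeeping here is just ceiling division plus a quota cap. Here is a minimal Python sketch of that arithmetic; the specific chunk size and the helper names are my invention for illustration, and only the general scheme (pools built from whole fixed-size chunks, usage capped at what was bought) comes from how we actually do things:

```python
import math

# Hypothetical standard chunk size in GB; not our real number.
CHUNK_GB = 500

def chunks_needed(purchased_gb: int, chunk_gb: int = CHUNK_GB) -> int:
    """How many whole chunks a pool needs to cover a purchase."""
    return math.ceil(purchased_gb / chunk_gb)

def quota_setting(purchased_gb: int) -> str:
    """The quota property we'd set on the pool root, in
    'zfs set' property=value form."""
    return f"quota={purchased_gb}G"

# A 300 GB purchase fits in one 500 GB chunk but is capped at 300 GB;
# a 1200 GB purchase needs three chunks.
print(chunks_needed(300))     # 1
print(chunks_needed(1200))    # 3
print(quota_setting(300))     # quota=300G
```

The point of the cap is that a pool can have more raw chunk capacity than the owner has paid for; the quota on the pool root is what makes the 1 GB granularity real to users, while the chunk granularity stays invisible to them.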
This shift to selling in 1 GB units is now a few years old and has proven reasonably popular; we've had a decent number of people buy both small and large amounts of space, certainly more than were buying chunks before (possibly because the decisions are easier). I suspect that it's also easier to explain to people, and certainly it's clear what a professor gets for their money. My guess is that being able to buy very small amounts of space (eg 50 GB) to meet some immediate and clear need also helps.
(Professors and research groups that have special needs and their own grant funding can buy their own hardware and have their Point of Contact run it for them in their sandbox. There have been a fairly wide variety of such fileservers over the years.)
PS: There are some obvious issues with our general approach, but there are also equal or worse issues with other alternate approaches in our environment.