The FTE pricing gamble (for vendors)
A popular move with a number of vendors we've dealt with is to offer us site licenses priced (only) on how many FTEs we have (for those who have been lucky enough not to encounter this term, it means how many 'full time equivalent' people you have). In the process I've come to feel that offering FTE-based pricing is a high-stakes gamble for a vendor, one that doesn't necessarily work in their favour. One way to put it is that FTE-based pricing is an artificial attempt to make us use the product everywhere.
In an FTE pricing model the actual per-unit price of the product depends on how widely you use it. If you use it everywhere or nearly everywhere then its price can be quite low. If you use it in only a few places and only for a few people, its effective price is very high. Compounding this is that FTE-based licensing is generally priced with the assumption (either implicit or explicit) that the product will be widely used.
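To make the arithmetic concrete, here's a toy illustration (the FTE count and the per-FTE rate are made-up numbers for the sake of the example, not anything a real vendor has quoted):

```python
# Made-up numbers: an FTE-priced site license has a fixed total cost,
# so the effective per-user price depends entirely on how many people
# actually use the product.
fte_count = 10_000        # hypothetical organization-wide FTE count
price_per_fte = 20        # hypothetical rate, dollars per FTE per year
total_cost = fte_count * price_per_fte   # the same bill no matter what

for actual_users in (10_000, 1_000, 100):
    per_user = total_cost / actual_users
    print(f"{actual_users:>6} actual users -> ${per_user:,.0f} per user per year")
```

The total bill never moves, so at narrow usage the effective per-user price climbs by orders of magnitude.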
(Another way to put this comes from Windows licensing; you pay an 'FTE-based product tax' for every person whether or not they use the product. The vendor goal is what it was for Microsoft, to make any alternative the more expensive choice because you are already paying for the vendor's product.)
Some vendors come to us with FTE pricing when we already use their product widely or near universally. These vendors can get away with FTE pricing even if it doesn't save us money (and sometimes it makes internal political sense). But other vendors have come to us with FTE pricing when they are not so widely used (and the vendor should know this), or even worse used only a bit (or not yet used at all). These vendors are taking a very high-stakes gamble: they are betting that they can force us to pay their price and as a result push their product throughout the organization. This can and does backfire, and when it backfires it often does so violently. For obvious reasons, this goes especially badly when a vendor is trying to change a much cheaper long-standing pricing model to an FTE model.
(You would think that vendors would avoid doing something like this, but apparently not. I've been a (distant) spectator to just such a backfire recently, which is why this issue is on my mind.)
By the way, this FTE pricing model is especially dangerous with a larger organization because the absolute dollars involved are much bigger. If the only organizational unit a vendor will license for is 'the entire University of Toronto', well, I believe we are at something over 10,000 FTEs. You can imagine what that does to prices, among other things.
Sidebar: the problem with expensive university-wide licenses here
In some universities, IT and the IT budget are centrally provided and centrally managed. That is not the case here; faculties and departments and groups fund their IT individually. This means that there is generally no one place to fund an expensive university-wide license in one decision; instead it must be funded by running around to lots of different people to get them to chip in. This takes a lot of time and work and injects a lot of uncertainty into the process, especially if it must be renewed on a year-to-year basis (since next year a particular department may not have the budget or may decide that they don't have that much need, and so on and so forth).
ZFS filesystem compression and quotas
ZFS filesystem compression is widely seen as basically a universally good thing (unlike deduplication); turning it on almost always gives you a clear space gain for what is generally a minor cost. Unfortunately it turns out to have an odd drawback in our environment in how it interacts with ZFS's disk quotas. Put simply, ZFS disk quotas limit the physical space consumed by a filesystem, not the logical space. In other words they limit how much post-compression disk space a filesystem can use instead of the pre-compression space. This has two drawbacks.
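As a toy model of this accounting (this is not real ZFS code, just the quota semantics as described above, with made-up sizes and compression ratios):

```python
# Toy model of ZFS-style quota accounting (not real ZFS code): the
# quota caps post-compression (physical) bytes, so the amount of
# logical data that fits depends on how well the data compresses.
def physical_gb(logical_gb, compression_ratio):
    """On-disk space consumed by logical_gb of data."""
    return logical_gb / compression_ratio

def fits_under_quota(logical_gb, compression_ratio, quota_gb=10):
    return physical_gb(logical_gb, compression_ratio) <= quota_gb

# Writing to a filesystem with 10 GB of quota space left:
print(fits_under_quota(10, 1.0))   # incompressible data: 10 GB exactly fills it
print(fits_under_quota(25, 2.5))   # compressible data: 25 GB also fits
# Rewriting the same logical bytes with less compressible contents
# increases physical usage, which is why free space can go down:
print(physical_gb(10, 2.0), "->", physical_gb(10, 1.0))
```

The same quota admits anywhere from 10 GB to 25 GB of actual data here, depending entirely on compressibility.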
The first drawback is simply the user experience. In some situations writing 10 GB to a filesystem with 10 GB of quota space left will fill it up; in other situations you'll be left with a somewhat unpredictable amount of space free afterwards. Similarly, if you have 10 GB free and rewrite portions of an existing file (perhaps you have a database writing and rewriting records), your free space can go down. Or up. All of this can be explained but generally not predicted, and I think it's going to be at least a bit surprising to people.
(Of course these user experience problems exist even without quotas, because your pool only has so much space and how that space gets used becomes unpredictable.)
The more significant problem for us is that we primarily use quotas to limit how much data we have to back up for a single filesystem. Here the space usage we care about and want to limit is actually the raw, pre-compression space usage. We don't care how much space a filesystem takes on disk, we care how much space it will take on backups (and we generally don't want to compress our backups for various reasons). Quotas based on logical space consumed would be much more useful to us than the current ZFS quotas.
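A sketch of why this matters (toy model again, with an assumed compression ratio; for what it's worth, ZFS does report logical usage via the read-only logicalused property, but it offers no quota on it):

```python
# Toy model: a backup reads the uncompressed (logical) data, so a
# quota on physical space doesn't actually cap what we must back up.
def backup_gb(physical_used_gb, compression_ratio):
    """Logical data behind physical_used_gb of compressed on-disk space."""
    return physical_used_gb * compression_ratio

# A filesystem sitting at its 10 GB physical quota, data compressing 2x:
print(backup_gb(10, 2.0))   # 20 GB of data to back up, double the quota
```

With a logical-space quota the backup exposure would be capped at the quota itself, regardless of how well the data happens to compress.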
(Since we have to recreate all of our pools anyways I've been thinking about whether we want to change our standard pool and filesystem configurations. My tentative conclusion is that we don't want to turn compression on, largely because of the backup issue combined with it probably not saving people significant amounts of space.)