Why I want a solid ZFS implementation on Linux
The short version of this is 'ZFS checksums and ZFS scrubs'. Without strong per-block integrity protections, there are two issues that I increasingly worry about on my Linux workstations with mirrored disks: read errors on the remaining live disk when resynchronizing a RAID-1 mirror after it loses one disk, and slow data loss due to undetected read errors and corrupted on-disk data. Slow data loss is also a worry for backups on a single backup disk or especially an archival disk (I'll have more than one archive disk, but cross-verification may be very painful).
(ZFS also offers flexible space management for filesystems, but this is less of an issue for me. In practice the filesystems on my workstation just grow slowly over time, which is a scenario that's already handled by LVM. I might do some reorganization if I could shrink filesystems easily but probably not much.)
ZFS's block checksums combined with regular scrubs basically immunize me against these creeping problems. Unless I'm very unlucky I can pretty much count on any progressive disk damage getting repaired, and if I'm unlucky at least I'll know about it and maybe I can retrieve things from backups. Of course in theory Btrfs can do all of this too, but Btrfs remains not ready for production, and unlike ZFS this applies to its fundamental code, not just the bits that connect the core ZFS code to Linux.
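(The scrub-and-check cycle I'm talking about is just the standard pair of zpool commands; the pool name 'tank' here is a placeholder, and this is a sketch of routine use rather than a full maintenance setup.)

```shell
# Start a scrub: ZFS reads every allocated block in the pool,
# verifies each block against its stored checksum, and rewrites
# any bad copy from the other side of the mirror.
zpool scrub tank

# Afterwards, check the outcome. The 'scan:' line reports how much
# was repaired, and unrecoverable problems show up in the per-device
# READ/WRITE/CKSUM error counters (and under 'errors:').
zpool status -v tank
```

On a system with working redundancy, running this regularly (say, weekly from cron or a timer) is what turns silent bit rot into either an automatic repair or at least a visible report.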
(That ZFS is not integrated into the mainline kernel also makes it somewhat risky to use ZFS on distributions like Fedora that stick closely to the current mainline kernels and update frequently. Btrfs is obviously much better off here, so I really wish it was stable and proven in widespread usage.)
I suppose the brute force overkill solution to this dilemma is an OmniOS-based fileserver that NFS exports things to my Linux workstation, but there are various drawbacks to that (especially at home).
(Running my entire desktop environment on OmniOS is a complete non-starter.)
(This is sort of the background explanation behind a tweet.)
Why I'm not looking at installing OmniOS via Kayak
As far as I can see, OmniOS's Kayak network install system is the recommended way both to install a bunch of OmniOS machines and to customize the resulting installs (for example, to change the default sizes of various things). However, even setting aside my usual issues with automatic installers (which can probably be worked around), I've found myself uninterested in trying to use Kayak. The core problem for me is that Kayak only seems to really be supported on an OmniOS host.
The blunt truth is that we're not going to use OmniOS for very much here. It's going to run our fileservers, but while those are very important machines there are only a handful of them. I don't want to have to set up and manage an additional OmniOS machine (and a bunch of one-off infrastructure on that machine) simply to install a handful of fileservers with custom parameters and some additional conveniences. The cognitive overhead is simply not worth it. Things would be different if I could confidently host a Kayak system on an Ubuntu machine, as we have plenty of Ubuntu machines and lots of systems in place for running them.
I'm aware that there's some documentation for hosting Kayak on a Linux system. Unfortunately 'here's what someone tried once and got working' is not anywhere near as strong as 'we officially try to make this work, and here is information on the general things Kayak needs and how it all works'. One of them means that people will take bug reports; the other implies that if things break I'm basically on my own. I'm not putting crucial fileserver infrastructure into a 'you're on your own' setup; it would be irresponsible.
(Well, it would be irresponsible to do it when we don't have a relatively strong need to do so. We don't here, as manual OmniOS installs are basically as good as a Kayak install and are considerably less complex overall.)