A peculiar use of ZFS L2ARC that we're planning
In our SAN-based fileserver infrastructure we have a relatively small but very important and very busy pool. We need to be able to fail over this pool to another physical fileserver, so its data storage has to live on our iSCSI backends. But even with it on SSDs on the backends, going over the network with iSCSI adds latency and probably reduces bandwidth somewhat. We're not willing to move the pool to local storage on a fileserver; it's much more important that the pool stay up than that it be blindingly fast (especially since it's basically fast enough now). Oh, and it's generally much more important that reads be fast than writes.
But there is a way around this, assuming that you're willing to live with failover taking manual work (which we are): a large local L2ARC plus the regular SAN data storage. This particular pool is small enough that we can basically fit all of its data onto an affordable L2ARC SSD (and certainly all of the active data). A local L2ARC gives us the local (read) IO for speed and effectively reduces the actual backend data storage to a persistence mechanism.
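Setting this up on an existing SAN-backed pool is a one-command operation. A sketch of what it might look like, where the pool name 'tank' and the device name are hypothetical stand-ins for our real configuration:

```shell
# Add a local SSD as an L2ARC (cache) device to the pool.
# 'tank' and 'c4t1d0' are illustrative names, not our actual ones.
zpool add tank cache c4t1d0

# The device then appears under a separate 'cache' section here,
# distinct from the pool's (SAN-based) data vdevs.
zpool status tank
```

Because the cache device is its own vdev type, it doesn't change the redundancy or layout of the pool's actual data storage in the SAN.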
What makes this work is that a pool will import and run without its L2ARC device(s). Because the L2ARC is only a cache, ZFS is willing to bring up a pool with missing L2ARC devices. If we have to fail over the pool to another fileserver, it will come up without its L2ARC and be slower, but at least it will come up.
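A sketch of what the manual failover steps might look like on the second fileserver (again with hypothetical pool and device names):

```shell
# Import the pool on the new fileserver even though its local
# L2ARC device is missing; ZFS allows this because cache devices
# are not required for pool integrity, only for read performance.
zpool import tank

# 'zpool status' will report the old cache device as unavailable.
# Since it's physically on the other fileserver, it can simply be
# removed from the pool configuration.
zpool remove tank c4t1d0
```

If the second fileserver has its own spare SSD, a new local cache device could then be added there and would warm up from scratch.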
(A local L2ARC plus SAN data storage works for any pool and is what we're planning in general when we renew our fileserver infrastructure (hopefully soon). But it may have limited effectiveness for large pools, based on usage patterns and so on. What makes this particular pool special is that it's small enough that the L2ARC can basically store all of it. And the L2ARC doesn't need to be mirrored or anything expensive.)
PS: given that this pool is already on SSDs, I don't think that there's any point to a separate log device. Unlike an L2ARC, a SLOG holds data the pool actually needs (synchronous writes that haven't yet been committed to the main storage), so it would have to live in the SAN and be mirrored; we couldn't get away with a local SLOG plus the data in the SAN.