Revising our peculiar ZFS L2ARC trick
Here is a very smart question that my coworkers asked me today: if we have an L2ARC that's big enough to cache basically the entire important bit of one pool, is there much of a point to having that pool's regular data storage on SSDs? After all, basically all of the reads should be satisfied out of the L2ARC so the read IO speed of the actual pool storage doesn't really matter.
(Writes can be accelerated with a ZIL SLOG if necessary.)
Our current answer is that there isn't any real point to using SSDs instead of HDs on such a pool, especially in our architecture (where we have plenty of drive bay space for L2ARC SSDs). In current ZFS the L2ARC is lost on reboots (or pool exports and imports) and has to be rebuilt over time as you read from the regular pool vdevs, but for us these are very rare events anyways; most of our current fileservers have uptimes of well over a year. You do need enough RAM to hold the L2ARC index metadata in memory but I think our contemplated fileserver setup will have that.
(The one uncertainty over memory is to what degree other memory pressure (including from the regular ZFS ARC) will push L2ARC metadata out of memory and thus effectively drop things from the L2ARC.)
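As a rough illustration of the RAM cost involved, here is a back-of-envelope sketch. The per-record header size used here is an assumption, not a measured figure; numbers in the vicinity of 180 bytes per L2ARC buffer header have been cited for older Illumos code, and the actual size varies by version, so check your version's arc.h before trusting this.

```python
# Back-of-envelope estimate of in-memory L2ARC header overhead.
# HEADER_SIZE is an assumed figure; the real per-record header size
# depends on your Illumos/ZFS version.
L2ARC_SIZE = 1 << 40        # 1 TiB of L2ARC SSD space
RECORDSIZE = 128 * 1024     # default ZFS recordsize of 128 KiB
HEADER_SIZE = 180           # assumed bytes of ARC metadata per L2ARC record

records = L2ARC_SIZE // RECORDSIZE
overhead = records * HEADER_SIZE
print(f"{records} records -> {overhead / (1 << 30):.1f} GiB of header metadata")
```

With these assumptions a fully populated 1 TiB L2ARC of 128 KiB records costs on the order of 1.4 GiB of RAM just for headers; smaller records (or a smaller recordsize) push this up proportionally.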
Since I just looked this up in the Illumos kernel sources: L2ARC header information is considered ARC metadata, and ARC metadata is by default limited to one quarter of the ARC (although the ARC can be most of your memory). If you need to change this, you want arc_meta_limit. To watch how close to the limit you're running, you want to monitor arc_meta_used in the ARC kernel stats. The current size of (in-memory) L2ARC metadata is visible in the l2_hdr_size kernel stat. (What l2_hdr_size counts depends on the Illumos version. In older versions of Illumos I believe that it counts all L2ARC header data even if the data is currently in the ARC too. In modern Illumos versions it's purely for the headers of data that's only in the L2ARC, which is often the more interesting thing to know.)
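For the record, these stats can be pulled with kstat(1M); a small awk pipeline turns the raw numbers into a usage percentage. This is a sketch, and the stand-in arcstats function here just echoes sample kstat -p style output so the pipeline is self-contained; on a real Illumos machine you would use the commented-out kstat invocation instead.

```shell
arcstats() {
    # Stand-in for the real command on an Illumos system:
    #   kstat -p zfs:0:arcstats:arc_meta_used zfs:0:arcstats:arc_meta_limit
    # kstat -p prints "module:instance:name:statistic<TAB>value" lines.
    printf 'zfs:0:arcstats:arc_meta_used\t1073741824\n'
    printf 'zfs:0:arcstats:arc_meta_limit\t4294967296\n'
}

# Report how close ARC metadata usage is to its limit.
arcstats | awk '{ v[$1] = $2 }
    END { printf "arc_meta_used: %.1f%% of arc_meta_limit\n",
                 100 * v["zfs:0:arcstats:arc_meta_used"] / v["zfs:0:arcstats:arc_meta_limit"] }'
```

The same pipeline works for l2_hdr_size against arc_meta_used if you want to see what fraction of your ARC metadata the L2ARC headers are taking up.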