Our tradeoffs on ZFS ZIL SLOG devices for pools

January 13, 2015

As I mentioned in my entry on the effects of losing a SLOG device, our initial plan (or really idea) for SLOGs in our new fileservers was to use a mirrored pair for each pool that we gave a SLOG to, split between iSCSI backends as usual. This is clearly the most resilient choice for a SLOG setup, assuming that you have SSDs with supercaps; it would take a really unusual series of events to lose any committed data in the pool.
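As a concrete sketch of what that setup looks like (pool and device names here are made up for illustration, not our real ones), attaching a mirrored SLOG pair with one device from each iSCSI backend is a single zpool operation:

```shell
# Hypothetical pool 'tank'; the two devices stand in for one SSD chunk
# from each iSCSI backend, so the log mirror is split across backends.
zpool add tank log mirror c5t600144F0AAAAd0 c6t600144F0BBBBd0

# Afterwards 'zpool status tank' should show a 'logs' section containing
# a mirror vdev with the two devices.
```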

On ZFS mailing lists that I've read, there are plenty of people who think that using mirrored SSDs for your SLOG is overkill, given that you only lose data on the extremely unlikely event of a simultaneous server and SLOG failure. Going unmirrored would save us one SLOG device (or chunk) per pool, which has its obvious attractions.

If we're willing to drop to one SLOG device per pool and live with the resulting small chance of data loss, a more extreme possibility is to put the SLOG device on the fileserver itself instead of on an iSCSI backend. The potential big win here would be moving from iSCSI to purely local IO, which presumably has lower latency and thus would enable the fileserver to respond to synchronous NFS operations faster. The drawback is that we couldn't fail over pools to another fileserver without either abandoning the SLOG (with potential data loss) or physically moving the SLOG device to the other fileserver. While we've almost never failed over pools, especially remotely, I'm not sure we want to abandon the possibility quite so definitely.

(And before we went down this road we'd definitely want to measure the IO latencies of SLOG writes to a local SSD versus SLOG writes to an iSCSI SSD. It may well be that there's almost no difference, at which point giving up the failover advantages would be relatively crazy.)
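One crude way to do that comparison (a sketch only; the fio parameters and device paths are assumptions, and we haven't actually run this) is to issue small synchronous writes at queue depth 1 against each candidate device, which roughly approximates the ZIL's write pattern, and compare the reported completion latencies:

```shell
# WARNING: this writes to the raw devices; only run against scratch SSDs.
# Hypothetical paths: a local SSD versus an iSCSI-backed one.
fio --name=local-slog --filename=/dev/rdsk/c2t0d0 \
    --rw=write --bs=4k --sync=1 --iodepth=1 \
    --runtime=30 --time_based

fio --name=iscsi-slog --filename=/dev/rdsk/c5t600144F0AAAAd0 \
    --rw=write --bs=4k --sync=1 --iodepth=1 \
    --runtime=30 --time_based
```

The interesting numbers are the average and tail completion latencies fio prints for each job; if they're close, the failover flexibility of iSCSI SLOGs wins.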

Since we aren't yet at the point of trying SLOGs on any pools or even measuring our volume of ZIL writes, all of this is idle planning for now. But I like to think ahead and to some extent it affects things like how many bays we fill in the iSCSI backends (we're currently reserving two bays on each backend for future SLOG SSDs).
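(When we do get around to measuring our ZIL write volume, the usual tool on Solaris-derived systems is Richard Elling's DTrace-based zilstat script; this is a sketch of typical usage, assuming it's installed locally:)

```shell
# Report ZIL activity (bytes and operations) once a second, ten times.
# Low numbers here would suggest SLOGs aren't worth the SSD cost.
./zilstat.ksh 1 10
```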

PS: Even if we have a low volume of ZIL writes in general, we may find that we hit the ZIL hard during certain sorts of operations (for example, unpacking tarfiles or doing VCS operations), and it's worth adding SLOGs just so we don't perform terribly when people do them. Of course this is going to be quite affected by the price of appropriate SSDs.


Comments on this page:

When the slog-on-SSD functionality first came out some people played around with the possibilities:

The example is a bit contrived, but I certainly found it interesting when it came out. Of course now a lot of systems are using SSDs as a caching layer (especially since their cost has come down).



Last modified: Tue Jan 13 00:38:50 2015