Chris's Wiki :: blog/solaris/ZFSSLOGLossEffects Commentshttps://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSSLOGLossEffects?atomcommentsDWiki2015-06-22T16:20:44ZRecent comments in Chris's Wiki :: blog/solaris/ZFSSLOGLossEffects.By Chip Schweiss on /blog/solaris/ZFSSLOGLossEffectstag:CSpace:blog/solaris/ZFSSLOGLossEffects:541467b6ab40584079582fe8d95028cca18cce6fChip Schweiss<div class="wikitext"><p>I've found the hard way that losing a single log device on a running system does at least leave a permanent mark on the pool. There is no data loss, but the ZFS filesystem that had transactions in flight at the time of the log device failure gets flagged as having permanent errors. While running off of my DR pool, a ZeusRAM failed. Until I can resync the entire ZFS filesystem, the pool carries this scar. </p>
<p>Scrubs show no data errors. A full file comparison was even run against the primary pool. There is a short thread about this on the Illumos mailing list.</p>
<pre>
root@mir-dr-zfs01:/root# zpool status -v drpool
  pool: drpool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 73h55m with 0 errors on Wed Jan 21 09:51:29 2015
config:

        NAME                       STATE     READ WRITE CKSUM
        drpool                     ONLINE       0     0     0
          raidz2-0                 ONLINE       0     0     0
            c1t5000C5006251FFEBd0  ONLINE       0     0     0
            c1t5000C5006252A867d0  ONLINE       0     0     0
            .... lots of disks not listed ....
        cache
          c2t3d0s0                 ONLINE       0     0     0
        spares
          c1t5000C50062532063d0    AVAIL
          c1t5000C50062533883d0    AVAIL
          c1t5000C50062533967d0    AVAIL
          c1t5000C50062520793d0    AVAIL
          c1t5000C50062521153d0    AVAIL

errors: Permanent errors have been detected in the following files:

        drpool/ERL/TCIA:<0x0>
</pre>
</div>2015-06-22T16:20:44ZBy Alex on /blog/solaris/ZFSSLOGLossEffectstag:CSpace:blog/solaris/ZFSSLOGLossEffects:8beaaaee87453b3938b60ddedf3b308786dacae5Alex<div class="wikitext"><p>Another aspect to consider is performance: if you depend on the ZIL to speed up synchronous writes, losing your sole, non-mirrored slog device will degrade performance until you have it replaced, which may not be acceptable. (A typical case is a VMware NFS backend, where all writes are synchronous.)</p>
</div>2015-01-15T10:16:39ZBy James on /blog/solaris/ZFSSLOGLossEffectstag:CSpace:blog/solaris/ZFSSLOGLossEffects:cf01754e97575a62995e7005f8698355a0440494James<div class="wikitext"><p>"ZFS allows you to import pools that have lost their SLOG, even if they were shut down uncleanly and data has been lost" This is only true for pool versions 19 and above; before that, the loss of a slog did cause pool loss, which is probably the source of your impression.</p>
</div>2015-01-11T14:13:14Z
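
As an editorial aside on James's point about pool versions 19 and above: the mechanics he describes can be sketched with standard `zpool` commands. This is a hedged illustration using a placeholder pool name (`tank`) and placeholder device names, not commands taken from the comments above.

```
# Import a pool whose separate log device is missing (pool version >= 19).
# Transactions that existed only in the lost slog are discarded on import.
zpool import -m tank

# The dead log device can then be removed from the pool configuration:
zpool remove tank <failed-log-device>

# For resilience, attach slogs as a mirror so a single failure is survivable:
zpool add tank log mirror <dev1> <dev2>
```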