Chris's Wiki :: blog/linux/ZFSonLinuxWeakAreas Commentshttps://utcc.utoronto.ca/~cks/space/blog/linux/ZFSonLinuxWeakAreas?atomcommentsDWiki2014-05-08T22:07:20ZRecent comments in Chris's Wiki :: blog/linux/ZFSonLinuxWeakAreas.By Evert Wiesenekker on /blog/linux/ZFSonLinuxWeakAreastag:CSpace:blog/linux/ZFSonLinuxWeakAreas:60c5852925cf1f03921fedf10827370af72da6edEvert Wiesenekker<div class="wikitext"><p>Well, thanks to your posting I now know why my first experiences with ZFS on Linux (I tried CentOS and Ubuntu 12) gave memory problems. I was running three virtual machines on VirtualBox, and memory was not released while I was shutting VMs down. Sometimes VMs got aborted.</p>
<p>I decided to drop ZFS and am now running the same VMs on the ext filesystem without any problems.</p>
</div>2014-05-08T22:07:20ZBy linux-user on /blog/linux/ZFSonLinuxWeakAreastag:CSpace:blog/linux/ZFSonLinuxWeakAreas:2aaf536f4ea3ce965c832082e242ff329cd145fclinux-user<div class="wikitext"><p>I've started using ZFS a lot and like it. However, here are two problems that no one else has mentioned.</p>
<p>1. Putting a ZFS vdev on a LUKS-encrypted partition doesn't work well. The problem is that the mount has to be deferred until a password has been requested and the partition unlocked. I believe the problem is solvable, and the solution is probably quite simple, but I have spent too much time researching and trying, without luck.</p>
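<p>For what it's worth, one plausible shape for the deferred unlock is an explicit ordering between the cryptsetup unit and the pool import on a systemd-based distro. This is only a sketch under assumptions: the mapping name <code>cryptdata</code>, the drop-in path, and the use of <code>zfs-import-cache.service</code> are all hypothetical, not something the commenter confirmed.</p>

```
# /etc/crypttab -- unlock the partition at boot, prompting for a passphrase.
# "cryptdata" and the UUID placeholder are illustrative names.
cryptdata  UUID=<uuid-of-luks-partition>  none  luks

# Drop-in, e.g. /etc/systemd/system/zfs-import-cache.service.d/luks.conf:
# defer the pool import until the LUKS device has been unlocked.
[Unit]
Requires=systemd-cryptsetup@cryptdata.service
After=systemd-cryptsetup@cryptdata.service
```

<p>The idea is simply that the import (and hence the mounts) should not be attempted until the decrypted block device exists.</p>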
<p>2. Since Ubuntu boot ISOs don't include ZFS, booting from a flash drive to rescue your system won't let you access your ZFS files. I know that you're supposed to be able to build a customized ISO with extra modules, but the process is quite tedious and often fails; e.g., for a long time the GNOME usb-creator program produced garbage.</p>
</div>2013-12-19T16:56:04ZBy ewwhite on /blog/linux/ZFSonLinuxWeakAreastag:CSpace:blog/linux/ZFSonLinuxWeakAreas:21d0b89d1c9f8cc14c6277e8af83130615d32f78ewwhitehttp://serverfault.com/users/13325/ewwhite<div class="wikitext"><p>I've had ZFS on Linux in production systems backed by RHEL/CentOS for over a year, moving quickly away from my NexentaStor installations.</p>
<ul><li>I don't think it makes sense to use ZFS as the boot OS when Linux has other stable, proven alternatives. ZFS is for the data drives only.
</li>
<li>Getting the filesystems to mount deterministically usually requires setting a host ID in /etc/hostid.
</li>
<li>For pool creation and device naming, I use the WWNs of the devices found in /dev/disk/by-id rather than the typical /dev/sdX entries. This makes the pools somewhat portable and immune to problems caused by adding or removing controllers and by device renaming.
</li>
<li>I can't speak to TRIM, ACLs, etc.; they haven't been a problem yet in my usage.
</li>
<li>I need to double-check my disk-failure history. I'm not running spares on most of my ZFS on Linux data pools, but I don't believe you need the FMA to trigger things like a spare rebuild; I think the zpool "autoreplace" property handles this.
</li>
<li>For memory, I manually limit the ARC size to about 40%-45% of available RAM, since the ZFS ARC and the Linux virtual memory subsystem tend to fight. This resolved long-standing issues with things like rsyncing large file trees. There are a few other knobs that need twisting, but performance has been great.</li>
</ul>
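<p>The ARC cap described above is usually applied as a module option. Here is a minimal sketch, assuming Linux's <code>/proc/meminfo</code> and the ZFS on Linux <code>zfs_arc_max</code> tunable; the 40% figure matches the comment but is, of course, tunable:</p>

```shell
#!/bin/sh
# Compute roughly 40% of total RAM in bytes and emit a modprobe option
# line capping the ZFS ARC (the zfs_arc_max module parameter).
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
arc_max=$(( total_kb * 1024 * 40 / 100 ))
echo "options zfs zfs_arc_max=${arc_max}"
```

<p>The output line would typically go into /etc/modprobe.d/zfs.conf so the limit takes effect when the zfs module loads at boot.</p>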
</div>2013-12-15T17:11:43ZBy bassu on /blog/linux/ZFSonLinuxWeakAreastag:CSpace:blog/linux/ZFSonLinuxWeakAreas:ea73758e205206e2321f8b44bd2f82fb1e192f6bbassu<div class="wikitext"><p>It is quite elementary to keep your OS and storage separate -- simple as that. I am not sure why people are so obsessed with mixing ZFS into the root and boot filesystems when its main purpose is simply storage!</p>
<p>As for the memory leaks, they are everywhere; I have seen much worse in Xen, KVM, Apache, and other common apps on Linux. They might be more common with ZFS, but keep in mind that new technologies on existing platforms take time to mature, like any other OSS project out there!</p>
</div>2013-10-04T05:32:18ZBy Chris Siebenmann on /blog/linux/ZFSonLinuxWeakAreastag:CSpace:blog/linux/ZFSonLinuxWeakAreas:b44c0928aba54afe34a2b545727868063f966e8eChris Siebenmann<div class="wikitext"><p>The memory issues were also the killer for us, unfortunately. Part of
the problem (for us) was that we wouldn't have been able to test for
them in advance and in fact a production fileserver might initially work
and then fall over later as the usage patterns change. We might have
been fine, we might not have been, and the risk and uncertainty were too
high.</p>
</div>2013-09-19T18:28:18ZBy trx on /blog/linux/ZFSonLinuxWeakAreastag:CSpace:blog/linux/ZFSonLinuxWeakAreas:c8ab69362ad8c7d0e76a8b97b4dd5958db711292trx<div class="wikitext"><p>Thank you very much for this compact overview.</p>
<p>It seems that, in most cases, the only show-stopper is the last problem: I can boot from another device/partition/file system, I can postpone mounting the ZFS file systems at boot, I can use a more persistent device-naming scheme, I'll find a way around the performance-related issues for a start, and I'll monitor the output of 'zpool status' and the SMART daemon to find failed drives, but I cannot deal with random memory exhaustion. That's just not acceptable for a file system meant to be reliable.</p>
<p>Hope that will be the highest priority for ZoL contributors. Everything else can be improved later...</p>
</div>2013-09-19T14:03:52ZFrom 86.146.235.176 on /blog/linux/ZFSonLinuxWeakAreastag:CSpace:blog/linux/ZFSonLinuxWeakAreas:220d5ce82307b1be9319b8eb79cfc9ce9b2778ceFrom 86.146.235.176<div class="wikitext"><p>TRIM is also available in ZFS on Solaris 11.1</p>
</div>2013-09-05T08:50:27ZBy chris2 on /blog/linux/ZFSonLinuxWeakAreastag:CSpace:blog/linux/ZFSonLinuxWeakAreas:80d1af754b177bbb3b0b4115d0b124f3d3b17c54chris2<div class="wikitext"><p>In my limited experiments, I had no problems with a ZFS /, an ext2 /boot, and an initramfs with ZFS included (obviously). No GRUB tweaks required.</p>
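<p>A rough sketch of the layout this describes, under assumptions the commenter did not spell out: the pool name <code>rpool</code>, the root dataset name, and the /boot device are all hypothetical, and the initramfs is assumed to contain the zfs module and import the pool itself.</p>

```
# /etc/fstab -- ext2 /boot on its own small partition; / is mounted
# from ZFS by the initramfs, so it needs no fstab entry here.
/dev/sda1   /boot   ext2   defaults   0 2

# Kernel command line (in grub.cfg) -- the ZFS-aware initramfs imports
# the pool and mounts the named root dataset ("rpool/ROOT" is illustrative).
linux /vmlinuz root=ZFS=rpool/ROOT ro
```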
</div>2013-09-04T14:04:50Z