== ZFS performance and modern solid state disk systems

In [[a ZFS discussion on lobste.rs https://lobste.rs/s/k0no1b/why_not_zfs]] I said ([[in part https://lobste.rs/s/k0no1b/why_not_zfs#c_gqdhdf]]):

> A certain amount of ZFS’s nominal performance issues are because ZFS
> does more random IOs (and from more drives) than other filesystems
> do. A lot of the stories about these performance issues date from
> the days when hard drives were dominant, with their very low IOPS
> figures. I don’t think anyone has done real performance studies in
> these days of SSDs and especially NVMe drives, but naively I would
> expect the relative ZFS performance to be much better these days since
> random IO no longer hurts so much.

There are two aspects of this. First, there are obvious areas of ZFS performance that were limited by IOPS and bandwidth, such as [[deduplication ZFSDedupTodayNotes]] and [[RAIDZ read speeds ZFSViableRaidzWithSSDs]]. Modern NVMe drives have very high values for both, high enough to absorb a lot of reads, and even SATA and SAS SSDs may be fast enough for many purposes. However, there are real uncertainties over things like [[what latency SSDs may have for isolated reads ../tech/SSDIOPSVersusLatency]], so anyone interested would want to test and measure real world performance. For deduplication, it's difficult to get a truly realistic test without actually trying to use it for real, which has an obvious chicken and egg problem.

(ZFS RAIDZ also has other unappealing aspects, like the difficult story around growing a raidz vdev.)

Second and more broadly, there is the question of [[what 'good performance' means on modern solid state disks ../tech/FilesystemPerfQuestionToday]] and how much performance most people can actually use and care about. If ZFS has good (enough) performance on modern solid state disks, exactly how big the numbers are compared to the alternatives doesn't necessarily matter as much as other ZFS features. Related to this is the question of how ZFS generally performs on modern solid state disks, especially without extensive tuning, and how hard you have to push programs before ZFS becomes the performance limit.

(There is [[an interesting issue for NVMe read performance on Linux https://github.com/openzfs/zfs/issues/8381]], although much of the discussion dates from 2019.)

Of course, it's possible that people have tested and measured modern ZFS on modern solid state disk setups (SSD or NVMe) and posted the results somewhere. On today's Internet, it's sadly hard to discover this sort of thing through search engines. While we've done some poking at ZFS performance on mirrored SATA SSDs, I don't think we have trustworthy numbers, partly because our primary interest was in performance over NFS on [[our fileservers ../linux/ZFSFileserverSetupIII]], and we definitely observed a number of differences between local and NFS performance.

(My personal hope is that ZFS can saturate a modern SATA SSD in a simple single disk pool configuration (or a mirrored one). I suspect that ZFS can't drive NVMe drives at full speed or as fast as other filesystems can manage, but I hope that it's at least competitive for sequential and random IO. I wouldn't be surprised if ZFS compression reduced overall read speeds on NVMe drives for compressed data.)
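
(As an illustration of the sort of crude 'real world' measurement I have in mind, here is a minimal sketch in Python of timing isolated 4 KiB random reads from a large file. The file path is purely hypothetical, and on a real system you'd have to arrange for the ARC and the page cache not to give you flattering numbers; something like fio will do a much more thorough job, this just shows the basic idea.)

  #!/usr/bin/env python3
  # Rough sketch: time isolated random 4 KiB reads from a large file.
  # The path is a made-up example; the file should be created in
  # advance (eg with dd) and be much larger than RAM, or caching
  # (the ARC, the page cache) will make most reads look very fast.
  import os
  import random
  import statistics
  import time

  TEST_FILE = "/tank/scratch/testfile"   # hypothetical file on a ZFS pool
  BLOCK_SIZE = 4096
  SAMPLES = 1000

  fd = os.open(TEST_FILE, os.O_RDONLY)
  size = os.fstat(fd).st_size
  latencies = []
  for _ in range(SAMPLES):
      # pick a random block-aligned offset and time a single small read
      offset = random.randrange(0, size - BLOCK_SIZE)
      offset -= offset % BLOCK_SIZE
      start = time.monotonic()
      os.pread(fd, BLOCK_SIZE, offset)
      latencies.append(time.monotonic() - start)
  os.close(fd)

  latencies.sort()
  print("median %.3f ms, p99 %.3f ms" % (
      statistics.median(latencies) * 1000,
      latencies[int(SAMPLES * 0.99)] * 1000,
  ))

(Run against a similar file on, say, ext4 on the same sort of drive, the difference in the medians and p99s would give at least a rough idea of how much latency ZFS's extra work adds for isolated reads.)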