Wandering Thoughts archives

2015-11-30

My current dilemma: is it worth putting the root filesystem on a SSD

Due mostly to an extremely good deal at the end of last week, I'm getting a mirrored pair of 250 GB SSDs to add to my work machine. As 250 GB is not enough to hold all of the data that's currently on the machine, I'm going to have to be selective about what I put on the SSDs. Which leads to the question I'm currently considering: whether it's worth putting my machine's system filesystem on the SSDs, or whether I should use all of the space for a ZFS pool.

(As is the modern way, my system filesystem has /, /usr, and /var all in a single large filesystem.)

The largest downside of putting my system filesystem on the SSDs is that it would consume at least 60 GB of the limited SSD space for good, and probably more so that I'd have a safety margin for various sorts of space usage there (my current root filesystem is about 80 GB, but I'd likely offload some of the things that normally use space there). A secondary downside is that I would have to actively partition the SSDs instead of just giving them to ZFS as whole disks (even on Linux, ZFS is a bit happier to be handed whole disks). The data of my own that I'd like to move to the SSDs takes up enough space that losing 60-80 GB of it hurts at least a bit.

(In theory one might worry about /var/log and /var/log/journal seeing constant write traffic. In practice write endurance doesn't seem to be a big issue with SSDs these days, and these particular ones have a good reputation.)
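To make the partitioning point concrete, the difference is between handing ZFS the whole disks and handing it partitions carved out of them; with made-up device names, it's roughly the difference between:

zpool create ssdpool mirror /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B

and:

# partition each SSD first, then give ZFS only one partition from each
zpool create ssdpool mirror /dev/disk/by-id/ata-SSD_A-part2 /dev/disk/by-id/ata-SSD_B-part2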

The largest downside of not putting my system filesystem on the SSDs is of course that I don't get SSD performance for it. The big question to me is how much this matters on my system. On the one hand, certainly one of the things I do is to compile code, which requires a bunch of programs and header files from /usr and so on, and I would like this to be fast. On the other hand, my office machine already has 32 GB of RAM so I would hope that the compiler, the headers, and so on are all in the kernel's buffer cache before too long, at which point the SSD speed shouldn't matter. On the third hand, I don't have any actual numbers for how often I'm currently reading things off disk for / as opposed to already having them in memory. I can certainly believe that a modern system loads random scattershot programs and shared libraries and so on from / and /usr on a routine basis, all with relatively random IO, and that this would be accelerated by an SSD.

If I were determined enough, I suppose I would first try out a SSD root filesystem to see how much activity it saw while I did various things on the system. If it was active, I'd keep it; if it wasn't, I'd move the root filesystem back to HDs and give the whole SSDs to ZFS. The problem with this approach is that it involves several shifts of the root filesystem, each of which is a disruptive pain in the rear (I probably want to boot off the SSDs if the root filesystem is there, for example). I'm not sure I'm that enthused.

(I'm not interested in trying to make root-on-ZFS work for me, and thus have the root filesystem sharing space flexibly with the rest of the ZFS pool.)

(What I should really do here is watch IO stats on my current software RAID mirror of the root filesystem to see how active it is at various times. If it's basically flatlined, well, I've got an answer. But it sure would be handy if someone had already investigated this (yes, sometimes I'm lazy).)
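A minimal version of this is probably just pointing iostat (from sysstat) at the machine and watching the line for whatever md device holds the root filesystem's mirror:

# '-d' for device stats only, '-x' for extended stats, '-k' for KB figures,
# reported every five minutes:
iostat -dxk 300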

Sidebar: the caching alternative

In theory Linux now has several ways to add SSDs as caches for HD-based filesystems; there are at least bcache and dm-cache. I've looked at these before and my reaction from then still mostly stands: each of them would require me to migrate my root filesystem in order to set them up. They both do have the advantage that I could use only a small amount of disk space on the SSDs and get a relatively significant benefit.
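For illustration, a basic bcache setup is roughly the following (the device names here are made up):

# format an SSD partition as the cache device and a new, empty device as
# the backing device, attaching them to each other in the process:
make-bcache -C /dev/sdc1 -B /dev/md1

The filesystem then has to live on the resulting /dev/bcache0 device, which is why an existing root filesystem would have to be migrated onto it rather than converted in place.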

They also both have the disadvantage that the thought of trying to use either of them for my root filesystem and getting the result to be reliable in the face of problems gives me heartburn (for example, I don't know if Fedora's normal boot stuff supports either of them). I'm pretty sure that I'd be a pioneer in the effort, and I'd rather not go there. Running ZFS on Linux is daring enough for me.

SSDRootDilemma written at 22:36:01

2015-11-07

Why I (still) care about SELinux and its flaws

A perfectly sensible reaction to my series of disgruntlements is to ask why I still care enough to write about it. There is all sorts of ill-considered software out there in the world, and disabling SELinux is simple enough. I don't gripe about Ubuntu's AppArmor, for example (which we disable too). As it happens, there are two major reasons that I continue to care about SELinux.

First, the continued existence and popularity of SELinux drains people's time and attention away from doing other, more usable security work. Linux needs security work of all sorts, including defenses against normal programs being compromised. In fact, the existence and theoretical purity and power of SELinux (and its integration into the kernel and major distributions) serves to block most explorations of more usable but messier solutions. If you propose doing something, especially if it touches user-level programs, I expect that you'll get told 'SELinux already solves that (and better)'.

(If you want an idea of what such solutions might look like, look at the work OpenBSD is doing here with eg the tame()/pledge() system call and other related things.)

Or in short, SELinux is effectively a high stakes gamble with Linux security. People are betting on what is very close to mathematical security, which would be great if it worked but which in practice often collapses into the total failure that is SELinux's toxic mistake.

Second, SELinux is increasingly being advocated as a default thing for everyone to use as part of hardening Linux, not just as an extra add-on for the paranoid. This is not exactly a new development (it's why SELinux is the default in Red Hat Enterprise Linux and Fedora), but my strong impression is that it's been ramping up lately (more and more people will loudly tell you that you're doing it wrong if you disable SELinux, for example). When SELinux is supposed to be for everyone, well, it affects me more and more; it's increasingly present and increasingly mandatory.

Also, as part of caring about the direction of Linux in general, I care about what is theoretically supposed to be the Linux answer to (user-level) security issues for everyone. If SELinux is Linux's security solution and I think it's a bad idea, every so often my irritation boils over and I write another blog entry here.

(Real, usable security is one of my hot buttons in general, as you may have either noticed or guessed.)

SELinuxWhyICare written at 00:33:02

2015-11-04

SELinux's usability, illustrated once again

I was recently setting up the normal IPSec/IKE daemon stack on a stock Fedora 22 machine in order to reproduce and get a clean kernel stack trace for a kernel panic involving IPSec (which is very relevant to me). I keep around such a stock Fedora install in a virtual machine because it's useful to have a reference that hasn't been particularly mangled by me in order to make it usable; as part of that stock-ness, it has SELinux enabled.

As part of IPSec setup, I needed to set up a host key:

ipsec newhostkey --configdir /etc/ipsec.d --output /etc/ipsec.d/thismach.secrets
# print it out for the config file:
ipsec showhostkey --right

I set up an appropriate /etc/ipsec.d/testconn.conf, uncommented the bit in /etc/ipsec.conf to include things from /etc/ipsec.d (why this isn't standard I don't know), started up IPSec services with 'systemctl start ipsec', and SELinux promptly and cheerfully reported that it had blocked pluto's access to /etc/ipsec.d/thismach.secrets because the file did not have the magic type attribute of (I believe) ipsec_key_file_t.
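For the record, the usual dance to inspect and fix this sort of thing is something like the following (I haven't verified that ipsec_key_file_t is the exact type the policy wants here):

# show the file's current SELinux context:
ls -Z /etc/ipsec.d/thismach.secrets
# relabel it to whatever the policy thinks it should be:
restorecon -v /etc/ipsec.d/thismach.secrets
# or set the type by hand:
chcon -t ipsec_key_file_t /etc/ipsec.d/thismach.secrets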

Let me be plain here: this is robot logic. SELinux knew exactly what was going on; an IPSec daemon was trying to read keys from the place that IPSec keys are configured to be and are known to be. The key had even been created using official tools. But because the magic SELinux chicken had not been waved over the file, the request was denied and my IPSec connection failed to start. This is not usable, appropriate security; instead this is the kind of horseshit that causes sysadmins to chuck SELinux over the transom.

(Yes, there are high security environments where it's sensible to worry about 'maybe someone hardlinked /etc/shadow into /etc/ipsec.d and is now exfiltrating it through a pluto vulnerability'. Those environments are not typical and the default security setup should not be designed for them, not if you want people to actually use said (default) security setup.)

I don't have any particular opinions about exactly how SELinux should solve this, but it needs to. Robot logic is one of the deep failings of SELinux and people determinedly standing behind it is one of the big things that leads to SELinux's toxic mistake.

(As usual, the other piece of terrible SELinux usability is that I would have had no idea that things were blowing up due to SELinux if I hadn't been running a desktop on the Fedora 22 virtual machine so that SELinux could pop up an alert. You can imagine how enthused this makes me about ever deploying SELinux on servers.)
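In theory a server surfaces these denials too, provided auditd is running and you know to go digging in the audit log with something like:

ausearch -m avc -ts recent
# or, more crudely:
grep 'avc: .*denied' /var/log/audit/audit.log

The catch is that you have to already suspect SELinux before you go looking, which is rather the problem.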

SELinuxUsability written at 23:50:02

