Wandering Thoughts archives

2015-12-11

The ArchLinux wiki has quietly become a good resource for me

I mentioned this on Twitter and it keeps showing up in my entries (eg), so I might as well say it explicitly: increasingly, the ArchLinux wiki is becoming one of my relatively highly trusted information sources, both for Linux-specific information and often for more general things like X. I don't yet search it first, but when it comes up in my web searches I'm more and more inclined to stop looking at anything else.

I like the ArchLinux wiki for two reasons. First, they have all sorts of information on all sorts of things, many of them kind of geeky and obscure, and the content is written for a fairly technical audience (and often includes specific commands that I can immediately use). Second, the information seems to be pretty solid and trustworthy, based both on reading about areas that I already know and on using information from the wiki. I don't know exactly how ArchLinux has managed to wind up with such a good technical resource, but I imagine that it says something about the sort of users that it attracts (since, well, it takes users to build and maintain a wiki).

Of course, all of this makes me somewhat curious about ArchLinux itself; that it has a cool wiki suggests that it might be cool itself. Sadly I don't really have any machines that I'm looking to (re)install with alternate Linuxes any time soon, so I'm unlikely to do more than read about it (if that). Still, you never know. Maybe someday I'll get sufficiently disgruntled with all of the major alternatives.

(In theory I could install ArchLinux in a virtual machine. In practice I generally don't expect this to tell me anything interesting about an OS, and it suffers from the usual 'why am I doing this?' problem I have with just playing around with stuff in general. I've never been the kind of person who had installs of N different Linuxes sitting in partitions on their drive, just so they could play around with them.)

ArchLinuxWikiLike written at 00:57:27

2015-11-30

My current dilemma: is it worth putting the root filesystem on an SSD

Due mostly to an extremely good deal at the end of last week, I'm getting a mirrored pair of 250 GB SSDs to add to my work machine. As 250 GB is not enough to hold all of the data that's currently on the machine, I'm going to have to be selective about what I put on the SSDs. Which leads to the question I'm currently considering: is it worth putting my machine's system filesystem on the SSDs, or should I use all of the space for a ZFS pool?

(As is the modern way, my system filesystem has /, /usr, and /var all in a single large filesystem.)

The largest downside of putting my system filesystem on the SSDs is that it would consume at least 60 GB of the limited SSD space for good, and probably more so that I have a safety margin for various sorts of space usage there (my current root filesystem is about 80 GB, but I'd likely offload some of the things that normally use space there). A secondary downside is that I would have to actively partition the SSDs instead of just giving them to ZFS as whole disks (even on Linux, ZFS is a bit happier to be handed whole disks). There's enough of my own data that I'd like to move to the SSDs that losing 60 to 80 GB of space hurts at least a bit.

(In theory one might worry about /var/log and /var/log/journal seeing constant write traffic. In practice write endurance doesn't seem to be a big issue with SSDs these days, and these particular ones have a good reputation.)

The largest downside of not putting my system filesystem on the SSDs is of course that I don't get SSD performance for it. The big question to me is how much this matters on my system. On the one hand, certainly one of the things I do is to compile code, which requires a bunch of programs and header files from /usr and so on, and I would like this to be fast. On the other hand, my office machine already has 32 GB of RAM so I would hope that the compiler, the headers, and so on are all in the kernel's buffer cache before too long, at which point the SSD speed shouldn't matter. On the third hand, I don't have any actual numbers for how often I'm currently reading things off disk for / as opposed to already having them in memory. I can certainly believe that a modern system loads random scattershot programs and shared libraries and so on from / and /usr on a routine basis, all with relatively random IO, and that this would be accelerated by an SSD.

If I were determined enough, I suppose I would first try out an SSD root filesystem to see how much activity it saw while I did various things on the system. If it was active, I'd keep it; if it wasn't, I'd move the root filesystem back to HDs and give the whole SSDs to ZFS. The problem with this approach is that it involves several shifts of the root filesystem, each of which is a disruptive pain in the rear (I probably want to boot off the SSDs if the root filesystem is there, for example). I'm not sure I'm that enthused.

(I'm not interested in trying to make root-on-ZFS work for me, and thus have the root filesystem sharing space flexibly with the rest of the ZFS pool.)

(What I should really do here is watch IO stats on my current software RAID mirror of the root filesystem to see how active it is at various times. If it's basically flatlined, well, I've got an answer. But it sure would be handy if someone had already investigated this (yes, sometimes I'm lazy).)
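If I do get around to it, sysstat's iostat is probably the easy way to watch this. A minimal sketch, where md0 is assumed to be the software RAID mirror holding the root filesystem (yours may well have a different name):

iostat -x /dev/md0 60

This prints extended per-device IO statistics every 60 seconds; a root filesystem that's almost always idle would show up pretty clearly.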

Sidebar: the caching alternative

In theory Linux now has several ways to add SSDs as caches of HD-based filesystems; there's at least bcache and dm-cache. I've looked at these before and my reaction from then still mostly stands: each of them would require me to migrate my root filesystem in order to set them up. They both do have the advantage that I could use only a small amount of disk space on the SSDs and get a relatively significant benefit.

They also both have the disadvantage that the thought of trying to use either of them for my root filesystem and getting the result to be reliable in the face of problems gives me heartburn (for example, I don't know if Fedora's normal boot stuff supports either of them). I'm pretty sure that I'd be a pioneer in the effort, and I'd rather not go there. Running ZFS on Linux is daring enough for me.

SSDRootDilemma written at 22:36:01

2015-11-07

Why I (still) care about SELinux and its flaws

A perfectly sensible reaction to my series of disgruntlements is to ask why I still care enough to write about it. There's all sorts of ill-considered software out there in the world, and disabling SELinux is simple enough. I don't gripe about Ubuntu's AppArmor, for example (which we disable too). As it happens, there are two major reasons that I continue to care about SELinux.

First, the continued existence and popularity of SELinux drains people's time and attention away from doing other, more usable security work. Linux needs security work of all sorts, including defenses against normal programs being compromised. In fact, the existence and theoretical purity and power of SELinux (and its integration into the kernel and major distributions) serve to block most explorations of more usable but messier solutions. If you propose doing something, especially if you touch user-level programs, I expect that you'll get told 'SELinux already solves that (and better)'.

(If you want an idea of what such solutions might look like, look at the work OpenBSD is doing here with eg the tame()/pledge() system call and other related things.)

Or in short, SELinux is effectively a high-stakes gamble with Linux security. People are betting on what is very close to mathematical security, which would be great if it worked but which instead often ends in the total failure of SELinux's toxic mistake.

Second, SELinux is increasingly being advocated as a default thing for everyone to use as part of hardening Linux, not just as an extra add-on for the paranoid. This is not exactly a new development (it's why SELinux is the default in Red Hat Enterprise Linux and Fedora), but my strong impression is that it's been ramping up these days (more and more people will loudly tell you that you're doing it wrong if you disable SELinux, for example). When SELinux is supposed to be for everyone, well, it affects me more and more; it's increasingly present and increasingly mandatory.

Also, as part of caring about the direction of Linux in general I care about something that is theoretically supposed to be the Linux answer for (user-level) security issues for everyone. If SELinux is Linux's security solution and I think it's a bad idea, every so often my irritation boils over and I write another blog entry here.

(Real, usable security is one of my hot buttons in general, as you may have either noticed or guessed.)

SELinuxWhyICare written at 00:33:02

2015-11-04

SELinux's usability, illustrated once again

I was recently setting up the normal IPSec/IKE daemon stack on a stock Fedora 22 machine in order to reproduce and get a clean kernel stack trace for a kernel panic involving IPSec (which is very relevant to me). I keep around such a stock Fedora install in a virtual machine because it's useful to have a reference that hasn't been particularly mangled by me in order to make it usable; as part of that stock-ness, it has SELinux enabled.

As part of IPSec setup, I needed to set up a host key:

# generate a new host key for this machine
ipsec newhostkey --configdir /etc/ipsec.d --output /etc/ipsec.d/thismach.secrets
# print it out for the config file:
ipsec showhostkey --right

I set up an appropriate /etc/ipsec.d/testconn.conf, uncommented the bit in /etc/ipsec.conf to include things from /etc/ipsec.d (why this isn't standard I don't know), started up IPSec services with 'systemctl start ipsec', and SELinux promptly and cheerfully reported that it had blocked pluto's access to /etc/ipsec.d/thismach.secrets because the file did not have the magic type attribute of (I believe) ipsec_key_file_t.
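For illustration, a minimal testconn.conf is on the order of the following sketch; the addresses here are made up and the key lines come from the 'ipsec showhostkey' output:

conn testconn
    left=192.0.2.10
    leftrsasigkey=0sAQ...[from showhostkey on this machine]
    right=192.0.2.20
    rightrsasigkey=0sAQ...[from showhostkey on the peer]
    authby=rsasig
    auto=start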

Let me be plain here: this is robot logic. SELinux knew exactly what was going on; an IPSec daemon was trying to read keys from the place that IPSec keys are configured to be and are known to be. The key had even been created using official tools. But because the magic SELinux chicken had not been waved over the file, the request was denied and my IPSec connection failed to start. This is not usable, appropriate security; instead this is the kind of horseshit that causes sysadmins to chuck SELinux over the transom.
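(The particular chicken-wave it wanted is, as far as I can tell, something like:

chcon -t ipsec_key_file_t /etc/ipsec.d/thismach.secrets

or 'restorecon -Rv /etc/ipsec.d' to reapply the policy's default contexts to everything there.)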

(Yes, there are high security environments where it's sensible to worry about 'maybe someone hardlinked /etc/shadow into /etc/ipsec.d and is now exfiltrating it through a pluto vulnerability'. Those environments are not typical and the default security setup should not be designed for them, not if you want people to actually use said (default) security setup.)

I don't have any particular opinions about exactly how SELinux should solve this, but it needs to. Robot logic is one of the deep failings of SELinux and people determinedly standing behind it is one of the big things that leads to SELinux's toxic mistake.

(As usual, the other piece of terrible SELinux usability is that I would have had no idea that things were blowing up due to SELinux if I hadn't been running a desktop on the Fedora 22 virtual machine so that SELinux could pop up an alert. You can imagine how enthused this makes me about ever deploying SELinux on servers.)
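On a server, as far as I know, you'd be reduced to remembering to go digging through the audit logs yourself with something like:

ausearch -m avc -ts recent

which is not the first thing that comes to mind when a daemon mysteriously fails to start.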

SELinuxUsability written at 23:50:02

2015-10-29

USB mouse polling rates under Linux

I'm behind the times, so I only recently discovered that USB mice have a polling rate, that this polling rate can often be adjusted under Linux, and that you might want to do so. A good starting point for this is the Arch Linux wiki page on mouse polling rate, but it assumes some basic background that I had to think through.

The advantage of a higher mouse polling rate is not that your mouse moves any faster. Instead, it is that the mouse reports your motion changes sooner (what I think gamers are interested in) and that it reports them more frequently. More frequent reporting usually means more frequent updating, which in turn can lead to smoother, more continuous motion. On the other side, too-slow updates can make mouse motion feel subtly jerky.

(Mouse movement under X is generally already accelerated to some degree once you move the mouse far enough. See at least xset's mouse settings and probably the preferences for your desktop environment if you use one.)

If what you mostly care about is smooth motion, I think that there's not much benefit to a mouse polling rate that is massively above the refresh rate of your display. After all, updating the mouse cursor position 500 times a second is relatively pointless if you only see it 60 times a second. At the same time, a relatively high polling rate has the advantage that it gives X and programs a lot of time to respond to your mouse movement before the next display update, rather than perhaps only getting the position update at the last moment.

The Arch page is a little bit confusing about the normal default Linux mouse polling rate. Based on looking at the kernel source, how it appears to work is that USB mice can tell us their desired polling rate and by default the kernel just accepts that. The normal USB mouse rate is 125 Hertz, so most USB mice are going to tell the kernel to use that; however, I believe some mice can be (or are) set to ask for a higher polling rate, and if they do the kernel will honour that by default.

(Your LCD panel likely refreshes at a nominal 60 Hz rate, so I'd expect that 125 Hz provides plenty of responsiveness headroom; still, increasing the polling rate is unlikely to hurt.)

As covered in the Arch page, it's possible to explicitly set the Linux mouse polling rate and thus increase it from 125 Hertz. This is done by setting a non-zero value for the usbhid module's mousepoll parameter (see the Arch page for what values you want). The Arch page describes an elaborate procedure to change the mousepoll parameter without rebooting, which is fortunately not necessary these days as kernel modules expose their parameters in /sys/module/<NAME>/parameters. To change and test mouse polling rates on the fly, you first write the new mousepoll value with:

echo N >/sys/module/usbhid/parameters/mousepoll

and then unplug and replug your USB mouse (or mice). The unplug and replug bit is necessary because the usbhid module only sets up the mouse polling interval for a specific USB mouse when the mouse is plugged in. Changing mousepoll thus only changes the polling rate for future mice, not currently attached ones.
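If your mouse is plugged in somewhere annoying to reach, I believe you can get the same effect by unbinding and rebinding its interface from the usbhid driver through sysfs (the '1-2:1.0' interface name here is made up; look in /sys/bus/usb/drivers/usbhid for your actual one):

echo -n 1-2:1.0 >/sys/bus/usb/drivers/usbhid/unbind
echo -n 1-2:1.0 >/sys/bus/usb/drivers/usbhid/bind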

(A mousepoll value of 0 means 'use whatever the mouse would like', which is probably mostly going to be '125 Hertz'. Note that this implies that setting an explicit conservative mousepoll value may cause some USB mice to be polled at a lower rate than you'd get if you just left things alone.)
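To make a higher polling rate permanent across reboots, the usual module parameter mechanics should work; something like this in a file in /etc/modprobe.d/ (the file name is arbitrary):

# /etc/modprobe.d/mousepoll.conf
options usbhid mousepoll=2

(A mousepoll of 2 corresponds to 500 Hz, since the value is the polling interval in milliseconds.)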

What you've set Linux's mouse polling rate to is not necessarily the polling rate that you actually get, for various reasons. You can see what the actual achieved rate is using the useful evhz program, which measures the rate based on how fast it receives evdev events. This isn't restricted to just USB mice; it will also report values for PS/2 mice and probably any mouse-like thing that Linux's evdev system supports.

The Arch page also talks about displaying the USB device polling rate, which can be dug out of /sys/kernel/debug/usb/devices. Based on my own testing on Fedora 22, the reported device 'Ivl' value does not have much to do with the mouse polling rate. All of my USB mice report 10ms (which would be only 100 Hz), but evhz disagrees and reports 125 Hz normally for a standard USB mouse and a 500 Hz polling rate if I set mousepoll to 2.

Regardless of what you have mousepoll set to, it's possible for a USB mouse (or at least something claiming to be one) to respond at a slower rate. In other words, Linux's USB mouse polling rate is a maximum, not a minimum. Hopefully such hardware is rare, especially when it comes to real mice.

(I suspect that KVM over IP systems that support virtual USB mice have relatively low maximum mouse poll rates no matter what Linux asks for.)

USBMousePollingRate written at 02:03:29

2015-10-12

Why we hold kernels and other packages on our Ubuntu machines

Apparently, it may be the case that if you do not hold kernels and other packages on your Ubuntu machines, 'apt-get autoremove' may do what I want for limiting installed Ubuntu kernels. Unfortunately this is not an option for us (and we're probably not alone in this).

Our fundamental rule is that for certain sorts of packages, we install or update them only occasionally, under controlled circumstances. For kernels, this is because reboots must be planned (and we don't like divergent kernel versions on different machines). For various other packages like GRUB, it's because their package updates have historically caused problems for our hands-off update mechanisms.

(And for some packages it is because they're so sensitive that we can't risk an update unless we're very sure it's going to work well, and we've been burned before by even theoretically small updates. Our Samba servers do not update their Samba versions unless we explicitly tell them to, for example.)

In RHEL/CentOS, we can do this manually at update time by explicitly excluding those packages, eg 'yum update --exclude "kernel*"', and in fact we routinely do this when doing updates on our RHEL machines. However, as far as I can see apt-get has no such on-the-fly mechanism; instead, you must explicitly hold these packages and then explicitly do something to force them to be updated later. Problems here are compounded by how 'apt-get upgrade' does not update packages that have new dependencies. Since Ubuntu installs new kernels by updating the dependencies of the meta-packages (eg linux-image-generic), we must override this behavior too. Oh, and we only want to do this for whatever we're specifically doing at the time, not all packages that we've held.
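The hold mechanics themselves are simple enough. A sketch of the dance involved, assuming the usual kernel meta-packages (the exact set varies):

# hold kernel updates
apt-mark hold linux-image-generic linux-generic
# later, when we're ready to take a kernel update:
apt-mark unhold linux-image-generic linux-generic
apt-get install linux-image-generic linux-generic
apt-mark hold linux-image-generic linux-generic

Explicitly (re)installing the meta-packages is what forces in their new dependencies, ie the new kernel packages. It's everything around this that's annoying.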

It's possible that the official process we should be following is un-holding just the kernel packages and then doing 'apt-get dist-upgrade', never mind the somewhat scary warnings in the documentation for dist-upgrade. In practice it's unlikely that we're going to switch to a more complex and less controlled update mechanism just to perhaps have 'apt-get autoremove' remove obsolete kernel packages.

(Less controlled? Yes, as 'apt-get dist-upgrade' will upgrade (with new dependencies) any and all packages that are being held back for this reason, not just kernel packages. We have seen such held back packages want to do rather significant violence to the current package sets of systems for whatever inscrutable Ubuntu reasons. It's possible that this is because the new package version 'recommends' a pile of additional things and apt is defaulting to installing them too, which seems to be a problem with 'apt-get install' for us. The whole situation with this irritates me, but that's another rant.)

UbuntuHoldingNecessary written at 00:59:17

2015-10-07

The irritation of all of the Ubuntu kernels you wind up with

Let's start with my tweet:

I really dislike just how difficult Debian and Ubuntu make it to only keep the last N kernels and remove all the rest. What a stupid mess.

(I may be unfairly slamming Debian here, but if so they have their own terrible problem with kernel updates.)

One of the many stupid things about the Ubuntu kernel update process is how if you just use 'apt-get install' to install kernel updates, you'll wind up with a steadily increasing collection of old kernels. This isn't because Debian and Ubuntu care deeply about never removing a good kernel out from underneath you, as Ubuntu will happily overwrite a good kernel with a bad one sometimes (and Debian may be worse here). Instead, as far as I can tell, it is just because APT doesn't support this and no one has fixed it in more than a decade.

This matters for reasons beyond disk space and clutter in your list of installed packages. Dpkg kernel updates are already kind of slow and definitely verbose enough that you can miss important things, and every installed kernel you have adds its own contribution to both the slowness and the verbosity. The fewer installed kernels you have, the faster things update and the higher the chance is that you'll notice any problems.

As they say, but wait, it gets worse. Not only does apt not support limiting how many kernels it keeps around, but Ubuntu (and Debian) don't even ship with an add-on command to remove such surplus kernels for you. This is asinine. Essentially everyone is going to want to do this, it is something that is surprisingly tricky to get right (and easy to get wrong in dangerous ways), and the best that Ubuntu has to offer is Stack Overflow answers full of arcane (and incomplete) command line incantations, people's homegrown scripts, and recommendations of packages with pages of new dependencies on normal systems.

Since cleaning up this mess would be far too much work, our systems totter along with an increasing collection of totally useless and pointless kernels (most of them with serious security holes, since the existence of serious holes is usually what prompts us to upgrade kernels). I rather enjoy when we get to reinstall machines, because it means starting from scratch with a clean and very short list of kernels.

(I've written about this in quieter tones, for example in How Ubuntu and Fedora each do kernel packages. That entry also discusses why it happens this way; see the comments for some additional hair-raising details.)

UbuntuUnlimitedKernels written at 01:58:53

2015-10-05

I don't trust Linux distributions to leave directories alone

In yesterday's entry I said in passing:

My view is that basically every directory that your OS distribution creates is best left alone and unused, and thus should be left on the root filesystem. [...]

In theory there are any number of directories on typical Linux distributions (and typical Unix distributions in general) that should be safe for you to use without disturbance by the OS. There are things like /usr/local, /home, /opt, and yes, some of you are laughing right now. In practice, I've been through enough experiences that I no longer trust Linux distributions to leave any directories they know about alone. Sooner or later someone is going to drop files or subdirectories in there, or change the permissions or SELinux context, or mandate that they must be on the root filesystem because of some requirement, and so on and so forth. Sometimes the guilty party will be the OS itself; sometimes it will be third parties who are packaging things for the OS and decide that /opt or /usr/local or whatever makes a great place to put their stuff.

The practical reality of modern Linux life is that the only directories you can trust the OS not to screw with are directories that the OS has no idea exist, ie ones that you make up and create yourself. If the OS creates it, even if it's empty and explicitly marked 'for local sysadmin use only', using it is dangerous in practice. Sooner or later you're likely to regret it.

(Sometimes you have no choice because a program has been configured to look there or restrict itself to things there.)

Since directory names for local things are generally arbitrary anyways, you should make your life simpler and pick your own new names (I suggest organization-based ones).

The one exception to this is that if you package things in the distribution's native packaging scheme (.debs, RPMs, etc), my strong opinion is that you should default to putting them into the normal system locations even if it's local software. Sometimes this won't be possible (eg if you're packaging a conflicting version of a program), but when it is I think it's going to make your life easier. And as I've found out, there are things that really want to use the system locations.

DistroDirectoryDistrust written at 01:26:37

2015-10-04

There's no point in multiple system filesystems any more

Over the years I've written a number of things about how I think you should partition up your disk or disks for your system filesystems. Three things have been constant in those successive updates: the sizes of everything have kept growing, the amount I trust things like software RAID and LVM has kept increasing, and the number of separate filesystems has kept shrinking. Today I feel like writing down my current views, which are really simple. To wit, I feel there's no point in having more than /, the root filesystem.

It's been some time since a separate /var or /usr made sense or was even supported. I stubbornly clung to a separate /boot for a long time, but I don't see any point to it any more, so I say make your life simpler by putting it in the root filesystem too. As far as size goes, I like to give the root filesystem 80 to 100 GB of disk space so that I have room for crazy things like saving a copy of every RPM I've ever downloaded, but in today's increasingly SSD-based environment you might want to be more parsimonious. I suspect 50 to 60 GB will cover most everyone.

(If you have lots of disk space available for this purpose, make two identical-sized partitions and use the second as a backup root filesystem during major things like OS version upgrades. My experience with this is that having a backup root filesystem is very reassuring.)

I'm still a very strong proponent of mirrored system disks. While I like LVM in general, I wouldn't put / into an LVM volume; I prefer to reserve LVM volumes entirely for my own data, so I can do crazy things like convert to ZFS without affecting the root filesystem (here's my disk layout for ZFS). This means I use good old fashioned software RAID mirrors on actual partitions (GPT partitions these days). Modern Linux installers make this relatively simple to set up.
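If you're setting this up by hand instead of through an installer, the core of it is a single mdadm command. A sketch, with the partition names obviously depending on your disks:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkfs.ext4 /dev/md0

(Plus the usual mdadm.conf and bootloader bookkeeping afterwards.)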

As far as the filesystem type for / goes, use something well tested and solid (and with solid GRUB boot support, since /boot is part of it). Today that means ext3 or ext4. If you want to flirt with Btrfs or ZFS on Linux, well, go for it, but you're probably going to need a separate /boot and have all sorts of annoyances (and you're probably reading this entry mostly for amusement value).

My view is that basically every directory that your OS distribution creates is best left alone and unused, and thus should be left on the root filesystem. I'm willing to put a small amount of things in /opt if they insist, but I don't have my own home directory and data in /home; I have a separate directory hierarchy (and separate filesystem) for my actual home directory. The same thing is true for website files, database storage, and so on; I don't accept the stock defaults of various places in /var.

I'm sufficiently old fashioned that I still make a separate swap partition; these days I use 1 GB or 2 GB as the size, which is enough to keep Linux happy without risking death by swapping. I use a software RAID mirror for this too, because why not. More daring people can swap to a file in the root filesystem, although that may be harder to set up during the initial system install.
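After install, a swap file is only a few commands; a sketch for a 2 GB one:

dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

plus the matching '/swapfile none swap sw 0 0' line in /etc/fstab to make it permanent.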

I have no idea how you want to set up your system filesystem(s) if you want (or have to have) an encrypted root filesystem. Perhaps someday I'll have to worry about that, but not right now.

(I have no particular opinions on what you should do on a laptop where you have a single disk and that's it. On my casual usage work laptop, I think I just made everything a single filesystem on the single disk and shrugged about my home directory being in /home and sharing space with the root filesystem.)

EverythingInRootFS written at 01:24:49

2015-09-24

We're probably going to need new Linux iSCSI target software

When I think ahead to our theoretical 2018 fileserver refresh, one of my thoughts is that we're probably going to need new iSCSI target software. We're currently using IET and while we like it and there is nothing deeply wrong with it, I have to admit that it lacks some moderately important features and the pace of its development is what could politely be called 'quiet'. In fact it's sufficiently quiet that I don't know if IET will be adapted to future Linux kernels, and by 2018 even 'enterprise' long term support distros will likely be using such kernels.

If we're going to change iSCSI target software the obvious choice is the LIO target, which is the current in-kernel implementation and hopefully also the future one (the kernel changed implementations once already). There are other alternatives (the ArchLinux wiki has a decent overview), but none of them seem compelling enough to go outside the standard kernel and thus what Linux distributions will package the tools for and support as (relatively) standard.

(On the flipside, I haven't conducted any sort of deep evaluation of the other options. I wasn't impressed with anything apart from IET in my original evaluation, but that was years ago.)

I've looked into LIO some and I can't say I'm terribly enthused, for two reasons. LIO configuration is rather complicated, and it really wants to be done through a command line tool (an interactive one at that) instead of a configuration file; the latter is a bad flaw that I've written about before. LIO's tool saves the resulting live configuration in JSON file(s), and in theory you can create the file yourself by hand. LIO also has a Python API, rtslib, so another option would be to create our own program to set up the iSCSI target configuration (either once or on boot) from a simpler file format.
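To give a flavour of what such a program's core might look like, here's a minimal sketch. This assumes the rtslib-fb variant of the API, and the IQN, backstore name, and device path are all made up:

from rtslib_fb import (BlockStorageObject, FabricModule, LUN,
                       NetworkPortal, Target, TPG)

# back the LUN with a block device (here a hypothetical ZFS zvol)
so = BlockStorageObject("lun0", dev="/dev/zvol/tank/lun0")

# create an iSCSI target with a single target portal group
target = Target(FabricModule("iscsi"), "iqn.2015-09.com.example:fstest")
tpg = TPG(target, 1)
tpg.enable = True

# export the LUN and listen on the default iSCSI port
LUN(tpg, storage_object=so)
NetworkPortal(tpg, "0.0.0.0", 3260)

A real version would read all of those names from our configuration file and probably set up access controls, but the API itself looks tractable enough.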

At some point I'm going to need to test and experiment with LIO. However I don't know if it's worthwhile to do it just yet as opposed to about two years from now, since a lot can change in that sort of time.

(In a way, worrying about specific software is silly at this point. Things in the open source world can change drastically over two years and anyways there are any number of things that are up in the air about a future fileserver design. I just think about this now because I've wound up thinking that IET is getting long in the tooth and kind of neglected by now, so we're going to have to do something about it sooner or later.)

NewLinuxISCSITargetThoughts written at 03:08:25


