2015-08-26
Why I wind up writing my own (sysadmin) tools
I have a tendency to write my own versions of tools, like bulk IO measurement tools, versions of netcat, and network bandwidth reporters. Historically there have been two official reasons for this and a third unofficial one.
First, when I write a tool myself I know exactly what it's doing and what its output means. This has historically been a problem with other people's tools, including system supplied ones (eg). If I'm going to wind up reading the source of a program just to be sure I really understand what it's telling me, I may not be saving much time over writing my own.
(Also, if I write it myself the tool can generally wind up doing exactly what I want in the way that I want it. Other people's tools may have various sorts of rough edges and little annoyances for me.)
Second, because finding and testing and investigating the existing potential options is generally a pain in the rear. People have written a great many tools for doing disk IO benchmarking, for example, and as an outsider it is all a big mess. I'm honest enough to admit that for almost anything I want, there probably are tools out there that would meet my needs and make me happy; the problem is finding them and sorting out the wheat from the chaff. It's unfortunately easy for it to be less work and less frustration to write my own, or at least to feel like it is or will be.
(The frustration primarily comes from investing time into things that turn out to not be useful despite all of the effort, especially if they have active irritations. And most programs have active irritations somewhere.)
The third, unofficial reason is that programming is fun. When it's going well, it's a pure shot of the 'solving problems' drug that I get only in moderation when I'm doing less glamorous activities like building and testing new systems (and never mind the humdrum daily routine; the problems I solve there are all very small). It's especially fun when I contrast it with the slogging work of searching the Internet for a pile of programs that might perhaps do what I want, getting copies of all of them, testing each out, and throwing most of them away. That's not solving problems, that's shuffling files around.
2015-08-25
One view on practical blockers for IPv6 adoption
I recently wound up reading Russ White's Engineering Lessons, IPv6 Edition (via), which is yet another meditation by a network engineer about why people haven't exactly been adopting IPv6 at a rapid pace. Near the end, I ran across the following:
For those who weren't in the industry those many years ago, there were several drivers behind IPv6 beyond just the need for more address space. [...]
Part of the reason it's taken so long to deploy IPv6, I think, is because it's not just about expanding the address space. IPv6, for various reasons, has tried to address every potential failing ever found in IPv4.
As a sysadmin, my reaction to this is roughly 'oh god yes'. One of the major pain points in adding IPv6 (never mind moving to it) is that so much has to be changed and modified and (re)learned. IPv6 is not just another network address for our servers (and another set of routes); it comes with a whole new collection of services and operational issues and new ways of operating our networks. There is a whole host of uncertainties, from address assignment (both static and dynamic) on upwards. Given that right now IPv6 is merely nice to have, you can guess what this does to IPv6's priority around here.
Many of these new things exist primarily because the IPv6 people decided to solve all of their problems with IPv4 at once. I think there's an argument that this was always likely to be a mistake, but beyond that it's certainly made everyone's life more complicated. I don't know for sure that IPv6 adoption would be further along if IPv6 was mostly some enlarged address fields, but I rather suspect that it would be. Certainly I would be happier to be experimenting with it if that was the case.
What I can boil this down to is the unsurprising news that large scale, large scope changes are hard. They require a lot of work and time, they are difficult for many people to test, and they are unusually risky if something goes wrong. And in a world of fragile complexity, their complexity and complex interactions with your existing environment are not exactly confidence boosters. There are a lot of dark and surprising corners where nasty things may be waiting for you. Why go there until you absolutely have to?
(All of this applies to existing IPv4 environments. If you're building something up from scratch, well, going dual stack from the start strikes me as a reasonably good idea even if you're probably going to wind up moving slower than you might otherwise. But green field network development is not the environment I live in; it's rather the reverse.)
2015-08-06
Two factor authentication and emergency access to systems
One of the things that makes me hesitant to wholeheartedly embrace two factor authentication for system administration is how to deal with emergency system access in an exceptional situation. A lot of 2FA systems seem to involve daemons and central services and other things with a lot of moving parts that are potentially breakable, especially in extreme situations like partial network failures or machine failures. If you require all of this to be working before you can get in as a sysadmin or as root, you may have serious problems in emergencies. But if you create some bypass for 2FA, you're at least weakening and perhaps defeating the protections 2FA is supposed to be giving you; an attacker just needs to compromise your back door access instead of your regular access.
Some organizations guard sufficiently sensitive information and important systems that the answer is 'in an emergency, we go to the machine room and boot from recovery media for manual access'. This does not particularly describe us, for various reasons; we really would like to be more resilient and recoverable than that (and we are right now, since we're not using 2FA or LDAP or the like).
It looks like some 2FA systems can be configured to use purely local resources on the machine, which at least gets you out of relying on some central server to be up and talking to the machine. Relying on the local 2FA software to be configured okay and working right is probably no worse than things like relying on the overall PAM configuration to be working; you can blow your own foot off either way and at least you can probably test all the behavior in advance of a crisis.
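(As a concrete illustration of what 'purely local resources' can mean here: TOTP-style code checking needs nothing beyond a shared secret stored on the machine and the local clock. The following is a minimal Python sketch of RFC 6238 TOTP verification; the base32 secret is just the usual documentation example, and a real deployment would of course use an existing, tested PAM module rather than hand-rolled code like this.)

    #!/usr/bin/env python3
    # Sketch: verifying a TOTP (RFC 6238) code using only local resources --
    # a shared secret on disk and the system clock, no daemon or central server.
    # Illustration only, not any particular 2FA package's implementation.
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp_code(secret_b32, at_time=None, step=30, digits=6):
        """Compute the TOTP code for a base32 secret at a given time."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int((time.time() if at_time is None else at_time) // step)
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(value % 10 ** digits).zfill(digits)

    def verify_code(secret_b32, candidate, window=1, step=30):
        """Accept codes from adjacent time steps to tolerate modest clock drift."""
        now = time.time()
        return any(hmac.compare_digest(totp_code(secret_b32, now + i * step), candidate)
                   for i in range(-window, window + 1))

    if __name__ == "__main__":
        # 'JBSWY3DPEHPK3PXP' is the standard example secret, not a real credential.
        print(totp_code("JBSWY3DPEHPK3PXP"))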
The other side of emergency system access is network access in situations where you yourself do not have both your 2FA and a system that's capable of using it. This won't be an issue in a place where all sysadmins are issued laptops that they carry around, but again that's not our situation. Closely related to this is the issue of indirect system access, where instead of logging in directly from your desktop to the fileserver you want to log in to another machine and copy something from it to the fileserver. Your desktop has access to your good 2FA device but the other machine doesn't, no more than it has access to your SSH keypairs.
(I suspect that the answer here is that you set the system up so you can authenticate either with a 2FA protected SSH keypair or with your password plus a 2FA code. Then your desktop uses the 2FA protected keypair and on other machines you fall back to password plus 2FA code. Maybe you have multiple password plus code setups, one using a 2FA fob and one using a SMS message to your cellphone.)
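(To make that slightly more concrete, here is a hedged sketch of what the either/or arrangement might look like using OpenSSH's AuthenticationMethods option, available since OpenSSH 6.2. It assumes the PAM keyboard-interactive stack is what prompts for your password plus the 2FA code; I haven't actually set this up.)

    # sshd_config sketch: accept a public key on its own (the key itself
    # being 2FA-protected on the client side), or otherwise fall back to
    # PAM keyboard-interactive, which can ask for password plus 2FA code.
    AuthenticationMethods publickey keyboard-interactive:pam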
2015-08-05
What I want out of two factor authentication for my own use
Locally, we don't currently use two factor authentication for anything (for reasons that in large part boil down to 'no one has money to fund it'). I don't know if we can change that, but I'd like to at least move in that direction, and one part of any such move is trying to think about what I want out of a two factor authentication system. Since this is a broad area, I'm focusing on my own use today.
Right off the bat, any two factor authentication system we deploy has to be capable of being used for only certain accounts (and only for certain services, like SSH logins). The odds that we'll ever be able to deploy something universally are essentially nil; to put it crudely, no one is going to pay for two-factor authenticators for graduate students (and we can't assume that all grad students will have smartphones). Similarly, it's unlikely that we'll ever be able to get all our services to be 2FA capable.
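(For SSH logins specifically, one way this per-account scoping could plausibly work is with an OpenSSH Match block, so that only some group is required to do 2FA. A hedged sketch, with an entirely hypothetical group name:)

    # sshd_config sketch: demand key plus PAM-based 2FA only for members of
    # a hypothetical 'sysadmins' group; everyone else authenticates as usual.
    Match Group sysadmins
        AuthenticationMethods publickey,keyboard-interactive:pam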
For my own use, what I care about is SSH access and what I want out of 2FA is for it to be as convenient as SSH with an unlocked keypair (while being more secure). I log in to and out of enough of our machines every day that having to enter a 2FA password or challenge every time would be really irritating. At the same time I do want to have to unlock the 2FA system somehow, since I don't want to reduce this down to 'has possession of a magic dongle'. It'd be ideal if some SSH authentication could be configured to require explicit 2FA with a password I have to type as well as whatever 'proof of object' there is (ideally more or less automated).
(Oh, and all of this needs to work from Linux clients to Linux and OmniOS servers, and ideally OpenBSD servers as well.)
Based on some Internet searching today, it seems the way many people are doing this is with a Yubikey Neo from Yubico. They're even cheap enough that we might be able to buy at least one to experiment with (it helps that they don't require expensive software licenses to go with them). If there are other reasonably popular alternatives, they don't seem to come up in my Internet searches.
(A smartphone based solution is not a good one for me because I don't have a smartphone.)