Wandering Thoughts

2015-03-03

The latest xterm versions mangle $SHELL in annoying ways

As of patch #301 (and with changes since then), the canonical version of xterm has some unfortunate behavior changes surrounding the $SHELL environment variable and how xterm interacts with it. The full details are in the xterm manpage in the OPTIONS section, but the summary is that xterm now clears or changes $SHELL if the $SHELL value is not in /etc/shells, and sometimes even if it is. As far as I can tell, the decision tree goes like this:

  1. if xterm is (explicitly) running something that is in /etc/shells (as 'xterm /some/thing', not 'xterm -e /some/thing'), $SHELL will be rewritten to that thing.

  2. if xterm is running anything (including running $SHELL itself via being invoked as just 'xterm') and $SHELL is not in /etc/shells but your login shell is, $SHELL will be reset to your login shell.

  3. otherwise $SHELL will be removed from the environment, resulting in a shell environment with $SHELL unset. This happens even if you run plain 'xterm' and so xterm is running $SHELL.
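
Restated as shell-ish pseudocode (this is just my reading of the rules above, not xterm's actual code; 'in_shells' is a made-up stand-in for "listed in /etc/shells"):

    if [ -n "$explicit_command" ] && in_shells "$explicit_command"; then
        SHELL="$explicit_command"       # case 1: 'xterm /some/thing'
    elif ! in_shells "$SHELL" && in_shells "$login_shell"; then
        SHELL="$login_shell"            # case 2: reset to your login shell
    elif ! in_shells "$SHELL"; then
        unset SHELL                     # case 3: remove $SHELL entirely
    fi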

It is difficult for me to summarize concisely how wrong this is and how many ways it can cause problems. For a start, this is a misuse of /etc/shells, per my entry on what it is and isn't; /etc/shells is in no way a complete list of all of the shells (or all of the good shells) that are in use on the system. You cannot validate the contents of $SHELL against /etc/shells because that is not what /etc/shells is there for.

This xterm change causes significant problems for anyone whose shell is set to something that is not in /etc/shells, anyone using an alternate personal shell (which is not in /etc/shells for obvious reasons), any program that assumes $SHELL is always set (historically a safe assumption), and any environment that sets $SHELL to something non-standard, such as a captive or special purpose 'shell', and assumes it will stay that way.

(Not all versions of chsh restrict you to what's in /etc/shells, for that matter; some will let you set other things if you really ask them to.)

If you fall into one or more of these categories and you use xterm, you're going to need to change your environment at some point. Unfortunately it seems unlikely that this change will be reverted, so if your version of Unix updates xterm at all you're going to have it sooner or later (so far only a few Linux distributions are recent enough to have it).

PS: Perhaps this should be my cue to switch to urxvt. However, my almost-default configuration of it still differs just enough from xterm to be irritating, although maybe I could fix that with enough customization work. For example, I really want its double-click selection behavior to exactly match xterm's, because that's what my reflexes expect and demand by now. See also.

PPS: Yes, I do get quite irritated at abrupt incompatible changes in the behavior of long-standing Unix programs, at least when they affect me.

unix/XTermSHELLMangling written at 00:08:20

2015-03-02

My view of the difference between 'pets' and 'cattle'

A few weeks ago I wrote about how all of our important machines are pets. When I did that I did not strongly define how I view the difference between pets and cattle, partly because I thought it was obvious. Subsequent commentary in various places showed me that I was wrong about this, so now I'm going to nail things down.

To me the core distinction is not in whether you hand-build machines or have them automatically configured. Obviously when you have a large herd of cattle you cannot hand-build them, but equally obviously the current best practice is to use automated setups even for one-off machines and in small environments. Instead the real distinction is how much you care about each individual machine. In the cattle approach, any individual machine is more or less expendable. Does it have problems? Your default answer is to shoot it and start a new one (which your build automation and scaling systems should make easy). In the pet approach each individual machine is precious; if it has problems you attempt to nurse it back to health, just as you would with a loved pet, and building a new one is only a last resort even if your automation means that you can do this rapidly.

If you don't have build automation and so on, replacing any machine is time consuming, so you wind up with pets by default. But even if you do have fast automated builds, you can still have pets because of things like machines having local state of some sort. Sure, you have backups and so on of that state, but you resort to hand care because restoring a machine to full service takes longer than a plain rebuild that just gets the software up.

(This view of pets versus cattle is supported by, eg, the discussion here. The author of that email clearly sees the distinction not in how machines are created but in significant part in how machines with problems are treated. If machines are expendable, you have cattle.)

It's my feeling that there are any number of situations where you will naturally wind up with a pet model unless you're operating at a very big scale, but that's another entry.

sysadmin/PetsVsCattleDifference written at 00:09:17

2015-02-28

Sometimes why we have singleton machines is that failover is hard

One of our single points of failure around here is that we have any number of singleton machines that provide important service, for example DHCP for some of our most important user networks. We build such machines with amenities like mirrored system disks and we can put together a new instance in an hour or so (most of which just goes to copying things to the local disk), but that still means some amount of downtime in the event of a total failure. So why don't we build redundant systems for these things?

One reason is that there are a lot of services where failover and what I'll call 'cohabitation' are not easy. On the really easy side is something like caching DNS servers; it's easy to have two on the network at once and most clients can be configured to talk to both of them. If the first one goes down there will be some amount of inconvenience, but most everyone will wind up talking to the second one without anyone having to do anything. On the difficult side is something like a DHCP server with continually updated DHCP registration. You can't really have two active DHCP servers on the network at once, plus the backup one needs to be continually updated from the master. Switching from one DHCP server to the other requires doing something active, either by hand or through automation (and automation has hazards, like accidental or incomplete failover).

(In the specific case of DHCP you can make this easier with more automation, but then you have custom automation. Other services, like IMAP, are much less tractable for various reasons, although in some ways they're very easy if you're willing to tell users 'in an emergency change the IMAP server name to imap2.cs'.)
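
To illustrate the really easy end of the spectrum, client-side redundancy for caching DNS servers is just a matter of listing both of them (the addresses here are made up):

    # /etc/resolv.conf on a client; the resolver falls over to the
    # second server if the first one stops answering
    nameserver 192.0.2.10
    nameserver 192.0.2.11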

Of course this is kind of an excuse. Having a prebuilt second server for many of these things would speed up bringing the service back if the worst came to the worst, even if it took manual intervention. But there's a tradeoff here; prebuilding second servers would require more servers and at least partially complicate how we administer things. It's simpler if we don't wrestle with this, and so far our servers have been reliable enough that I can't remember any failures.

(This reliability is important. Building a second server is in a sense a gamble; you're investing up-front effort in the hopes that it will pay off in the future. If there is no payoff because you never need the second server, your effort turns into pure overhead and you may wind up feeling stupid.)

Another part of this is that I think we simply haven't considered building second servers for most of these roles; we've never sat down to consider the pros and cons, to evaluate how many extra servers it would take, to figure out how critical some of these pieces of infrastructure really are, and so on. Some of our passive decisions here were undoubtedly formed at a time when our networks were used rather differently than they are now.

(Eg, it used to be the case that many fewer people brought in their own devices than today; the natural result of this is that a working 'laptop' network is now much more important than before. Similar things probably apply to our wireless network infrastructure, although somewhat less so since users have alternatives in an emergency (such as the campus-wide wireless network).)

sysadmin/SingletonFailoverProblem written at 23:40:14

2015-02-27

Email from generic word domains is usually advance fee fraud spam

One of the patterns I've observed in the email sent to my sinkhole SMTP server is what I'll call the 'generic word domain' one. Pretty much any email that is from an address at any generic word domain (such as 'accountant.com', 'client.com', 'online.com', or 'lawyer.com') is an advance fee fraud spam. It isn't sent from or associated with the actual servers involved in the domain (if there's anything more than a parking web page full of ads); it's just that advance fee fraud spammers seem to really like using those domains as their MAIL FROM addresses and often (although not always) as the 'From:' in their message.

Advance fee fraud spammers use other addresses, of course, and I haven't done enough of a study to see if my collection of them prefers generic nouns, other addresses (eg various free email providers), or just whatever address is attached to the account or email server they're exploiting to send out their spam. I was going to say that I'd seen only a tiny bit of phish spam that used this sort of domain name, but it turns out that a recent cluster of phish spam follows this pattern (using addresses like 'suspension@failure.com', 'product@client.com', and 'nfsid@nice.com').

I assume that advance fee fraud spammers are doing this to make their spam sound more official and real, just as they like to borrow the domains of things associated with the particular variant of the scam they're using (eg a spam from someone who claims to be a UN staff member may well be sent from a UN-related domain, or at least from something that sounds like it). I expect that the owners of most of these 'generic word' domains are just using them to collect ad revenues, not email, and so don't particularly care about the email being sent 'from' them.

(Although I did discover while researching this that 'nice.com' is a real company that may even send email on occasion, rather to my surprise. I suspect that they bought their domain name from the original squatter.)

(This elaborates on a tweet of mine, and is something that I've been noticing for many years.)

spam/GenericWordDomainSpam written at 23:16:08

What limits how fast we can install machines

Every so often I read about people talking about how fast they can get new machines installed and operational, generally in the context of how some system management framework or another accelerates the whole process. This has always kind of amused me, not because our install process is particularly fast but instead because of why it's not so fast:

The limit on how fast we install machines is how fast they can unpack packages to the local disk.

That's what takes almost all of the time: fetching (from a local mirror or the install media) and then unpacking a variegated pile of Ubuntu packages. A good part of this is the read speed of the install media, some of it is write speed to the system's disks, and some of it is all of the fiddling around that dpkg does in the process of installing packages, running postinstall scripts, and so on. The same thing is true of installing CentOS machines, OmniOS machines, and so on; almost all of the time is in the system installer and packaging system. What framework we wrap around this doesn't matter, because we spend almost no time in said framework or doing things by hand.

The immediate corollary to this is that the only way to make any of our installs go much faster would be to do less work, ranging from installing fewer packages to drastic approaches where we reduce our 'package installs' towards 'unpack a tarball' (which would minimize package manager overhead). There are probably ways to approach this, but again they have relatively little to do with what system install framework we use.

(I think part of the slowness is simply package manager overhead instead of raw disk IO speed limits. But this is inescapable unless we somehow jettison the package manager entirely.)
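
If you wanted to check where the time goes, one rough way would be to compare raw extraction of a package against a full dpkg install of the same package (the file name here is just a placeholder):

    time dpkg-deb -x some-package.deb /tmp/unpack-test   # raw unpack to the local disk
    time dpkg -i some-package.deb                        # unpack plus maintainer scripts and dpkg bookkeeping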

Sidebar: an illustration of how media speeds matter

Over time I've observed that both installs in my testing virtual machines and installs using the virtual DVDs provided by many KVM over IP management processors are clearly faster than installs done from real physical DVDs plugged into the machine. I've always assumed that this is because reading a DVD image from my local disk is faster than doing it from a real DVD drive (even including any KVM over IP virtual device network overhead).

sysadmin/InstallSpeedConstraint written at 01:24:39

2015-02-25

My current issues with systemd's networkd in Fedora 21

On the whole I'm happy with my switch to systemd-networkd, which I made for reasons covered here; my networking works and my workstation boots faster. But right now there are some downsides and limitations to networkd, and in the interests of equal time for the not so great bits I feel like running them down today. I covered some initial issues in my detailed setup entry; the largest one is that there is no syntax checker for the networkd configuration files and networkd itself doesn't report anything to the console if there are problems. Beyond that we get into a collection of operational issues.

What I consider the largest issue with networkd right now is that it's a daemon (as opposed to something that runs once and stops) but there is no documented way of interacting with it while it's running. There are two or three sides to this: information, temporary manipulation, and large changes. On the information front, networkd exposes no good way to introspect its full running state, including what network devices it's doing what to, or to wait for it to complete certain operations. On the temporary manipulation front, there's no way I know of to tell networkd to temporarily take down something and then later bring it back (the equivalent of ifdown and ifup). Perhaps you're supposed to do those with manual commands outside of networkd. Finally, on more permanent changes, if you add or remove or modify a configuration file in /etc/systemd/network and want networkd to notice, well, I don't know how you do that. Perhaps you restart networkd; perhaps you shut networkd down, modify things, and restart it; perhaps you reboot your machine. Perhaps networkd notices some changes on its own.

(Okay, it turns out that there's a networkctl command that queries some information from networkd, although it's not actually documented in the Fedora 21 version of systemd. This still doesn't allow you to poke networkd to do various operations.)
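
To make the first two fronts concrete, here is roughly what you can do from outside networkd today (the interface name is made up, and this is my reading of the tools, not anything the networkd documentation promises):

    networkctl                   # list links and the state networkd has them in
    networkctl status eno1       # per-link details for one interface
    ip link set dev eno1 down    # a manual stand-in for 'ifdown', done behind networkd's back
    ip link set dev eno1 up      # and the corresponding 'ifup'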

This points to a broader issue: there's a lot about networkd that's awfully underdocumented. I should not have to wonder about how to get networkd to notice configuration file updates; the documentation should tell me one way or another. As I write this the current systemd 219 systemd-networkd manpage is a marvel of saying very little, and there are also omissions and a lack of clarity in the manpages for the actual configuration files. All told, networkd's documentation is not up to the generally good systemd standards.

The next issue is that networkd has forgotten everything that systemd learned about the difference between present configuration files and active configuration files. To networkd those are one and the same; if you have a file in /etc/systemd/network, it is live. Want it not to be live? Better move it out of the directory (or edit it, although there is no explicit 'this is disabled' option you can set). Want to override something in /usr/lib/systemd/network? I'm honestly not sure how you'd do that short of removing it or editing it. This is an unfortunate step backwards.

(It's also a problem in some situations where you have multiple configurations for a particular port that you want to swap between. In Fedora's static configuration world you can have multiple ifcfg-* files, all with ONBOOT=no, and then ifup and ifdown them as you need them; there is no networkd equivalent.)
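
For reference, the Fedora static-configuration pattern I mean looks something like this (file and device names are made up):

    # /etc/sysconfig/network-scripts/ifcfg-em1-home and ifcfg-em1-work
    # both have DEVICE=em1 and ONBOOT=no, so neither comes up at boot;
    # you swap between them by hand:
    ifdown em1-home
    ifup em1-work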

I'm not going to count networkd's lack of general support for 'wait for specific thing <X> to happen' as an issue. But it certainly would be nice if systemd-networkd-wait-online was more generic and so could be more easily reused for various things.

I do think (as mentioned) that some of networkd's device and link configuration is unnecessarily tedious and repetitive. I see why it happened, but it's the easy way instead of the best way. I hope that it can be improved and I think that it can be. In theory I think you could go as far as optionally merging .link files with .network files to make many cases much simpler, as the sections in the two sorts of files today basically don't clash with each other.

In general I certainly hope that all of these issues will get better over time, although some of them will inevitably make networkd more complicated. Systemd's network configuration support is relatively young and I'm willing to accept some rough edges under the circumstances. I even sort of accept that networkd's priority right now probably needs to be supporting more types of networking instead of improving the administration experience, even if it doesn't make me entirely happy (but I'm biased, as my needs are already met there).

(To emphasize, my networkd issues are as of the state of networkd in Fedora 21, which has systemd 216, with a little bit of peeking at the latest systemd 219 documentation. In a year the situation may look a lot different, and I sure hope it does.)

linux/SystemdNetworkdFlaws written at 23:04:06

My Linux container temptation: running other Linuxes

We use a very important piece of (commercial) software that is only supported on Ubuntu 10.04 and RHEL/CentOS 6, not anything later (and it definitely doesn't work on Ubuntu 12.04, we've tried that). It's currently on a 10.04 machine but 10.04 is going to go out of support quite soon. The obvious alternative is to build a RHEL 6 machine, except I don't really like RHEL 6 and it would be our sole RHEL 6 host (well, CentOS 6 host, same thing). All of this has led me to a temptation, namely Linux containers. Specifically, using Linux containers to run one Linux as the host operating system (such as Ubuntu 14.04) while providing a different Linux to this software.

(In theory Linux containers are sort of overkill and you could do most or all of what we need in a chroot install of CentOS 6. In practice it's probably easier and surer to set up an actual container.)

Note that I specifically don't want something like Docker, because the Docker model of application containers doesn't fit how the software natively works; it expects an environment with cron and multiple processes and persistent log files it writes locally and so on and so forth. I just want to provide the program with the CentOS 6 environment it needs to not crash without having to install or actually administer a CentOS 6 machine more than a tiny bit.

Ubuntu 14.04 has explicit support for LXC with documentation and appears to support CentOS containers, so that's the obvious way to go for this. It's certainly a tempting idea; I could play with some interesting new technology while getting out of dealing with a Linux that I don't like.
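
As a sketch of what this would look like with LXC's 'download' template (untested by me, and assuming a CentOS 6 image is available for it; the container name is made up):

    lxc-create -t download -n centos6 -- --dist centos --release 6 --arch amd64
    lxc-start -n centos6 -d      # start the container in the background
    lxc-attach -n centos6        # get a shell inside it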

On the other hand, is it a good idea? This is certainly a lot of work to go to in order to avoid most of running a CentOS 6 machine (I think we'd still need to watch for eg CentOS glibc security updates and apply them). Unless we make more use of containers later, it would also leave us with a unique and peculiar one-off system that'll require special steps to administer. And virtualization has failed here before.

(I'd feel more enthused about this if I thought we had additional good uses for containers, but I don't see any other ones right now.)

linux/ContainerOtherLinuxTemptation written at 01:39:40

2015-02-24

How we do and document machine builds

I've written before about our general Ubuntu install system and I've mentioned before that we have documented build procedures but we don't really automate them. But I've never discussed how we do reproducible builds and so on. Basically we do them by hand, but we do them systematically.

Our Ubuntu login and compute servers are essentially entirely built through our standard install system. For everything else, the first step is a base install with the same system. As part of this base install we make some initial choices, like what sort of NFS mounts this machine will have (all of them, only our central administrative filesystem, etc).

After the base install we have a set of documented additional steps; almost all of these steps are either installing additional packages or copying configuration files from that central filesystem. We try to make these steps basically cut and paste, often with the literal commands to run interlaced with an explanation of what they do. An example is:

* install our Dovecot config files:
     cd /etc/dovecot/conf.d/
     rsync -a /cs/site/machines/aviary/etc/dovecot/conf.d/*.conf .

Typically we do all of this over an SSH connection, so we are literally cutting and pasting from the setup documentation to the machine.

(In theory we have a system for automatically installing additional Ubuntu packages only on specific systems. In practice there are all sorts of reasons that this has wound up relatively disused; for example it's tied to the hostname of what's being installed and we often install new versions of a machine under a different hostname. Since machines rarely have that many additional packages installed, we've moved away from preconfigured packages in favour of explicitly saying 'install these packages'.)

We aren't neurotic about doing everything with cut and paste; sometimes it's easier to describe an edit to a configuration file in prose rather than to try to write commands to make it automatically (especially since those commands are usually not simple). There can also be steps like 'recover the DHCP files from backups or copy them from the machine you're migrating from', which require a bit of hand attention and decisions based on the specific situation you're in.

(This setup documentation is also a good place to discuss general issues with the machine, even if it's not strictly build instructions.)

When we build non-Ubuntu machines the build instructions usually follow a very similar form: we start with 'do a standard base install of <OS>' and then we document the specific customizations for the machine or type of machine; this is what we do for our OpenBSD firewalls and our CentOS based iSCSI backends. Setup of our OmniOS fileservers is sufficiently complicated and picky that a bunch of it is delegated to a couple of scripts. There's still a fair number of by-hand commands, though.

In theory we could turn any continuous run of cut and paste commands into a shell script; for most machines this would probably cover at least 90% of the install. Despite what I've written in the past, doing so would have various modest advantages; for example, it would make sure that we would never skip a step by accident. I don't have a simple reason for why we don't do it except 'it's never seemed like that much of an issue', given that we build and rebuild this sort of machine very infrequently (generally we build them once every Ubuntu version or every other Ubuntu version, as our servers generally haven't failed).

(I think part of the issue is that it would be a lot of work to get a completely hands-off install for a number of machines, per my old entry on this. Many machines have one or two little bits that aren't just running cut & paste commands, which means that a simple script can't cover all of the install.)
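
As an illustration, the Dovecot step from earlier in this entry would turn into something like this (a sketch, not a script we actually use):

    #!/bin/sh
    # 'set -e' makes a failing step stop the build instead of being silently skipped
    set -e
    cd /etc/dovecot/conf.d/
    rsync -a /cs/site/machines/aviary/etc/dovecot/conf.d/*.conf .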

sysadmin/OurBuildProcedures written at 02:19:52

2015-02-23

In shell programming, I should be more willing to write custom tools

One of the very strong temptations in Unix shell programming is to use and abuse existing programs in order to get things done, rather than going to the hassle of writing your own custom tool to do just what you want. I don't want to say that this is wrong, exactly, but it does have its limits; in a variant of the general shell programming Turing tar pit, you can spend a lot of time banging your head against those limits or you can just write something that is specific to your problem and so does what you want. I have a bias against writing my own custom tools, for various reasons, but this bias is probably too strong.

All of that sounds really abstract, so let me get concrete about the case that sparked this thought. I have a shell script that decides what to do with URLs that I click on in my Twitter client, which is not as simple as 'hand them to my browser' for various reasons. As part of this script I want to reach through the HTTP redirections imposed by the various levels of URL shorteners that people use on Twitter.

If you want to get HTTP redirections on a generic Unix system with existing tools, the best way I know of to do this is to abuse curl along with some other things:

curl -siI "$URL" | grep -i '^location:' | awk '{print $2}' | tr -d '\r'

Put plainly, this is a hack. We aren't actually getting the redirection as such; we're getting curl to make a HEAD request whose reply should contain only headers, dumping those headers, and then trying to pick out the HTTP redirection header. We aren't verifying that we actually got an HTTP redirect status code, I think the server could do wacky things with the Location: header as well, and we certainly aren't verifying that the server only gave us headers. Bits of this incantation evolved over time as I ran into its limitations; both the case-independent grep and the entire tr were later additions to cope with unusual servers. The final nail here is that curl on Fedora 21 has problems talking to CloudFlare HTTPS sites, which affects some specialized URL shorteners I want to strip redirections from.

(You might think that servers will never include content bodies with HEAD replies, but from personal experience I can say that a very similar mistake is quite easy to make in a custom framework.)

The right solution here is to stop torturing curl and to get or write a specialized tool to do the job I want. This tool would specifically check that we got a HTTP redirection and then output only the target URL from the redirect. Any language with a modern HTTP framework should make this easy and fast to write; I'd probably use Go just because.

(In theory I could also use a more focused 'make HTTP request and extract specific header <X>' tool to do this job. I don't know if any exist.)
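
In my URL handling script, using the specialized tool would look something like this ('resolve-redirect' is a made-up name for the hypothetical tool, which prints the Location: target of a genuine 3xx response and exits non-zero otherwise):

    # replaces the curl pipeline; fall back to the original URL if
    # there was no redirection
    if target=$(resolve-redirect "$URL"); then
        URL="$target"
    fi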

Why didn't I write a custom tool when I started, or at least when I started running into issues with curl? Because each time it seemed like less work to use existing tools and hack things up a bit instead of going all the way to writing my own. That's one of the temptations of a Turing tar pit; every step into the tar can feel small and oh so reasonable, and only at the end do you realize that you're well and truly mired.

(Yes, there are drawbacks to writing custom tools instead of bending standard ones to your needs. That's something for another entry, though.)

PS: writing custom tools that do exactly what you want and what your script needs also has the side effect of making your scripts clearer, because there is less code that's just there to wrap and manipulate the core standard tool. Three of the four commands in that 'get a redirection' pipeline are just there to fix up curl's output, after all.

programming/WriteCustomToolsForScripts written at 01:52:07

2015-02-22

Unsurprisingly, random SMTP servers do get open relay probes

One of the things I do with my sinkhole SMTP server is run a copy of it on my home machine. Unlike my office workstation, my home machine has never been an active mail machine; it has nothing pointing to it and no history of various (pseudo)email addresses that attract spam. Under normal circumstances there should be absolutely no one with any reason to connect to it.

Indeed, it doesn't get attempts to send me any email (spammers might plausibly try, say, postmaster@<machine>). What it does get is a certain amount of open relay probes. Originally these probes were sent with outside MAIL FROM:s (and outside RCPT TOs, obviously), but lately they've been forged to come from various addresses at the machine's overall domain.

(What's actually pretty interesting about this is that the overall domain isn't valid for email; it has neither an A nor an MX entry, and never has. The spammers are just assuming that, eg, 'support@<domain>' is a valid address and using it as the MAIL FROM.)

It used to be that the relay probes made one or two attempts and then stopped. The recent run of relay probes has dumped a whole series of emails on my machine all at once, varying at most the MAIL FROM address; I assume the software is trying to see if some will go through where others fail. At the moment GMail addresses appear to be the popular collection point for results. The Subject lines of recent relay attempts clearly contain tracing information and suggest that the software involved is normally used against servers that require SMTP AUTH, as it seems to include passwords in the Subject: information.

The exact details and mechanisms have changed from earlier attempts and will undoubtedly change again in the future. What's really interesting is two things: people really do scan more or less random addresses in an attempt to find open SMTP relays, and when they find something they don't immediately start trying to shovel spam through it but instead attempt to verify that it actually is open.

(Some days I'm tempted to manually 'relay' one of these messages to its collection point just to see if there would be a future attempt to spam through my machine. But so far that's far too much work and probably a certain amount of risk.)

spam/SMTPServersGetRelayProbes written at 03:04:23
