Wandering Thoughts archives

2024-04-28

How I (used to) handle keeping track of how I configured software

Once upon a time, back a reasonable while ago, I used to routinely configure (in the './configure' sense) and build a fair amount of software myself, software that periodically got updates and so needed me to rebuild it. If you've ever done this, you know that one of the annoying things about this process is keeping track of just what configuration options you built the software with, so you can re-run the configuration process as necessary (which may be on new releases of the software, but also when you do things like upgrade your system to a new version of your OS). Since I'm that kind of person, I naturally built a system to handle this for me.

How the system worked was that the actual configuration for each program or package was done by a little shell script snippet that I stored in a directory under '$HOME/lib'. Generally the file name of the snippet was the base name of the source directory I would be building in, so for example 'fvwm-cvs'. Also in this directory was a 'MAPPINGS' file that mapped from full or partial paths of the source directory to the snippet to use for that particular thing. To actually configure a program, I ran a script, inventively called 'doconfig'. Doconfig searched the MAPPINGS file for, well, let me just quote from comments in the script:

Algorithm: we have a file called MAPPINGS.
We search for first the full path of the current directory and then it with successive things sawn off the front; if we get a match, we use the filename named.
Otherwise, we try to use the basename of the directory as a file. Otherwise we error out.
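
Purely as an illustration of the idea (this is a sketch, not the real doconfig; the '$HOME/lib/doconfig' directory name and the '<path> <snippet>' format of MAPPINGS lines are assumptions made up for the example), the core lookup could be done with something like:

#!/bin/sh
# A hypothetical doconfig-like lookup, not the real script.
cfgdir="$HOME/lib/doconfig"
here="$(pwd)"

# Try the full path of the current directory, then the path with
# successive leading components sawn off the front.
path="$here"
while [ -n "$path" ]; do
    snippet="$(awk -v p="$path" '$1 == p { print $2; exit }' "$cfgdir/MAPPINGS")"
    if [ -n "$snippet" ]; then
        exec sh "$cfgdir/$snippet"
    fi
    case "$path" in
        */*) path="${path#*/}" ;;
        *)   path="" ;;
    esac
done

# Otherwise, fall back to a snippet named after the directory itself.
base="$(basename "$here")"
if [ -f "$cfgdir/$base" ]; then
    exec sh "$cfgdir/$base"
fi
echo "doconfig: no configuration snippet found for $here" 1>&2
exit 1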

There's nothing particularly special about my script and my system for keeping track of how I built software. There probably are tons of versions and variations of it that people have created for themselves over the years. This is just the sort of thing you want to do when you get tired of trying to re-read 'config.log' files or whatever, and realize that you forgot how you built the software the last time around, and so on.

(Having written this up I've realized that I should still be using it, because these days I'm building or re-building a number of things and I've slid back to the old silly ways of trying to do it all by hand.)

PS: At work we don't have any particular system for keeping track of software build instructions. Generally, if we have to build something from source, we put the relevant command lines and other information in our build instructions.

HowIHandleSoftwareBuildConfigs written at 23:26:50;

2024-04-27

Autoconf and configure features that people find valuable

In the wake of the XZ Utils backdoor, which involved GNU Autoconf, it has been popular to suggest that Autoconf should go away. Some of the people suggesting this have also been proposing that the replacement for Autoconf and the 'configure' scripts it generates be something simpler. As a system administrator who interacts with configure scripts (and autoconf) and who builds projects such as OpenZFS, I think that people proposing simpler replacements may not be seeing the features that people like me find valuable in practice.

(For this I'm setting aside the (wasteful) cost of replacing Autoconf.)

Projects such as OpenZFS and others rely on their configuration system to detect various aspects of the system they're being built on that can't simply be assumed. For OpenZFS, this includes various aspects of the (internal) kernel 'API'; for other projects, such as conserver, this covers things like whether or not the system has IPMI libraries available. As a system administrator building these projects, I want them to automatically detect all of this rather than forcing me to work it out by hand and set build options accordingly (or demanding that I install all of the libraries and so on that they might possibly want to use).

As a system administrator, one large thing that I find valuable about configure is that it doesn't require me to change anything shipped with the software in order to configure the software. I can configure the software using a command line, which means that I can use various means to save and recall that command line, ranging from 'how to build this here' documentation to automated scripts.

Normal configure scripts also let me and other people set the install location for the software. This is a relatively critical feature for programs that may be installed as a Linux distribution package, as a *BSD third party package, by the local system administrator, or by an individual user putting them somewhere in their own home directory, since all four of these typically need different install locations. If a replacement configure system does not accept at least a '--prefix' argument or the equivalent, it becomes much less useful in practice.

Many GNU configure scripts also let the person configuring the software set various options for what features it will include, how it will behave by default, and so on. How much these are used varies significantly between programs (and between people building the program), but some of the time they're critical for selecting defaults and enabling (or disabling) features that not everyone wants. A replacement configure system that doesn't support build options like these is less useful for anyone who wants to build such software with non-standard options, and it may force software to drop build options entirely.

(There are some people who would say that software should not have build options any more than it should have runtime configuration settings, but this is not exactly a popular position.)
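
To make the last few points concrete, here is the sort of thing I mean by saving the configure command line in a little 'how to build this here' script. Everything in it (the prefix, the option names, the compiler settings) is a made-up placeholder, not a real package's options:

#!/bin/sh
# How we configure this program here; run from the unpacked source
# directory after fetching a new version. The prefix and the
# --enable/--with options below are placeholders.
exec ./configure \
    --prefix="$HOME/lib/frobnicator" \
    --enable-some-feature \
    --with-some-library \
    CC=gcc CFLAGS="-O2 -g"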

This is my list, so other people may well value other features that are supported by Autoconf and configure (for example, the ability to set C compiler flags, or that it's well supported for building RPMs).

AutoconfValuableFeatures written at 22:29:37;

2024-04-26

I wish projects would reliably use their release announcements mechanisms

Today, not for the first time, I discovered that one project we use locally had made a new release (of one component) only when I updated my local copy of their Git repository and noticed that 'git pull' had fetched a new tag. Like various other projects, this project has an official channel to announce new releases of their various components; in this case, a mailing list. Sadly, the new release had not been announced on that mailing list, although other releases have been in the past.

This isn't the only project that does things like this and as a busy system administrator, I wish that they wouldn't. In some ways it's more frustrating to have an official channel for announcements and then not use it consistently than to have no such channel at all, forcing me to rely on things like how Github-based projects have an RSS feed of releases. With no channel (or a channel that never gets used), at least I know that I can't rely on it and I'm on my own. An erratic announcement channel makes me miss things.

(It may also cause me to use a release before it is completely ready. There are projects that publish tags and releases in their VCS repositories before they consider the releases to be officially released and something you should use. If I have to go to the VCS repository to find out about (some) new releases, I'm potentially going to be jumping the gun some of the time. Over the years I've built up a set of heuristics for various projects where I know that, for example, a new major release will almost always be officially announced somehow so I should wait to see that, but a point release may not get anything beyond a VCS tag.)

In today's modern Internet world, some of the projects that do this may have a different (and not well communicated) view of what their normal announcement mechanism actually is. If a project has an announcements mailing list and an online discussion forum, for example, perhaps their online forum is where they expect people to go for this sort of thing and there's a de facto policy that only major releases are sent to the mailing list. I tend not to look at such forums, so I'd be missing this sort of thing.

(Some projects may also have under-documented policies on what is worth 'bothering' people about through their documented announcements mechanism and what isn't. I wish they would announce everything, but perhaps other people disagree.)

AnnouncementListsShouldBeUsed written at 23:19:05;

2024-04-24

Pruning some things out with (GNU) find options

Suppose that you need to scan your filesystems and pass some files with specific names, ownerships, or whatever, except that you want to exclude scanning under /tmp and /var/tmp (as illustrative examples). Perhaps also you're feeding the file names to a shell script, especially in a pipeline, which means that you'd like to screen out directory and file names that have (common) problem characters in them, like spaces.

(If you can use Bash for your shell script, the latter problem can be dealt with because you can get Bash to read NUL-terminated lines that can be produced by 'find ... -print0'.)

Excluding things from 'find' results is done with find's -prune action, which is a little bit tricky to use when you want to exclude absolute paths (well okay it's a little bit tricky in general; see this SO question and answers). To start with, you're going to want to generate a list of filesystems and then scan them by absolute path:

FSES="$(... something ...)"
for fs in $FSES; do
    find "$fs" -xdev [... magic ...]
done

Starting with an absolute path to the filesystem (instead of cd'ing into the root of the filesystem and doing 'find . -xdev [...]') means that we can now use absolute paths in find's -path argument instead of ones relative to the filesystem root:

find "$fs" -xdev '(' -path /tmp -o -path /var/tmp ')' -prune -o ....

With absolute paths, we don't have to worry about what if /var or /tmp (or /var/tmp) are separate filesystems, instead of being directories on the root filesystem. Although it's hard to work out without experimentation, -xdev and -prune combine the way we want.

(If we're running 'find' on a filesystem that doesn't contain either /tmp or /var/tmp, we'll waste a bit of CPU time having 'find' evaluate those -path arguments all the time despite it never being possible for them to match. This is unimportant when compared to having a simpler, less error prone script.)

If we want to exclude paths with spaces in them, this is easily done with '-name "* *"'. If we want to get all whitespace, we need GNU Find and its '-regex' argument, documented best in "Regular Expressions" in the info documentation. Because we want to use a character class to match whitespace, we need to use one of the regular expression types that include this, so:

find "$fs" -regextype grep ... -regex '.*[[:space:]].*' ...

On the whole, 'find' is an awkward tool to use for this sort of filtering. Unfortunately it's sometimes what we turn to because our other options involve things like writing programs that consume and filter NUL-terminated file paths.

(And having 'find' skip entire directory trees is more efficient than letting it descend into them, print all their file paths, and then filtering the file paths out later.)

PS: One of the little annoyances of Unix for system administrators is that so many things in a stock Unix environment fall apart the moment people start putting odd characters in file names, unless you take extreme care and use unusual tools. This often affects sysadmins because we frequently have to deal with other people's almost arbitrary choices of file and directory names, and we may be dealing with actively malicious attackers for extra concern.

Sidebar: Reading null-terminated lines in Bash

Bash's version of the 'read' builtin supports a '-d' argument that can be used to read NUL-terminated lines:

while IFS= read -r -d '' line; do
  [ ... use "$line" ... ]
done
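
One way (a sketch) to feed this loop from 'find' is with a process substitution, with the actual find tests elided:

while IFS= read -r -d '' line; do
  [ ... use "$line" ... ]
done < <(find "$fs" -xdev [... tests ...] -print0)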

You still have to properly quote "$line" in every single use, especially as you're doing this because you expect your lines (or filenames) to sometimes contain troublesome characters. You should definitely use Shellcheck and pay close attention to its warnings (they're good for you).

FindPruningThingsOut written at 22:32:26;

2024-04-16

IPMI connections have privilege levels, not just IPMI users

If you want to connect to a server's IPMI over the network, you normally need to authenticate as some IPMI user. When you set that IPMI user up, you'll give it one of three or four privilege levels: ADMINISTRATOR, OPERATOR, USER, or, what I believe is rarely used, CALLBACK. For years, when I tried to set up IPMIs for things like reading sensors over the network, remote power cycling, or Serial over LAN console access, I'd make a special IPMI user for the purpose and try to give it a low privilege level, but the low privilege level basically never worked, so I'd give up, grumble, and make yet another ADMINISTRATOR user. Recently I discovered that I had misunderstood what was going on, which is that both IPMI users and IPMI connections have a privilege level.

When you make an IPMI connection with, for example, ipmitool, it will ask for that connection to be at some privilege level. Generally the default privilege level that things ask for is 'ADMINISTRATOR', and it's honestly hard to blame them. As far as I know there is no standard for what operations require what privilege level; instead it's up to the server or BMC vendor to decide what level they want to require for any particular IPMI command. But everyone agrees that 'ADMINISTRATOR' is the highest level, so it's the safest to ask for as the connection privilege level; if the BMC doesn't let you do it at ADMINISTRATOR, you probably can't do it at all.

The flaw in this is that an IPMI user's privilege level constrains what privilege level you can ask for when you authenticate as that user. If you make a 'USER' privileged IPMI user, connect as it, and ask for ADMINISTRATOR privileges, the BMC is going to tell you no. Since ipmitool and other tools were always asking for ADMINISTRATOR by default, they would get errors unless I made my IPMI users have that privilege level. Once I realized this, I could explicitly tell ipmitool and other things to ask for less privilege and then work out exactly what privilege level I needed for a particular operation on a particular BMC.
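
For example (with made-up BMC hostnames and user names, and with password handling left out), asking for a lower privilege level with ipmitool is a matter of its '-L' argument:

# Read sensors over a USER-level connection, get a SoL console at OPERATOR.
ipmitool -I lanplus -H somehost-ipmi -U sensors -L USER sdr list
ipmitool -I lanplus -H somehost-ipmi -U console -L OPERATOR sol activate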

(It is probably safe to assume that a 'USER' privileged IPMI user (well, connection) can read sensor data. Experimentally, at least one vendor's BMC will do Serial over LAN at 'OPERATOR' privilege, but I wouldn't be surprised if some require 'ADMINISTRATOR' for that, since serial console access is often the keys to the server itself. Hopefully power cycling the server is an 'OPERATOR' level thing, but again perhaps not on some BMCs.)

PS: If there's a way to have ipmitool and other things ask for 'whatever the (maximum) privilege level this user has', it's not obvious to me in things like the ipmitool manual page.

IPMIUsersAndPermissions written at 22:56:43;

2024-04-07

NAT'ing on the firewall versus host routes for public IPs

In a comment on my entry on solving the hairpin NAT problem with policy based routing, Arnaud Gomes suggested an alternative approach:

Since you are adding an IP address to the server anyway, why not simply add the public address to a loopback interface, add a route on the firewall and forgo the DNAT completely? In most situations this leads to a much simpler configuration.

This got me to thinking about using this approach as a general way to expose internal servers on internal networks, as an alternative to NAT'ing them on our external firewall. This approach has some conceptual advantages, including that it doesn't require NAT, but unfortunately it's probably significantly more complex in our network environment and so much less attractive than NAT'ing on the external firewall.

There are two disadvantages of the routing approach in an environment like ours. The first disadvantage is that it probably only works easily for inbound connections. If such an exposed server wants to make outgoing connections that will appear to come from its public IP, it needs to explicitly set the source IP for those connections instead of allowing the system to choose the default. Potentially you can solve this on the external firewall by NAT'ing outgoing connections to its public IP, but then things are getting complicated, since you can have two machines generating traffic with the same IP.
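
To illustrate the sort of per-server fiddling involved (this is a sketch with documentation addresses and an assumed 'eth0' and internal gateway, not our actual setup), the inbound side of the routing approach is a one-liner, but the outbound side needs something like a preferred source address on the default route:

# Accept traffic addressed to the public IP locally.
ip addr add 203.0.113.10/32 dev lo
# Have locally originated traffic use the public IP as its source address.
ip route change default via 192.168.10.1 dev eth0 src 203.0.113.10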

The second disadvantage is that we'd have to establish and maintain a collection of host routes in multiple places. Our core router would need the routes, the routing firewall each such machine was behind would need to have the route, and probably we'd want other machines and firewalls to also have these host routes. And every time we added, removed, or changed such a machine we'd have to update these routes. We especially don't like to frequently update our core router precisely because it is our core router.

The advantage of doing bidirectional NAT on our external firewall for these machines is the reverse of these issues. There's only one place in our entire network that really has to know which internal machine is which public IP. Of course this leaves us with the hairpin NAT problem and needing split horizon DNS, but those are broadly considered solved problems, unlike maintaining a set of host routes.

On the other hand, if we already had a full infrastructure for maintaining and updating routing tables, the non-NAT approach might be easy and natural. I can imagine an environment where you propagate route announcements through your network so that everyone can automatically track and know where certain public IPs are. We'd still need firewall rules to allow only certain sorts of traffic in, though.

FirewallNATVsHostRoutes written at 22:38:19;

2024-04-02

An issue with Alertmanager inhibitions and resolved alerts

Prometheus Alertmanager has a feature called inhibitions, where one alert can inhibit other alerts. We use this in a number of situations; for example, our special 'there is a large scale problem' alert inhibits various other alerts. Recently I realized (prompted by this mailing list thread) that there is a complication in how inhibitions interact with being notified about resolved alerts.

Suppose that you have an inhibition rule to the effect that alert A ('this host is down') inhibits alert B ('this special host daemon is down'), and you send notifications on resolved alerts. With alert A in effect, every time Alertmanager goes to send out a notification for the alert group that alert B is part of, Alertmanager will see that alert B is inhibited and filter it out (as far as I can tell this is the basic effect of Alertmanager silences, inhibitions, and mutes). Such notifications will (potentially) happen on every group_interval tick.
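
For illustration, an inhibition rule for this scenario might look like the following sketch in the Alertmanager configuration; the alert names and the 'host' label here are made up:

inhibit_rules:
  - source_matchers:
      - alertname = "HostDown"
    target_matchers:
      - alertname = "HostDaemonDown"
    equal: ['host']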

Now suppose that both alert A and alert B resolve at more or less the same time (because the host is back up along with its special daemon). Alertmanager doesn't immediately send notifications for resolved alerts; instead, just like all other alert group re-notifications, they wait for the next group_interval tick. When this tick happens, alert B will be a resolved alert that you should normally be notified about, and alert A will no longer be active and so no longer inhibiting it. You'll receive a potentially surprising notification about the now-resolved alert B, even though it was previously inhibited while it was active (and so you may never have received an initial notification that it was active).

(Although I described it as both alerts resolving around the same time, it doesn't have to be that way; alert A might have ended later than B, with some hand-waving and uncertainty. The necessary condition is for alert A and its inhibition to no longer be in effect when Alertmanager is about to process a notification that includes alert B's resolution.)

The consequence of this is that if you want inhibitions to reliably suppress notification about resolved alerts, you need the inhibiting alert to be active at least one group_interval longer than the alerts it's inhibiting. In some cases this is easy to arrange, but in other cases it may be troublesome and so you may want to simply live with the extra notifications about resolved alerts.

(The longer your 'group_interval' setting is, the worse this gets, but there are a number of reasons you probably want group_interval to be relatively short, including prompt notifications about resolved alerts under normal circumstances.)

AlertmanagerInhibitionsGotcha written at 23:02:40;

What Prometheus Alertmanager's group_interval setting means

One of the configuration settings in Prometheus Alertmanager for 'routes' is the alert group interval, the 'group_interval' setting. The Alertmanager configuration describes the setting this way:

How long to wait before sending a notification about new alerts that are added to a group of alerts for which an initial notification has already been sent.

As has come up before more than once, this is not actually accurate. The group interval is not a (minimum) delay; it is instead a timer that ticks every so often (a ticker). If you have group_interval set to five minutes, Alertmanager will potentially send another notification only at every five minute interval after the first notification (what I'll call a tick). If the initial notification happened at 12:10, the first re-notification might happen at 12:15, and then at 12:20, and then at 12:25, and so on.

(The timing of these ticks is based purely on when the first notification for an alert group is sent, so usually they will not be so neatly lined up with the clock.)

If a new alert (or a resolved alert) misses the group_interval tick by even a second, a notification including it won't go out until the next tick. If the initial alert group notification happened at 12:10 and then nothing changed until a new alert was raised at 12:31, Alertmanager will not send another notification until the group_interval tick at 12:35, even though it's been much more than five minutes since the last notification.

This gives you an unfortunate tradeoff between prompt notification of additional alerts in an alert group (or of alerts being resolved) and not receiving a horde of notifications. If you want to receive a prompt notification, you need a short group_interval, but then you can receive a stream of notifications as alert after alert after alert pops up one by one. It would be nicer if Alertmanager didn't have this group_interval tick behavior but would instead treat it as a minimum delay between successive notifications, but I don't expect Alertmanager to change at this point.

(I've written all of this down before in various entries, so this is mostly to have a single entry I can link to in the future when group_interval comes up.)

AlertmanagerGroupInterval written at 20:43:46;

2024-04-01

The power of being able to query your servers for unpredictable things

Today, for reasons beyond the scope of this entry, we wanted to find out how much disk space /var/log/amanda was using on all of our servers. We have a quite capable metrics system that captures the amount of space filesystems are using (among many other things), but /var/log/amanda wasn't covered by this because it wasn't a separate filesystem; instead it was just one directory tree in either the root filesystem (on most servers) or the /var filesystem (on a few fileservers that have a separate /var). Fortunately we don't have too many servers in our fleet and we have a set of tools to run commands across all of them, so answering our question was pretty simple.

This isn't the first time we've wanted to know some random thing about some or all of our servers, and it won't be the last time. The reality of life is that routine monitoring can't possibly capture every fact you'll ever want to know, and you shouldn't even try to make it do so (among other issues, you'd be collecting far too much information). Sooner or later you're going to need to get nearly arbitrary information from your servers, using some mechanism.

This mechanism doesn't necessarily need to be SSH, and it doesn't even need to involve connecting to servers, depending in part on how many of them you have. Perhaps you'll normally do it by peering inside one of your immutable system images to answer questions about it. But on a moderate scale my feeling is that 'run a command on some or all of our machines and give me the output' is the basic primitive you're going to wind up wanting, partly because it's so flexible.
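
A minimal sketch of that primitive looks something like the following; it assumes a hypothetical flat file of host names and working SSH keys, and real versions grow features (parallelism, host selection, and so on) from there:

#!/bin/sh
# Run the given command on every host listed in a (hypothetical)
# $HOME/lib/allhosts file, one host name per line, labeling the output.
while read -r host; do
    # -n keeps ssh from eating the host list on standard input.
    ssh -n "$host" "$@" 2>&1 | sed "s/^/$host: /"
done < "$HOME/lib/allhosts"

With something like this saved as, say, a hypothetical 'oneverything' script, today's question was just a matter of 'oneverything du -sh /var/log/amanda'.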

(One advantage of using SSH for this is that SSH has a mature, well understood and thoroughly hardened authentication and access control system. Other methods of what are fundamentally remote command or code execution may not be so solid and trustworthy. And if you want to, you can aggressively constrain what an SSH connection can do through additional measures like forcing it to run in a captive environment that only permits certain things.)

PS: The direct answer is that on everything except our Amanda backup servers, /var/log/amanda is at most 20 Mbytes or so, and often a lot less. After the Amanda servers, our fileservers have the largest amount of data there. In our environment, this directory tree is only used for what are basically debugging logs, and I believe that on clients, the amount of debugging logs you wind up with scales with the number of filesystems you're dealing with.

QueryingServersForRandomThings written at 23:04:41;

