Wandering Thoughts archives

2019-06-14

Intel's MDS issues have now made some old servers almost completely useless to us

Over on Twitter, I said:

One unpleasant effect of MDS is that old Intel-based machines (ones with CPUs that will not get microcode updates) are now effectively useless to us, unlike before, because it's been decided that the security risks are too high for almost everything we use machines for.

If Intel releases all of the MDS microcode updates they've promised to do (sometime), this will have only a small impact on our available servers. If they decide not to update some older CPUs they're currently promising updates for, we could lose a significant number of servers.

To have MDS mitigated at all on Intel CPUs, you need updated microcode, an updated kernel, and hyperthreading turned off (see eg Ubuntu's MDS page). According to people who have carefully gone through the CPU lists in the PDFs linked from Intel's MDS advisory page, Intel is definitely not releasing microcode updates for anything older than the 2nd generation 'Core' "Sandy Bridge" architecture, which started appearing in 2011 or so (so in theory my 2011 home machine could receive a microcode update and be fixed against MDS). In theory they are going to release microcode updates for everything since then.
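
(On Linux, one quick way to see where a given machine stands is the kernel's sysfs report for MDS, which reflects whether the machine has both the updated microcode and the kernel side of the mitigation. The exact wording varies with the kernel version and CPU, but the output looks something like this:

$ cat /sys/devices/system/cpu/vulnerabilities/mds
Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable

With updated microcode, a new enough kernel, and hyperthreading turned off, this changes to something like 'Mitigation: Clear CPU buffers; SMT disabled'.)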

For reasons beyond the scope of this entry, we have decided that we have almost no roles where we are comfortable deploying an unpatchable machine that is vulnerable to MDS. In normal places this might not be an issue, since they would long since have turned over old server hardware. We are not a normal place; we run hardware into the ground, which means that we often have very old servers. For example, up until we reached this decision about MDS, we were still using Dell 1950s as scratch and test machines.

In theory, the direct effects of MDS on our stock of live servers are modest but present; we're losing a few machines, including one quite nice one (a Westmere Xeon with 512 GB of RAM), and some machines that were on the shelf are now junk. However, at the moment we have a significant number of servers that are based around Sandy Bridge Xeons. Intel has promised microcode updates for these CPUs but has not yet delivered them. If Intel never delivers updated microcode, we'll lose a significant number of machines and pretty much decimate the compute infrastructure we provide to the department.

(A great many of our current compute machines are donated Dell C6220 blades with Xeon E5-2680s, specifically CPUID 206D7 ones. Don't ask how much work it took to get the raw CPUID values that Intel puts in their PDF.)

If further significant MDS-related attacks are discovered and Intel is more restrictive with microcode updates, this situation will get worse. We have a significant number of in-service Dell R210 IIs and R310 IIs, and they are almost as old as these C6220s (although some of them have Ivy Bridge generation CPUs instead of Sandy Bridge ones). Losing these would really hurt.

(In general we are not very happy with Intel right now, and we are starting to deploy AMD-based machines where we can. I would be happy if someone started offering decent basic 1U or 2U AMD-based servers at competitive prices.)

IntelMDSKillsOldServers written at 01:13:04; Add Comment

2019-06-10

Keeping your past checklists doesn't help unless you can find them again

I mentioned recently that the one remaining filesystem we need to migrate from our remaining OmniOS fileserver to our Linux fileservers is our central administrative filesystem. This filesystem is a challenge to move because everything depends on it and a number of systems are tangled into it (for example, our fileservers all rsync a partial copy from its master fileserver, which means that they need to know and agree which is the master fileserver and the master fileserver had better not try to replicate a copy from itself). With so many things depending on this filesystem and it so tangled into our systems, we clearly needed to carefully consider and plan the migration.

Of course, this is not the first time we've moved this filesystem between ZFS fileservers; we did it before when we moved from our first generation Solaris fileservers to our OmniOS fileservers. Since I'm a big fan of checklists and I have theoretically learned a lesson about explicitly saving a copy after things are over, I was pretty sure that I had written up a checklist for the first migration and then kept it. Somewhere. In my large, assorted collection of files, in several directories, under some name.

As you might expect from how I wrote that, my initial attempts to find a copy of our 2015-ish checklist for this migration did not go well. I looked in our worklog system, where ideally it should have been mailed in after the migration was done, and I hunted around in several areas where I would have expected to keep a personal copy. Nothing came up in the searches that I attempted, and I found myself wondering if we had even done a checklist (as crazy as that seemed). If we had done one, it seemed we had lost it since then.

Today, freshly back from vacation, I resorted to a last-ditch attempt at brute force. I used 'zpool history' to determine that we had probably migrated the filesystem in mid-February of 2015, and then I went back to the archives of both our worklog system and our sysadmin mailing list (where we coordinate and communicate with each other), and at least scanned through everything from the start of that February. This finally turned up an email with the checklist (in our sysadmin mailing list archives, which fortunately we keep), and once I had that, I could re-search all of my files for a tell-tale phrase from it. And there the checklist was, in a file called 'work/Old/cssite-mail-migration'.

(It had that name because back in 2015 we migrated a bunch of administrative filesystems all at once at the end of things, including our central /var/mail. This time around, we migrated /var/mail very early because we knew 10G instead of 1G would make a real difference for it.)
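
For the 'zpool history' step, this basically amounted to grepping the pool's history for commands that mentioned the filesystem; the history lines are timestamped, which is what gave me the rough migration date. A sketch of the idea, with the pool name and the grep pattern invented here for illustration:

$ zpool history fs3-main | grep -i cssite
[...]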

Sadly, I'm not sure right now how I could have done much better than this round-about way. Explicitly sending the checklist to our worklog system would have helped a bit, but even then I would have had to stumble on the right search terms to find it. Both taxonomy and searching are hard (human) problems; with my searching, I was basically playing a guessing game about what specific terms, commands, or whatever would have been used in the checklist, and evidently I guessed wrong. One possible improvement might be to make a storage directory specifically for checklists, which would at least narrow my searching down (there are a lot of things in 'work/Old').

(Things like the name of the filesystem and 'migration' are not useful, because the filesystem shows up in every filesystem migration we do, since it's where we put central data about NFS mounts and so on.)

FindableChecklists written at 21:54:30; Add Comment

2019-06-08

Our current approach for updating things like build instructions

At work here, we have a strong culture of documenting everything we do in email, in something that we call 'worklogs'. Worklog messages are sent to everyone in my group, and they are also stored in a private, searchable web archive. We also have systematic build instructions for our systems, and unsurprisingly they are also worklog messages. However, they are unusual worklog messages, because they are not only what was done but also what you should do to recreate the system. This means that when we modify a system covered by build instructions, we have to update these build instructions and re-mail them to the worklog system.

For a long time, that was literally how things worked. If and when you modified such a system, part of your job was to go to the worklog archives, search through them, find the most recent build instructions for the system, make a copy, modify the copy, and then mail it back in. If you were making a modification that you weren't sure was final or that we'd want to keep, you had to make an additional note to do this whole update process when the dust settled. If you were in a rush or had other things to do too or weren't certain, it was pretty tempting to postpone all of this work until some convenient later time. Sometimes that didn't happen, or at least didn't happen before a co-worker also modified the machine (with various sorts of confusion possible in the aftermath). A cautious person who wanted to build a copy of a machine for a new Ubuntu version would invariably wind up trawling through our worklog archive to check for additional changes on top of the latest build instructions, and then perhaps have to sort out if we wanted to keep some of them.

At one point, all of this reached a critical mass and we decided that something had to change; build instructions needed to be more reliably up to date. We decided to make a simple change to enable more easy updates; we would commit to keeping the current copy of every set of build instructions in a file in a known spot in our central administrative filesystem, as well as mailing them to the worklog system. That way, we could cut a number of steps off the update process to reduce the friction involved; rather than hunting for the latest version, you could just go edit it, commit it to RCS, and then mail it in to the worklog system.
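
As a concrete sketch of what the new flow looks like (the directory, file name, and worklog address here are made up for illustration; we keep the files under RCS):

$ cd <central-admin-fs>/buildnotes
$ co -l mailserver          # check out and lock the build instructions
$ vi mailserver             # make the update
$ ci -u mailserver          # commit to RCS, keeping a readable working copy
$ mail -s "mailserver build instructions" <worklog-address> < mailserver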

(The version in the worklog system remains the canonical reference, in part because my co-workers keep printed-out copies of the build instructions for various critical systems. An update doesn't really exist until it's been emailed in.)

This modest change is now a couple of years old and I think it's safe to say that it's been a smashing success. Our build instructions are now almost always up to date and it takes much less work to keep them that way. What was a pain in the rear before is now only a couple of minutes of work, often quite simple work. In the common case, you can copy the commands necessary from your existing email message about 'I made the following change to system X', since we always write those when we make changes. As an additional benefit, we don't have to worry about line-wrapping and other mangling happening when we copy email messages around and cut & paste them from the web archive system and so on; the 'real' build instructions live in a text file and never get mangled by any mail-related thing.

In general and in theory I know the power of small changes that reduce friction, but pretty much every time I run into one in practice it surprises me all over again. This is one of the times; I certainly hoped for a change and an improvement, but I didn't have any real idea how large of one it would be.

PS: We also have various 'how-tos' and 'how-this-works' and so on documentation that we keep in the same directory and update in the same way. Basically, any email in our worklog archive that serves as the canonical instructions or explanations for something is a candidate to be enrolled in this system, not just system build instructions.

UpdatingDocumentationApproach written at 00:18:07; Add Comment

2019-06-02

Exploring the start time of Prometheus alerts via ALERTS_FOR_STATE

In Prometheus, active alerts are exposed through two metrics, the reasonably documented ALERTS and the under-documented new metric ALERTS_FOR_STATE. Both metrics have all of the labels of the alert (although not its annotations), and also an 'alertname' label; the ALERTS metric also has an additional 'alertstate' label. The value of the ALERTS metric is always '1', while the value of ALERTS_FOR_STATE is the Unix timestamp of when the alert rule expression started being true; for rules with 'for' delays, this means that it is the timestamp when they started being 'pending', not when they became 'firing' (see this rundown of the timeline of an alert).
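
As a concrete illustration with invented label values, an alert whose rule expression first became true at Unix time 1559527200 and that is now firing would show up in the two metrics roughly like this:

ALERTS{alertname="CslabLowDisk", alertstate="firing", filesystem="/h/103"}  1
ALERTS_FOR_STATE{alertname="CslabLowDisk", filesystem="/h/103"}  1559527200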

(The ALERTS_FOR_STATE metric is an internal one added in 2.4.0 to persist the state of alerts so that 'for' delays work over Prometheus restarts. See "Persist 'for' State of Alerts" for more details, and also Prometheus issue #422. Because of this, it's not exported from the local Prometheus and may not be useful to you in clustered or federated setups.)

The ALERTS_FOR_STATE metric is quite useful if you want to know the start time of an alert, because this information is otherwise pretty much unavailable through PromQL. The necessary information is sort of in Prometheus's time series database, but PromQL does not provide any functions to extract it. Also, unfortunately there is no good way to see when an alert ends even with ALERTS_FOR_STATE.

(In both cases the core problem is that alerts that are not firing don't exist as metrics at all. There are some things you can do with missing metrics, but there is no good way to see in general when a metric appears or disappears. In some cases you can look at the results of manually evaluating the underlying alert rule expression, but in other cases even this will have a null value when it is not active.)

We can do some nice things with ALERTS_FOR_STATE, though. To start with, we can calculate how long each current alert has been active, which is just the current time minus when it started:

time() - ALERTS_FOR_STATE

If we want to restrict this to alerts that are actually firing at the moment, instead of just being pending, we can write it as:

(time() - ALERTS_FOR_STATE)
  and ignoring(alertstate) ALERTS{alertstate="firing"}

(We must ignore the 'alertstate' label because the ALERTS_FOR_STATE metric doesn't have it.)

You might use this in a dashboard where you want to see which alerts are new and which are old.

A more involved query is one to tell us the longest amount of time that a firing alert has been active over the past time interval. The full version of this is:

max_over_time( ( (time() - ALERTS_FOR_STATE)
                  and ignoring(alertstate)
                         ALERTS{alertstate="firing"}
               )[7d:] )

The core of this is the expression we already saw, and we evaluate it over the past 7 days, but until I thought about things it wasn't clear why this gives us the longest amount of time for any particular alert. What is going on is that while an alert is active, ALERTS_FOR_STATE's value stays constant while time() is counting up, because it is evaluated at each step of the subquery. The maximum value of 'time() - ALERTS_FOR_STATE' happens right before the alert ceases to be active and its ALERTS_FOR_STATE metric disappears. Using max_over_time captures this maximum value for us.

(If the same alert is active several times over the past seven days, we only get the longest single time. There is no good way to see how long each individual incident lasted.)

We can exploit the fact that ALERTS_FOR_STATE has a different value each time an alert activates to count how many different alerts activated over the course of some range. The simplest way to do this is:

changes( ALERTS_FOR_STATE[7d] ) + 1

We have to add one because going from not existing to existing is not counted as a change in value for the purpose of changes(), so an alert that only fired once will be reported as having 0 changes in its ALERTS_FOR_STATE value. I will leave it as an exercise to the reader to extend this to only counting how many times alerts fired, ignoring alerts that only became pending and then went away again (as might happen repeatedly if you have alerts with deliberately long 'for' delays).

(This entry was sparked by a recent prometheus-users thread, especially Julien Pivotto's suggestion.)

PrometheusAlertStartTimeStuff written at 22:15:26; Add Comment

2019-05-24

The problem of paying too much attention to our dashboards

On Mastodon, I said:

Our Grafana dashboards are quite shiny, at least to me (since I built them), but I really should start resisting the compulsive urge to take a look at them all the time just to see what's going on and look at the pretty zigzagging lines.

I have a bad habit of looking at shiny things that I've put together, and dashboards are extremely shiny (even if some of them are almost all text). There are two problems with this, the obvious and the somewhat subtle.

The obvious problem is that, well, I'm spending my time staring somewhat mindlessly at pretty pictures. It's interesting to watch lines wiggle around or look over collections of numbers, but it's generally not informative. It's especially not informative for our systems because our systems spend almost all of their time working fine, which means that there is no actual relevant information to be had from all of these dashboards. In terms of what I spend (some) time on, I would be better off if we had one dashboard with one box that said 'all is fine'.

This is a general issue with dashboards for healthy environments; if things are fine, your dashboards are probably telling you nothing or at least nothing that is of general interest and importance.

(Your dashboards may be telling you details and at some point you may want access to those details, like how many email messages you typically have in your mailer queues, but they are not generally important.)

The more subtle problem is the general problem of metrics, which is a variant of Goodhart's law. Once you have a metric and you pay attention to the metric, you start to focus on the metric. If you have a dashboard of metrics, it's natural to pay attention to the metrics and to exceptions in the metrics, whether or not they actually matter. It may or may not matter that a machine has an unusually high load average, but if it's visible, you're probably going to focus on it and maybe dig into it. Perhaps there is a problem, but often there isn't, especially if you're surfacing a lot of things on your dashboards because they could be useful.

(One of the things behind this is that all measures have some amount of noise and natural variation, but as human beings we have a very strong drive to uncover patterns and meaning in what we see. If you think you see some exceptional pattern, it may or may not be real but you can easily spend a bunch of time trying to find out and testing theories.)

My overall conclusion from my own experiences with our new dashboards and metrics system is that if you have good alerts, you (or at least I) would be better off only looking at dashboards if there are some indications that there are actual problems, or if you have specific questions you'd like to answer. In practice, trawling for 'is there anything interesting' in our dashboards is a great way to spend some time and divert myself down any number of alleyways, most of them not useful ones.

(In a way the worst times are the times when looking at our dashboards actually is useful, because that just encourages me to do it more.)

PS: This is not the first time I've seen the effects of something like this; I wrote about an earlier occasion way back in Metrics considered dangerous.

DashboardAttentionProblem written at 23:15:38; Add Comment

2019-05-20

Understanding how to pull in labels from other metrics in Prometheus

Brian Brazil recently wrote Analyse a metric by kernel version, where he shows how to analyze a particular metric in a new way by, essentially, adding a label from another metric to the metric, in this case the kernel version. His example is a neat trick, but it's also reasonably tricky to understand how it works, so today I'm going to break it down (partly so that I can remember this in six months or a year from now, when my PromQL knowledge has inevitably rusted).

The query example is:

avg without (instance)(
    node_sockstat_TCP_tw 
  * on(instance) group_left(release)
    node_uname_info
)

The simple version of what's happening here is that because node_uname_info's value is always 1, we're using '*' as a do-nothing arithmetic operator so we can essentially do a join between node_sockstat_TCP_tw and node_uname_info to grab a label from the latter. We have to go to these lengths because PromQL does not have an explicit 'just do a join' operator that can be used with group_left.

There are several things in here. Let's start with the basic one, which is the '* on(instance)' portion. This is one-to-one vector matching with a restriction on what label is being used to match up pairs of entries; we're implicitly restricting the multiplication to pairs of entries with matching 'instance' labels. Normally 'instance' will be the same for all metrics scraped from a single host's node_exporter, so it makes a good label for finding the node_uname_info metric that corresponds to a particular host's node_sockstat_TCP_tw metric.

(We have to use 'on (...)' because not all labels match. After all, we're pulling in the 'release' label from the node_uname_info metric; if it was already available as a label on node_sockstat_TCP_tw, we wouldn't need to do this work at all.)
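
To make the matching concrete, here is roughly what a matching pair of series might look like, with invented values and most of node_uname_info's other labels omitted:

node_sockstat_TCP_tw{instance="host1:9100", job="node"}  12
node_uname_info{instance="host1:9100", job="node", release="4.15.0-50-generic", ...}  1

The 'on(instance)' matching pairs these two series up because they share the same 'instance' label value.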

Next is the group_left, which is being used here for its side effect of incorporating the 'release' label from node_uname_info in the label set of the results. I wrote about the basics of group_left's operation in Using group_* vector matching for database lookups, where I used group_left basically as a database join between a disk space usage metric and an alert level metric that also carried an additional label we wanted to include for who should get alerted. Brian Brazil's overall query here is similar to my case, except that here we don't care about the value that the node_uname_info metric has; we are only interested in its 'release' label.

In an ideal world, we could express this directly in PromQL to say 'match between these two metrics based on instance and then copy over the release label from the secondary one'. In this world, unfortunately group_left and group_right have the limitation that they can only be used with arithmetic and comparison operators. In my earlier entry this wasn't a problem because we already wanted to compare the values of the two metrics. Here, we don't care about the value of node_uname_info at all. Since we need an arithmetic or comparison operator in order to use group_left and we want to ignore the value of node_uname_info, we need an operator that will leave node_sockstat_TCP_tw's value unchanged. Because the value of node_uname_info is always 1, we can simply use '*', as multiplying by one will do nothing here.

(In theory we could instead use a comparison operator, which would naturally leave node_sockstat_TCP_tw's value unchanged (more or less cf). However, in practice it's often tricky to find a comparison operator that will always be true. You might not have any sockets in TIME_WAIT so a '>=' could be false here, for example. Using an arithmetic operator that will have no effect is simpler.)

The case of a secondary metric that's always 1 is the easy case, as we've seen. What about a secondary metric with a label you want that isn't necessarily always 1, and in fact may have an arbitrary value? Fortunately, Brian Brazil has provided the answer to that too. The simple but clever trick is to multiply the metric by zero and then add it:

  node_sockstat_TCP_tw
+ on(instance) group_left(release)
  (node_uname_info * 0)

This works with arbitrary values; multiplying by zero turns the value for the right side to 0, and then adding 0 has no effect on node_sockstat_TCP_tw's value.

As a side note, this illustrates a good reason to have '1' be the value of any metric that exists to publish its labels, as is the case for node_uname_info or metrics that publish, say, the version of your program. The value these metrics have is arbitrary in one sense, but '1' is both conventional and convenient.
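
For instance, a program exporting an info-style metric of its own might expose it like this (the metric and label names here are invented for illustration):

# HELP myprog_build_info Build and version information for myprog.
# TYPE myprog_build_info gauge
myprog_build_info{version="1.2.3",goversion="go1.12.5"} 1

Because the value is a constant 1, you can multiply it against the program's other metrics with group_left to pull in its labels, exactly as with node_uname_info above.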

PrometheusPullingInLabels written at 22:16:32; Add Comment

2019-05-17

My new favorite tool for looking at TLS things is certigo

For a long time I've used the OpenSSL command line tools to do things like looking at certificates and chasing certificate chains (although OpenSSL is no longer what you want to use to make self-signed certificates). This works, and is in many ways the canonical and most complete way to do this sort of stuff, but if you've ever used the openssl command and its many sub-options you know that it's kind of a pain in the rear. As a result of this, for some years now I've been using Square's certigo command instead.

Certigo has two main uses. My most common case is to connect to some TLS-using service to see what its active certificate and certificate chain are (and try to verify it), as well as some TLS connection details:

$ certigo connect www.cs.toronto.edu:https
** TLS Connection **
Version: TLS 1.2
Cipher Suite: ECDHE_RSA key exchange, AES_128_GCM_SHA256 cipher

** CERTIFICATE 1 **
Valid: 2018-04-17 00:00 UTC to 2020-04-16 23:59 UTC
Subject:
[...]

Certigo will attempt to verify the certificate's OCSP status, but some OCSP verifiers seem to dislike its queries. In particular, I've never seen it succeed with Let's Encrypt certificates; it appears to always report 'ocsp: error from server: unauthorized'.

(Some digging suggests that Certigo is getting this 'unauthorized' response when it queries the OCSP status of the intermediate Let's Encrypt certificate.)

Certigo can connect to things that need STARTTLS using a variety of protocols, including SMTP but unfortunately not (yet) IMAP. For example:

$ certigo connect -t smtp smtp.cs.toronto.edu:smtp

(Fortunately IMAP servers usually also listen on imaps, port 993, which is TLS from the start.)

My other and less frequent use of Certigo is to dump the details of a particular certificate that I have sitting around on disk, with 'certigo dump ...'. If you're dumping a certificate that's in anything except PEM format, you may have to tell Certigo what format it's in.
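
A minimal example of this, with the file name made up:

$ certigo dump oursite-chain.pem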

Certigo also has a 'certigo verify' operation that will attempt to verify a certificate chain that you provide it (against a particular host name). I don't find myself using this very much, because it's not necessarily representative of what either browsers or other sorts of clients are going to do (partly because it uses your local OS's root certificate store, which is not necessarily anything like what other programs will use). Generally if I want to see a client-based view of how a HTTPS server's certificate chain looks, I turn to the SSL server test from Qualys SSL Labs.

All Certigo sub-commands take a '-v' argument to make them report more detailed things. Their normal output is relatively minimal, although not completely so.

Certigo is written in Go and uses Go's standard libraries for TLS, which means that it's limited to the TLS ciphers that Go supports. As a result I tend to not pay too much attention to the initial connection report unless it claims something pretty unusual.

(It also turns out that you can get internal errors in Certigo if you compile it with the very latest development version of Go, which may have added TLS ciphers that Certigo doesn't yet have names for. The moral here is likely that if you compile anything with bleeding edge, not yet released Go versions, you get to keep both pieces if something breaks.)

InspectingTLSWithCertigo written at 22:53:21; Add Comment

2019-05-12

What we'll want in a new Let's Encrypt client

Over on Twitter, I said:

It looks like we're going to need a new Let's Encrypt client to replace acmetool (which we love); acmetool uses the v1 API and seems to no longer be actively developed, and the v1 API runs into problems in November: <link: End of Life Plan for ACMEv1>

(There is an unfinished ACMEv2 branch of acmetool, but, and also. It would be ideal if the community stepped forward to continue acmetool development, but sadly I don't see signs of that happening so far and I can't help with such work myself.)

November is when Let's Encrypt will turn off new account registrations through ACMEv1, which is a problem for us because we don't normally re-use Let's Encrypt accounts (for good reasons, and because it's easier). So in November, we would stop being able to install acmetool on new machines without changing our procedures to deliberately reuse accounts. Since doing so would only prolong things, we should get a new client instead. As it happens, we would like something that is as close to acmetool as possible, because acmetool is basically how we want to handle things.

Rather than try to write a lot of words about why we like acmetool so much (with our custom configuration file), I think it's simpler to demonstrate it by showing you the typical install steps for a machine:

apt-get install acmetool
mkdir /var/lib/acme/conf
cp <master>/responses /var/lib/acme/conf/
acmetool quickstart

acmetool want NAME1 ALIAS2 ...

(Alternately, we copy /var/lib/acme from the live version of the server. We may do both, using 'acmetool want' during testing and then overwriting it with the official version when we go to production.)

After this sequence, we have a new Let's Encrypt account, a cron job that automatically renews certificates at some random time of the day when they are 30 days (or less) from expiry, and a whole set of certificates, intermediate chains, and keys accessible through /var/lib/acme/live/<NAME1>/ and so on, with appropriate useful permissions (keys are root only normally, but everything else is generally readable). When a certificate is renewed, acmetool will reload or restart any potentially certificate-using service that is active on the machine. If we want to add additional certificates for different names, that's another 'acmetool want NAME2' (and then the existing cron job automatically renews them). All of this works on machines that aren't running a web server as well as machines that are running a properly configured one (and these days the Ubuntu 18.04 acmetool package sets that up for Apache).
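
For illustration, the live directory for a certificate name winds up looking roughly like this (the file list here is from memory and may vary a bit between acmetool versions):

$ ls /var/lib/acme/live/NAME1/
cert  chain  fullchain  privkey  url

The 'privkey' file is the root-only one; the others are generally readable, as mentioned.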

(We consider it a strong feature that acmetool doesn't otherwise attempt to modify the configurations of other programs to improve their ability to automatically do things with Let's Encrypt certificates.)

Acmetool accomplishes this with a certain amount of magic. Not only does it keep track of state (including what names you want certificates for, even if you haven't been able to get them yet), but it also has some post-issuance hook scripts that do that magic reloading. The reloading is blind (if you're running Apache, it gets restarted whether or not it's using TLS or acmetool's certificates), but this hasn't been a problem for us and it sure is convenient.

We can probably duplicate a lot of this by using scripts on top of some other client, such as lego. But I would like us to not need a collection of home-grown scripts (and likely data files) to mimic the simplicity of operation that acmetool provides. Possibly we should explore Certbot, the more or less officially supported client, despite my long-ago previous experiences with it as a heavyweight, complex, and opinionated piece of software that wanted to worm its way into your systems. Certbot seems like it supports all of what we need and can probably be made to cooperate, and it has a very high chance of continuing to be supported in the future.

(A lot of people like minimal Let's Encrypt clients that leave you to do much of the surrounding work yourself. We don't, partly because such additional work adds many more steps to install instructions and opens the door to accidents like getting a certificate but forgetting to add a cron job that renews it.)

(My only experimentation with Certbot was so long ago that it wasn't called 'certbot' yet. I'm sure that a lot has changed since then, and that may well include the project's focus. At the time I remember feeling that the project was very focused on people who were entirely new to TLS certificates and needed a great deal of hand-holding and magic automation, even if that meant Certbot modifying their system in all sorts of nominally helpful ways.)

LetsEncryptClientWants written at 22:30:52; Add Comment

2019-05-03

Some implications of using offset instead of delta() in Prometheus

I previously wrote about how delta() can be inferior to subtraction with offset, because delta() has to load the entire range of metric points and offset doesn't. In light of the issue I ran into recently with stale metrics and range queries, there turn out to be some implications and complexities of using offset in place of delta(), even if it lets you make queries that you couldn't otherwise do.

Let's start with the basics, which is that 'delta(mymetric[30d])' can theoretically be replaced with 'mymetric - mymetric offset 30d' to get the same result with far fewer metric points having to be loaded by Prometheus. This is an important issue for us, because we have some high-cardinality metrics that it turns out we want to query over long time scales like 30 or 90 days.

The first issue with the offset replacement is what happens when a particular set of labels for the metric didn't exist 30 days ago. Just like PromQL boolean operators (cf), PromQL math operators on vectors are filters, so you'll ignore all current metric points for mymetric that didn't exist 30 days ago. The fix for this is the inverse of ignoring stale metrics:

(mymetric - mymetric offset 30d) or mymetric

Here, if mymetric didn't exist 30 days ago we implicitly take its starting value as 0 and just consider the delta to be the current value of mymetric. Under some circumstances you may want a different delta value for 'new' metrics, which will require a different computation.
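
For example, if you would rather have such 'new' time series count as a delta of 0 instead of their current value, one approach that I believe works is to fill in with a zeroed copy of the metric:

(mymetric - mymetric offset 30d) or (mymetric * 0)

Multiplying by zero keeps the label sets but forces the values to 0, so series that have no data from 30 days ago show up in the result with a delta of 0.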

The inverse of the situation is metric labels that existed 30 days ago but don't exist now. As we saw in an earlier entry, the range query in the delta() version will include those metrics, so they will flow through to the delta() calculation and be included in your final result set. Although the delta() documentation sort of claims otherwise, the actual code implementing delta() reasonably doesn't currently extrapolate samples that start and end significantly far away from the full time range, so the delta() result will probably be just the change over the time series points available. In some cases this will go to zero, but in others it will be uninteresting and you would rather pretend that the time series is now 0. Unfortunately, as far as I know there's no good way to do that.

If you only care about time series (ie label sets) that existed at the start of the time period, I think you can extend the previous case to:

((mymetric - mymetric offset 30d) or mymetric)
    or -(mymetric offset 30d)

(As before, this assumes that a time series that disappears is implicitly going to zero.)

If you care about time series that existed in the middle of the time range but not at either the beginning or the end, I think you're out of luck. The only way to sweep those up is a range query and using delta(), which runs the risk of a 'too many metric points loaded' error.

Unfortunately all of this is increasingly verbose, especially if you're using label matches restricting mymetric to only some values (because then you need to propagate these label restrictions into at least the or clauses). It's a pity that PromQL doesn't have any function to do this for us.

I also have to modify something I said in my first entry on offset and delta(). Given all of these issues with appearing and disappearing time series, it's clear that optimizing delta() to not require the entire range is not as simple as it looks. It would probably require some deep hooks into the storage engine to say 'we don't need all the points, just the start and the end points and their timestamps', and that stuff would only be useful for gauges (since counters already have to load the entire range set and sweep over it looking for counter resets).

In our current usage we care more about how the current metrics got there than what the situation was in the past; we are essentially looking backward to ask what disk space usage grew or shrank. If some past usage went to zero and disappeared, it's okay to exclude it entirely. There are some potentially tricky cases that might cause me to rethink that someday, but for now I'm going to use the shorter version that only has one or, partly because Grafana makes it a relatively large pain to write complicated PromQL queries.

PrometheusDeltaVsOffsetII written at 00:45:45; Add Comment

2019-04-28

A gotcha with stale metrics and *_over_time() in Prometheus

We monitor the available disk space on our fileservers through custom metrics, and we allow people to set alerts on various levels of low disk space, which are evaluated through a Prometheus alert rule. Because several people are allowed to each set alerts on the same filesystem, we have to use group_right in the alert expression. A simplified version of this expression is:

cslab_avail_bytes <= on (filesystem)
  group_right(fileserver, pool) cslab_alert_minfree

Every so often we migrate a filesystem between fileservers (and ZFS pools). For a long time now, for a relatively short while after each migration, we have been getting a mysterious error from Prometheus about this alert rule:

err="found duplicate series for the match group {filesystem=\"/h/103\"} on the left hand-side of the operation: [...]; many-to-many matching not allowed: matching labels must be unique on one side

The two duplicate series Prometheus would report were old fileserver and new fileserver versions of the filesystem. What made this mysterious to me was that these two time series never existed at the same time. In every migration, the old time series would end and then a few minutes later the new time series would appear (after the filesystem was migrated, remounted on the machine that generates these metrics, and the metrics generation ran). What it looked like to me was that Prometheus was improperly using a stale metric in alert rule evaluation for some reason.

The explanation ultimately turned out to be very simple. You see, I have simplified the alert expression a little too much, omitting something that I didn't realize made a crucial difference. The real version is more like this:

avg_over_time(cslab_avail_bytes[15m]) <= on (filesystem)
  group_right(fileserver, pool) cslab_alert_minfree

The avg_over_time() and its range expression make all the difference. Even though the old fileserver's time series is stale now, it was non-stale at some time a few minutes in the past, and so the range expression dutifully sweeps it up and avg_over_time produces a result for it, creating a duplicate, and then Prometheus quite correctly errors out.

This applies to all *_over_time() things, of course, and in general to any range vector expression; they will sweep up metrics that were valid at any point over the time range, not just metrics that are valid now (well, at the end of the period). If you want to restrict your range vector or *_over_time() to only metrics that are non-stale at the end of the time period, you need to say so explicitly, by using an 'and' operator:

(avg_over_time(cslab_avail_bytes[15m]) and cslab_avail_bytes) [...]

Depending on your particular circumstances, this could have some subtle effects on such 'migrating' metrics. Early on, the new, non-stale metric will have only a few valid data points within the time range, and I believe that the *_over_time() will happily use only these available data points to compute your average or whatever. That could mean that you are actually averaging over a much smaller time period than you think.

For our purposes, the easier and more straightforward fix is to remove the 'avg_over_time()' and use a 'for: 15m' on the alert rule instead. This doesn't have quite the same effect, but the effects it does have are probably easier to explain to people who are setting and getting low disk space alerts.
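
As a sketch of what that looks like in a Prometheus rule file (the group and alert names here are invented, and this is a simplified version of our real rule):

groups:
  - name: cslab-diskspace
    rules:
      - alert: LowDiskSpace
        expr: |
          cslab_avail_bytes <= on (filesystem)
            group_right(fileserver, pool) cslab_alert_minfree
        for: 15m

With the 'for: 15m', the alert only fires if the plain (non-averaged) expression stays true for fifteen minutes straight, which is not quite the same thing as the fifteen-minute average being low.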

Once I thought about it, it was intuitive that range vectors and things that use them will return values for (currently) stale time series, but it hadn't occurred to me before I realized it. In fact I was so oblivious to the possibility that when I asked the Prometheus mailing list about this, I provided the first, over-simplified version of the alert rule, because I didn't think the avg_over_time() mattered. As we see, it very much does.

PS: I think this means that if you have an alert rule using a range vector (in a *_over_time() or otherwise), and the alert is currently firing, and you deal with it by removing the underlying metric (perhaps it's coming from an outdated source), the alert may continue firing for a while because the range vector will keep picking up the old values of the now-stale metric until they fall outside of its time range. But probably not very many people deal with spurious alerts by stopping monitoring the metric source.

(With us, it can happen if a machine is converted from something we're monitoring into an experimental machine that may have wild stuff done to it. We tend not to monitor experimental machines at all, so our natural reaction is to remove the existing monitoring when we get reminded of the machine's change in purpose.)

PrometheusStaleMetricsOverTime written at 22:52:03; Add Comment

