Using Prometheus subqueries to do calculations over time ranges

February 27, 2019

Subqueries are a new feature in Prometheus 2.7. Their usual use is to nest time range queries, such as a max_over_time of a rate, as covered in, for example, Brian Brazil's How much of the time is my network usage over a certain amount?. However, they can be used in another, perhaps less obvious way. Put simply, subqueries let you use time-based aggregation on expressions.
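
For instance, the 'max_over_time of a rate' pattern looks something like this (a sketch only; the node_exporter metric name and the two time ranges here are just illustrative):

max_over_time( rate(node_network_transmit_bytes_total[5m]) [1h:] )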

Suppose, for example, that you are collecting basic NTP information from your NTP servers, including their current time and the time at which they last set their clock. As an instant query, the current amount of time since a server last set its clock is:

sntp_time_seconds - sntp_clockset_seconds

You can graph this instant query over time to get a nice picture of how frequently the server resets its time. However, now suppose we want to know the maximum amount of time that a server has gone between clock updates over the past week. If we had a single metric for this, it would be straightforward:

max_over_time( sntp_clock_age_seconds [1w] )

However, we don't. Before subqueries, working this out was impossible; you couldn't put an expression inside max_over_time, and the best we could do was graph our instant query and eyeball where the top of the graph fell. But with subqueries, we can now do calculations inside max_over_time:

max_over_time ( (sntp_time_seconds - sntp_clockset_seconds) [1w:] )

(You have to put the ':' into the time range to mark it as a subquery; it's required by the syntax. I find this a little bit annoying since it can't be anything but a subquery here.)
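
The part after the ':' is the subquery's optional resolution, ie how often the inner expression is evaluated; leave it out, as I did above, and Prometheus falls back to its global evaluation interval. The same query evaluated at an explicit one-minute resolution would look like this:

max_over_time ( (sntp_time_seconds - sntp_clockset_seconds) [1w:1m] )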

And so when I wrote yesterday's entry about ntpdate's surprising restriction on what it will sync to, I could confidently talk about how our three different NTP daemons seem to have three different types of behavior (which was something that wasn't clear at all from the graphs).

(The mention of subqueries in Querying basics sort of implies that they work on arbitrary expressions, by talking about a subquery starting from an 'instant query'.)

PS: Somewhat to my surprise, Prometheus lets you do an instant query where the result is a range vector, eg 'metric[10m]'. For a simple metric range vector, the results you get back are the values at the various timestamps where the metric was scraped. This is actually useful because the timestamps themselves (and how many results you get for a given time range) give you the true scrape frequency for the metric, which is not otherwise available. If you ask for a '[15m]' of a metric that is only scraped once every five minutes, you only get three time points in the answer; if it's scraped every minute, you get fifteen.

(This works both in the web interface and in the underlying HTTP API. In the web interface you get both values and timestamps displayed in the console tab, but you unsurprisingly can't graph the result. In the API, you get a JSON values array instead of the usual single value.)
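
As a quick sketch of this, assuming sntp_time_seconds is scraped once a minute, the raw range vector query is simply:

sntp_time_seconds[15m]

If all you care about is how many samples fell into the range rather than their timestamps, count_over_time will count them for you:

count_over_time( sntp_time_seconds[15m] )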
