Downsampling your metrics data is a compromise (what we could call a 'hack')

May 15, 2021

In a comment on my entry on the size of our Prometheus setup, mappu raised an interesting issue:

I've always thought the default infinite-retention of Prometheus' data stores to be ridiculous when coming from RRD-based monitoring solutions like Munin.

It's not useful to keep 15-second resolution indefinitely, RRD is the right thing to do, and it's a shame it's such a weird hack to get that behaviour on top of Prometheus.

I've come to disagree with the idea of downsampling your data by default. Today, if Prometheus offered me the option, I would avoid using it for as long as possible. The core reason is the same reason that statistics should be gathered in raw form instead of as sample-to-sample deltas: you can always down-sample on the fly to go from high-resolution to low-resolution data, but you can never up-sample. Once you discard your high-resolution data, it's gone for good. So by default you should avoid losing data for as long as possible.
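To make 'down-sampling on the fly' concrete, here is a minimal PromQL sketch. The metric is node_exporter's available memory metric, used purely as an illustration; any raw series scraped every 15 seconds would do.

    # A query-time downsampling sketch: avg_over_time() collapses all of the
    # raw 15-second samples in each one-hour window into a single value, so a
    # range query with a one-hour step over a year of raw data plots roughly
    # 8,760 points instead of about two million.
    avg_over_time(node_memory_MemAvailable_bytes[1h])

Evaluated at a coarse step, this gives me the low-resolution view whenever I want it, while the raw data is still there for when I need the detail.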

The two big reasons to use downsampled data are that it doesn't need as much disk space and you don't have to scan and process as much data when you're looking at it. But both of these are operational issues; if you had infinite disk space and infinite processing capacity, neither would matter, and keeping high-resolution data would work just as well. So in the pragmatic world, I believe you should default to keeping your metrics data in its original form until you're forced to change that.

(Of course this is where one side of things notes that metrics data is already downsampled from the original, which is either extremely fine-grained moment-to-moment statistics or observability traces. But everything is a compromise. We use the default Prometheus scrape interval of 15 seconds for our host metrics, not because I thought it through carefully but because it's the default and it seems to work okay for us.)
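For what it's worth, that interval lives in prometheus.yml. Here's a minimal sketch of such a configuration; the job name and target are made up for illustration, not ours:

    global:
      scrape_interval: 15s        # the 15 second interval discussed above
    scrape_configs:
      - job_name: node
        static_configs:
          - targets: ['somehost:9100']   # node_exporter; the hostname is illustrative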

I can't say for certain that we've ever needed our full-resolution data from a year or two ago in order to solve a problem. But at the very least it's reassuring to know that I have that fine-grained data, if only because if I ever need to run a detailed comparison between performance now and performance a year ago, I can be confident that I have just as good data for then as I do for now.

PS: I agree that it would be nice if Prometheus had native support for downsampling data, instead of forcing you into various hacks in order to implement it externally (hacks that I'm not sure work very well if you want to set them up after you've accumulated a lot of historical data). But I think of this as a separate issue from whether you want to downsample by default.
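For the curious, one common shape of those external hacks is recording rules that pre-aggregate raw series into coarser ones, which you then keep locally or pull into a second, long-retention Prometheus (for example via federation). What follows is only a sketch under those assumptions; the rule and metric names are illustrative, not something we actually run:

    groups:
      - name: downsample_5m
        interval: 5m
        rules:
          # One coarse per-instance CPU utilisation series, evaluated every five
          # minutes, that can outlive the raw samples it was computed from.
          - record: instance:node_cpu_utilisation:avg5m
            expr: 1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))

Part of why this is awkward to bolt on later is that rules like this only evaluate going forward; they don't retroactively produce downsampled versions of the historical data you've already accumulated.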


Comments on this page:

By adrien at 2021-05-16 05:08:36:

If you collect personal data, down-sampling can be useful. For instance, you could collect the data and then, after e.g. 30 days, downsample it so that it's no longer personal data, or at least much less sensitive.

If you collect bandwidth graphs for users, you can have per-minute amounts (pretty sensitive data) and, after a while, downsample that to per-day, per-week, or per-month data (far less sensitive).

If you provide VPN services, you can collect the origin IPs of the users (very sensitive) and keep them for at most one month, after which you only keep the number of different IPs used during that time.

I don't have a very good example for pure time-based downsampling offhand, but you can easily see such cases if you think about correlations. If you initially know that X and Y happened at the same time, downsampling loses that precision: instead of having only Y happening at the same time as X, you have all the events of the day. You're no longer able to precisely identify one person; you end up with 100 candidates.

By Simon at 2021-05-17 13:49:46:

As the previous comment already points out, a retention policy of "keep as long as technically possible with moderate effort" is really problematic for data that can be personal data (either directly or through combination with other data sources).

What makes this particularly tricky is that it's often not obvious which data is sensitive. For example, consider recording the temperature in your machine rooms (either directly with dedicated sensors or indirectly, for example through the intake temperatures of your server cooling). At first this seems like a pretty reasonable and uncontroversial idea. But it probably has enough resolution (both per sample and over time) that, depending on your workflows, it allows detecting when people enter and leave your machine rooms. So depending on who needs to enter a machine room and when, and how those people are employed, you suddenly have monitoring of the work activities of your employees. Depending on where you live, such monitoring can be strictly regulated. In that case "we keep the data as long as the hard drives aren't too expensive" isn't a good defense for keeping it ... If, for example, after a few days you only store min/avg/max per day, you probably have no problem.

In your case you at least have full control over your data storage (although it seems Prometheus makes some use cases significantly harder to implement, based on the entry). This "keep it if you can" mentality can be even more tricky if you start buying appliances. For example, I was once at a school that bought an RFID-based access control system for nearly all its doors. One problem was that you couldn't disable the integrated logging. This meant the school was monitoring the activity of its teachers to an extent that wasn't allowed by the applicable law.

@cks: Do you have the same policy for logs (syslog, HTTP access logs, mail logs, etc.)? What about backups (where I would expect that nobody wants to pay to keep data indefinitely, even at quite moderate system scale)?

By roidelapluie at 2021-05-27 16:20:34:

Note: I am a member of the Prometheus team.

In the April dev summit of Prometheus, we decided that we want downsampling in Prometheus. It would be based on what Thanos is doing, and will probably require a small design document first. I would not hold my breath.

We are also planning to have a TTL per metric, so that less important metrics can be deleted earlier (e.g. drop metrics from "env=dev" after 30 days).
