There is such a thing as too much SQL (a reminder to myself)
This is a very small war story.
I've spent part of today working on generating some reports from our IP traffic accounting system. We have multiple internal networks that get NAT'd out different gateways, and what we want is a report (or a set of reports) of the top individual traffic sources for each gateway that did more than a certain amount of aggregate traffic in the day.
Our traffic accounting data is held in a PostgreSQL database. Since a typical day has around six to eight million records (and this is after a record aggregation process), making efficient SQL queries for this reporting is relatively important; even if we're only going to run it once a day, it would be nice if it didn't take ages and bog the database down.
Because of how we collect the raw traffic data, our traffic monitoring system only has the traffic's internal IP, without explicit information on which NAT gateway the traffic used. The internal IP statically determines the outgoing gateway, but in a complex way; there are subnets of some networks that go out different gateways than their containing network (e.g. most of a /16 goes out one way, but a /24 in the middle uses another gateway IP), and there are some machines that are assigned individual public IPs. This complexity means that attributing traffic to the right gateway is one of the hard parts of generating this report.
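To make the attribution rules concrete, here's a minimal sketch of most-specific (longest-prefix) CIDR matching in Python; the netblocks, gateway names, and the use of the standard ipaddress module are all my illustration, not our actual setup:

```python
import ipaddress

# Hypothetical gateway map.  The /24 is nested inside the /16 but uses a
# different gateway, so a plain "is the IP in this netblock" test is not
# enough: we have to pick the most specific (longest-prefix) match.
GATEWAY_NETS = [
    (ipaddress.ip_network("10.1.0.0/16"), "gw-a"),
    (ipaddress.ip_network("10.1.5.0/24"), "gw-b"),
]

# Individual machines with their own public IPs override any netblock.
HOST_OVERRIDES = {
    ipaddress.ip_address("10.1.9.9"): "gw-c",
}

def gateway_for(ip_str):
    """Attribute one internal IP to its outgoing gateway (or None)."""
    ip = ipaddress.ip_address(ip_str)
    if ip in HOST_OVERRIDES:
        return HOST_OVERRIDES[ip]
    # The most specific match is the matching network with the longest
    # prefix; max() on (prefixlen, gateway) tuples picks it out.
    matches = [(net.prefixlen, gw) for net, gw in GATEWAY_NETS if ip in net]
    if not matches:
        return None
    return max(matches)[1]
```

With this table, 10.1.2.3 attributes to gw-a, 10.1.5.7 falls into the nested /24 and attributes to gw-b, and the individually-assigned 10.1.9.9 goes to gw-c.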
It's possible to do this in SQL and I spent today working out how to do it (with reasonable efficiency); I needed some mapping tables, one with CIDR netblocks and one with the exceptional machines, and some NOT IN work. Then I ran into the subnet challenge. As I was working out how to split the remaining IP address ranges of the overall network with the subnet into multiple CIDR netblocks, I came to my senses.
A quick ad-hoc query showed me that we have fewer than a thousand different internal IP addresses generating traffic on any given day. Yes, I could do the gateway attribution in SQL, but doing it in SQL makes no sense; with so few internal IP addresses to process it's overkill and almost insanity. The right approach to my problem is to use SQL only for the heavy work of data reduction, generating a 'volume by internal IP' report, and then to do the complex work of attributing each internal IP to a particular gateway in another program, in an environment where I have the full power of a procedural programming language (and where I'm not potentially running the attribution process seven million times but instead only a few hundred times).
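The second stage of that approach can be sketched in a few lines of Python. Everything here is illustrative: the rows would come from a 'volume by internal IP' GROUP BY query, the threshold is made up, and gateway_for() stands in for whatever does the per-IP attribution:

```python
from collections import defaultdict

def top_talkers(rows, gateway_for, threshold=10 * 1024**3):
    """Build a per-gateway top-talkers report from reduced SQL output.

    rows: (internal_ip, total_bytes) pairs, i.e. the result of a
    'SELECT ip, sum(bytes) ... GROUP BY ip' style data-reduction query.
    This loop runs a few hundred times, not seven million.
    """
    by_gateway = defaultdict(list)
    for ip, nbytes in rows:
        gw = gateway_for(ip)
        if gw is not None and nbytes >= threshold:
            by_gateway[gw].append((nbytes, ip))
    # Biggest traffic sources first within each gateway's report.
    return {gw: sorted(v, reverse=True) for gw, v in by_gateway.items()}
```

The point of the split is visible in the shapes: the database grinds millions of records down to a few hundred (ip, bytes) rows, and the procedural side gets to express the messy attribution logic without contorting it into SQL.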
(As a bonus I'm going to do one less volume-aggregation query for reasons that don't fit in this entry's margin. And I no longer need those auxiliary mapping tables, which means that we don't have to maintain and update them.)
As I've noted to myself before, SQL is not the answer to everything. Today was a sharp reminder of this, although fortunately it was not all that painful a one. (Still, I'm going to mourn my nice SQL query a bit.)
PS: if there is a fast, efficient way to get PostgreSQL to tell you the most specific CIDR match for an IP address out of a table of CIDRs (in the context of an overall SELECT involving lots of different IP addresses), please don't tell me what it is. I don't need the temptation.
(Okay, I'd be interested to know what it is, but I'm always curious about things like that.)