Wandering Thoughts

2017-06-25

One tradeoff in email system design is who holds problematic email

When you design parts of a mail system, for example an SMTP submission server that users will send their email out through or your external MX gateway for inbound email, you often face a choice of whether your systems should accept email aggressively or be conservative and leave email in the hands of the sender. For example, on a submission server should you accept email from users with destination addresses that you know are bad, or should you reject such addresses during the SMTP conversation?

In theory, the SMTP RFCs combined with best practices give you an unambiguous answer; here, the answer would be that clearly the submission server should reject known-bad addresses at SMTP time. In practice things are not so simple; generally you want problematic email handled by the system that can do the best job of dealing with it. For instance, you may be extremely dubious about how well your typical mail client (MUA) will handle things like permanent SMTP rejections on RCPT TO addresses, or temporary deferrals in general. In this case it can make a lot of sense to have the submission machine accept almost everything and sort it out later, sending explicit bounce messages to users if addresses fail. That way at least you know that users will get definite notification that certain addresses failed.
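
As a concrete illustration, the 'accept almost everything and sort it out later' approach can be a very small RCPT ACL on the submission server. Here's a hedged, Exim-flavoured sketch (not our actual configuration), assuming that submission clients authenticate:

# acl_smtp_rcpt on the submission machine: take any destination address
# from an authenticated user and let later processing bounce bad ones.
accept
  authenticated = *

deny
  message = submission requires authentication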

A similar tradeoff applies on your external MX gateway. You could insist on 'cut-through routing', where you don't say 'yes' during the initial SMTP conversation until the mail has been delivered all the way to its eventual destination; if there's a problem at some point, you give a temporary failure and the sender's MTA holds on to the message. Or you could feel it's better for your external MX gateway to hold inbound email when there's some problem with the rest of your mail system, because that way you can strongly control stuff like how fast email is retried and when it times out.
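
For what it's worth, more recent versions of Exim do offer cut-through delivery as an ACL control. A hedged sketch of what asking for it might look like in an MX gateway's RCPT ACL, assuming the local_domains domain list from Exim's stock configuration:

accept
  domains = +local_domains
  verify  = recipient
  # attempt delivery to the final destination before giving the final
  # SMTP OK, so a downstream failure goes back to the sending MTA
  control = cutthrough_delivery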

Our current mail system (which is mostly described here) has generally been biased towards holding the email ourselves. In the case of our user submission machines this was an explicit decision because at the time we felt we didn't trust mail clients enough. Our external MX gateway accepted all valid local destinations for multiple reasons, but a sufficient one is that Exim didn't support 'cut-through routing' at the time so we had no choice. These choices are old ones, and someday we may revisit some of them. For example, perhaps mail clients today have perfectly good handling of permanent failures on RCPT TO addresses.

(An accept, store, and forward model exposes some issues you might want to think about, but that's a separate concern.)

(We haven't attempted to test current mail clients, partly because there are so many of them. 'Accept then bounce' also has the benefit that it's conservative; it works with anything and everything, and we know exactly what users are going to get.)

WhoHoldsEmailTradeoffs written at 01:01:13

2017-06-19

How I'm currently handling the mailing lists I read

I recently mentioned that I was going to keep filtering aside email from the few mailing lists that I'm subscribed to, instead of returning it to being routed straight into my inbox. While I've kept to my decision, I've had to spend some time fiddling around with just how I was implementing it in order to get a system that works for me in practice.

What I did during my vacation (call it the vacation approach) was to use procmail recipes to put each mailing list into a file. I'm already using procmail, and in fact I was already recognizing mailing lists (to ensure they didn't get trapped by anti-spam stuff), so this was a simple change:

:0:
* ^From somelist-owner@...
lists/somelist

This worked great during my vacation, when I basically didn't want to pay attention to mailing lists at all, but once I came back to work I found that filing things away this way made them too annoying to deal with in my mail environment. Because MH doesn't deal directly with mbox format files, I needed to go through a whole dance with inc, rescanning my inbox, and various other things. It was clear that this wasn't the right way to go. If I wanted it to be convenient to read this email (and I did), incoming mailing list messages had to wind up in MH folders. Fortunately, procmail can do this if you specify '/some/directory/.' as the destination (the '/.' is the magic). So:

:0:
* ^From somelist-owner@...
/u/cks/Mail/inbox/somelist/.

(This is not quite a complete implementation, because it doesn't do things like update MH's unseen sequence for the folder. If you want these things, you need to pipe messages to rcvstore instead. In my case, I actually prefer not having an unseen sequence maintained for these folders, for various reasons.)
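
For completeness, the rcvstore variant is only slightly more involved. A sketch, with the caveat that rcvstore's location varies between MH and nmh packagings (the path here is an assumption):

# the 'w' makes procmail wait for rcvstore and use its exit status
:0 w
* ^From somelist-owner@...
| /usr/lib/nmh/rcvstore +inbox/somelist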

The procmail stuff worked, but I rapidly found that I wanted some way to know which of these mailing list folders actually had pending messages in them. So I wrote a little command which I'm calling 'mlists'. It goes through my .procmailrc to find all of the MH destinations, then uses ls to count how many message files there are and reports the whole thing as:

:; mlists
+inbox/somelist: 3
:;
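
For illustration, a minimal sketch of this sort of script, assuming that all the relevant destinations in my .procmailrc are absolute paths under /u/cks/Mail that end in '/.' (the real mlists isn't necessarily this simple):

#!/bin/sh
# mlists sketch: report MH folders from .procmailrc with pending messages.
# Assumes delivery destinations look like /u/cks/Mail/<folder>/. and that
# messages are the plain numbered files MH keeps in each folder directory.
MHDIR=/u/cks/Mail
for f in $(sed -n "s;^$MHDIR/\(.*\)/\.\$;\1;p" "$HOME/.procmailrc"); do
    n=$(ls "$MHDIR/$f" 2>/dev/null | grep -c '^[0-9][0-9]*$')
    [ "$n" -gt 0 ] && echo "+$f: $n"
done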

If there are enough accumulated messages to make looking at the folder worthwhile, I can then apply standard MH tools to do so (either from home with standard command line MH commands, or with exmh at work).

It's early days with this setup, but so far I feel satisfied. The filtering and filing stuff works and the information mlists provides is enough to be useful but sufficiently minimal to push me away from actually looking at the mailing lists for much of the time, which is basically the result that I want.

PS: there's probably a way to assemble standard MH commands to give you a count of how many messages are in a folder. I used ls because I couldn't be bothered to read through MH manpages to work out the MH way of doing it, and MH's simple storage format makes this kind of thing easy.
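
For what it's worth, the obvious MH-native candidates are probably the folder command, which reports how many messages a folder has, or simply counting scan's output; a hedged sketch:

# 'folder' prints a summary along the lines of 'inbox/somelist+ has 3 messages (1-3)'
folder +inbox/somelist
# or, more bluntly, count the lines scan produces
scan +inbox/somelist | wc -l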

MailingListsHandling-2017-06 written at 00:17:57

2017-06-11

Why filing away mailing lists for a while has improved my life

I've been on vacation for the past little while. As part of this vacation, I carried out my plans to improve my vacations, part of which was using procmail to divert messages from various mailing lists off to files instead of having them delivered to my inbox as I usually do. I started out only doing this to mailing lists for work software, like Exim and OmniOS, but as my vacation went on I added the mailing lists for other things that I use. As I hoped and expected, this worked out quite well; I soon got over my urge to check in on the mailing lists and mostly ignored them.

Recently I came to a realization about why this feels so good. It's not specifically that it's reduced the volume of email in my inbox; instead, the really important thing it's done is that right now, pretty much everything that shows up in my inbox is actually important to me. It's email from friends and family, notifications that I care about getting, and so on.

(Coming to this realization and writing it up has sharpened my awareness that some of the remaining email going to my inbox doesn't make this bar, and thus should also be filed away on future breaks and vacations.)

There's nothing wrong with the emails from those mailing lists. They're generally perfectly interesting. But right now (and in general) the mailing list email is not important in that way. It's not something that I care about. When it all was going into my inbox, a significant amount of my inbox was stuff that I didn't really care about. That doesn't feel good (and has other effects). Now my inbox is very pared down; it's either silent and empty, or the new email is something that I actively want to read because it matters to me.

(In other words, it's not just that processing my inbox is faster now, it's that the payoff from doing so is much higher. And when there is no payoff, there's no email.)

If I'm being honest about these mailing lists, most of this is going to be true even when I go back to work tomorrow morning. Sure, if I've just asked a question or gotten into a conversation, reading the mailing list immediately usually has a relatively high payoff. But at other times, the payoff is much lower and having the mailing lists go straight to my inbox is just giving me a slow drizzle of low-priority, low-payoff email that I wind up having to pay some attention to.

In fact I think a drizzle is a good analogy here. Like the moment to moment experience of biking in a light drizzle, the individual emails are not particularly onerous or bad. But the cumulative result of staying out in that light drizzle is that you quietly wind up soaked, bit by bit by bit. So I think it's time for me to get out of the email drizzle for a while, at least to see what it's like on an ongoing basis.

(I intend to still read these mailing list emails periodically, but I'm going to do it in big batches and at a time of my choosing. Over a coffee at the end (or start) of a day at work, perhaps. I'll have to see.)

EmailGettingOutOfTheDrizzle written at 23:21:51

2017-06-09

My .procmailrc has quietly sort of turned into a swamp

As part of trying to not read some mailing lists for a while, I was recently going through my .procmailrc. Doing this was eye-opening. It's not that my .procmailrc was messy as such, because I don't have rules that are sophisticated enough to get messy (just a bunch of 'if mail is like <X>, put it into file Y' filtering rules). Instead, mostly what it had was a whole lot of old, obsolete rules that haven't been relevant for years.

Looking back, it's easy to see how these have quietly accreted over time. Like many configuration files, I almost never look at my .procmailrc globally, scanning through the whole thing. Instead, when I have a new filtering rule I want to add, I jump to what seems to be the right place (often the bottom) and edit the new rule in. If I notice in passing what might be an obsolete filtering rule for a type of email that I don't get any more, usually I ignore it, because investigating is more work and wasn't part of my goal when I did 'vi .procmailrc'.

(The other thing that a strictly local view of changes has done to my .procmailrc is create a somewhat awkward structure for the sequence of rules. This resulted in some semi-duplication of rules and one bit of recent misclassification, when I got the ordering of two rules wrong because I didn't even realize there was an ordering dependency.)

As a result of stubbing my toe on this, I now have two issues (or problems) I'm considering. The first is what to do about those obsolete rules. Some of them are completely dead and can just be deleted, but others are for things that just might come back to life, even if it hasn't happened for years. There is a part of me that wants to preserve those rules somewhere, just in case I want them again some day. This is probably foolish. Perhaps what I should do is save a backup copy somewhere (or just check my .procmailrc into RCS first).

The second is how much of a revision to do. Having now actively looked at the various things I'm doing and want to do in my .procmailrc, there's a temptation to completely restructure it by splitting my rules into multiple files and then including them in the right spots. This would make where to put new rules clearer to future me, make the overall structure much clearer, and make it simpler to do global things like temporarily divert almost all the mailing lists I get off to files (all those rules would be in one file, so I'd either include it or not include it in my .procmailrc). On the other hand, grand reforms are arguably second system enthusiasm showing. It might be that I'd spend a bunch of time fiddling around with my mail filtering and wind up with a much more complicated, harder to follow setup that basically did the same thing.
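
For concreteness, the basic structure I have in mind would use procmail's INCLUDERC to turn my .procmailrc into mostly a table of contents; a sketch (the file names here are made up for illustration):

# each group of rules lives in its own file and is pulled in here
INCLUDERC=$HOME/.procmail/antispam.rc
# include or comment out this line to switch the 'divert mailing lists
# to files' rules on and off as a block
INCLUDERC=$HOME/.procmail/mailinglists-away.rc
INCLUDERC=$HOME/.procmail/everything-else.rc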

(Over-engineering in a fit of enthusiasm is a real hazard.)

PS: applications to other configuration files you might have lying around are left as an exercise for the reader, but I'm certainly suspecting that this is not the only file I have (or that we have) that exhibits this 'maintained locally but not globally' slow, quiet drift into a swamp.

ProcmailrcSwamp written at 01:19:42

2017-06-08

In practice, putting SSDs into 3.5" drive bays is a big hassle

When I talked about how we failed at making all our servers have SSD system disks, I briefly talked about how one issue was that SSDs are not necessarily easily compatible with 3.5" drive bays. If you have never encountered this issue, you may be scratching your head, because basic spacers to let you put 2.5" drives (SSDs included) into 3.5" drive bays are widely available and generally dirt cheap. Sure, you have to screw some extra things on your SSDs, but unless you're working at a much bigger scale than we are, this doesn't really matter.

The problem is that this doesn't always work in servers, depending on how their drive bays work. The fundamental issue is that a 3.5" SATA HD has its power and SATA connectors at the bottom left edge of the drive, as does a 2.5" SSD, and a simple set of spacers can't position the SSD so that both the connectors and the screw holes line up where they need to be. In servers where you manually insert the SATA and power cables and the provided cables are long enough, you can stretch things to make simple spacers work. In servers with exact-length cables or with hot-swap bays that you slide drives into (either with or without carriers), simple spacers don't work and you need much more expensive sleds (such as IcyDock's).

(Sleds are not very expensive in small quantities, but if you're looking at giving a bunch of servers dual SSD system disks and you're planning to use inexpensive SSDs, adding a $15 part to each disk adds up fast.)

We sort of knew about this issue when we started, but we thought it wasn't going to be a big deal. We were wrong. It adds cost and just as important, it adds friction; it's an extra part to figure out, to qualify, to stock, and to reorder when you start running low. You can't just grab a SSD or two and stick them in a random server, even if you have the SSDs; you have to figure out what you need to get the SSDs mounted, perhaps see if you have one or two sleds left, and so on and so forth.

The upshot of all of this is that we're now highly motivated to get 2.5" drive bays in our next set of basic 1U servers, at least for servers with only two drive bays. As a pleasant side benefit, this would basically give us no choice but to use SSDs in these servers, since we don't have any random old 2.5" HDs and we're unlikely to buy new 2.5" HDs.

(Sadly, this issue is basically forced by the constraints of 3.5" and 2.5" HDs. The power and SATA connectors are at the edge of each because that's where the circuit board goes, and it goes on the outside of the drive in order to leave as much space as possible for the platters, the motors, and so on.)

SSDIn3.5DriveBayProblem written at 02:44:37

2017-05-15

How we failed at making all our servers have SSD system disks

Several years ago I wrote an entry about why we're switching to SSDs for system disks, yet the other day there I was, writing about how we recycle old disks to be system disks and about maybe switching to fixed size root filesystems to deal with some issues there. A reasonable person might wonder what happened between point A and point B. What happened is not any of the problems that I thought might happen; instead it is a story of good intentions meeting rational but unfortunate decisions.

The first thing that happened was that we didn't commit whole-heartedly to this idea. Instead we decided that even inexpensive SSDs were still costly enough that we wouldn't use them on 'less important' machines; instead we'd reuse older hard drives on some machines. This opened a straightforward wedge in our plans, because now we had to decide if a machine was important enough for SSDs and we could always persuade ourselves that the answer was 'no'.

(It would have been one thing if we'd said 'experimental scratch machines get old HDs', but we opened it up to 'less important production machines'.)

Our next step was that we didn't buy (and keep buying) enough SSDs to always clearly have plenty of SSDs in stock. The problem here is straightforward; if you want to make something pervasive in the servers that you set up, you need to make it pervasive on your stock shelf, and you need to establish the principle that you're always going to have more. This holds just as true for SSDs for us as it does for RAM; once we had a limited supply, we had an extra reason to ration it, and we'd already created our initial excuse when we decided that some servers could get HDs instead of SSDs.

Then as SSD stocks dwindled below a critical point, we had the obvious reaction of deciding that more and more machines weren't important enough to get SSDs as their system disks. This was never actively planned and decided on (and if it had been planned, we might have ordered more SSDs). Instead it happened bit by bit; if I was setting out to set up a server and we had only (say) four SSDs left, I had to decide on the spot whether my server was that important. It was easy to talk myself into saying 'I guess not, this can live with HDs', because I had to make a decision right then in order to keep moving forward on putting the server together.

(Had we sat down to plan out, say, our next four or five servers that we were going to build and talked about which ones were and weren't important, we might well have ordered more SSDs, because the global situation would have been clearer and we would have been doing this further in advance. On-the-spot decision making tends to focus on the short term and the immediate perspective, instead of the long term and the global one.)

At this point we have probably flipped over to a view that HDs are the default on new or replacement servers and a server has to strike us as relatively special to get SSDs. This is pretty much the inverse of where we started out, although arguably it's a rational and possibly even correct response to budget pressures and so on. In other words, maybe our initial plan was always over-ambitious for the realities of our environment. It did help, because we got SSDs into some important servers and thus we've probably made them less likely to have disk failures.

A contributing factor is that it turned out to be surprisingly annoying to put SSDs in the 3.5" drive bays in a number of our servers, especially Dell R310s, because they have strict alignment requirements for the SATA and power connectors, and garden variety SSD 2.5" to 3.5" adaptors don't put the SSDs at the right place for this. Getting SSDs into such machines required extra special hardware; this added extra hassle, extra parts to keep in stock, and extra cost.

(This entry is partly in the spirit of retrospectives.)

SSDSystemDisksFailure written at 01:23:56

2017-05-08

Some things I've decided to do to improve my breaks and vacations

About a year ago I wrote about my email mistake of mingling personal and work email together and how it made taking breaks from work rather harder. It may not surprise you to hear that I have done nothing to remedy that situation since then. Splitting out email is a slog and I'm probably never going to get around to it. However, there are a couple of cheap tricks that I've decided to do for breaks and vacations (in fact I decided on them last year, but never got around to either writing about them or properly implementing them).

There are a number of public mailing lists for things like Exim and OmniOS that I participate in. I broadly like reading them, learning from them, and perhaps even helping people on them, but at the same time they're not about software that I've got a deep personal interest in; I'm primarily on those mailing lists because we use all of these things at work. What I've found in the past is that these mailing lists feed me a constant drip of email traffic that I'm just not that interested in during breaks; after a while it becomes an annoyance to slog through. So now I am going to procmail away all of the traffic from those mailing lists for the duration of any break. Maybe I'll miss the opportunity to help someone, but it's worth it to stop distracting myself. All of that stuff can wait until I'm back in the office.

(I may also do this for some mailing lists for software I use personally. For example, I'm not sure that I need to be keeping up on the latest mail about my RAW processor if I'm trying to take a break from things.)

The other cheap trick is simple. I have a $HOME/adm directory full of various scripts I use to monitor and check in on things about our systems, and one of my fidgets is to run some of them just because. So I'm going to make that directory inaccessible when I'm taking a break by just doing 'chmod 055 $HOME/adm' (055 so that my co-workers can keep using these scripts if they want to). This isn't exactly a big obstacle I've put in my way; I can un-chmod the directory if I want to. But it's enough of a roadblock to hopefully break my habit of reflexively checking things, which is both a distraction and a temptation to get pulled into looking more deeply at anything I spot.

It's going to feel oddly quiet and empty to not have these email messages coming in and these fidgets around, but I think it's going to be good for me. If nothing else, it's going to be different and that's not a bad thing.

(Completely disconnecting from work would be ideal but it's not possible while my email remains entangled and, as mentioned, I still don't feel energetic enough to tackle everything involved in starting to fix that.)

HacksForBetterBreaks written at 21:43:35

2017-04-14

Sometimes laziness doesn't pay off

My office workstation has been throwing out complaints about some of its disks for some time, which I have been quietly clearing up rather than replace the disks. This isn't because these are generally good disks; in fact they're some Seagate 1TB drives which we haven't had the best of luck with. I was just some combination of too lazy to tear my office machine apart to replace a drive and too parsimonious with hardware to replace a disk drive before it failed.

(Working in a place with essentially no hardware budget is a great way to pick up this reflex of hardware parsimony. Note that I do not necessarily claim that it's a good reflex, and in some ways it's a quite inefficient one.)

Recently things got a bit more extreme, when one disk went from nothing to suddenly reporting 92 new 'Offline uncorrectable sector' errors (along with 'Currently unreadable (pending) sectors', which seems to travel with offline uncorrectable sectors). I looked at this, thought that maybe this was a sign that I should replace the disk, but then decided to be lazy; rather than go through the hassle of a disk replacement, I cleared all the errors in the usual way. Sure, the disk was probably going to fail, but it's in a mirror and when it actually did fail I could swap it out right away.

(I actually have a pair of disks sitting on my desk just waiting to be swapped in in place of the current pair. I think I've had them set aside for this for about a year.)

Well, talking about that idea, let's go to Twitter:

@thatcks: I guess I really should have just replaced that drive in my office workstation when it reported 92 Offline uncorrectable sectors.

@thatcks: At 5:15pm, I'm just going to hope that the other side of the mirrored pair survives until Monday. (And insure I have pretty full backups.)

Yeah, I had kind of been assuming that the disk would fail at some convenient time, like during a workday when I wasn't doing anything important. There are probably worse times for my drive to fail than right in front of me at 5:15 pm immediately before a long weekend, especially when I have a bike ride that evening that I want to go to, but I can't think of many that are more annoying.

(The annoyance is in knowing that I could swap the drive on the spot, if I was willing to miss the bike ride. I picked the bike ride, and a long weekend is just short enough that I'm not going to come in in the middle of it to swap the drive.)

I have somewhat of a habit of being lazy about this sort of thing. Usually I get away with it, which of course only encourages me to keep on being lazy and do it more. Then some day things blow up in my face, because laziness doesn't always pay off. I need to be better about getting myself to do those annoying tasks sooner rather than later, instead of putting them off until I have no choice about it.

(At the same time, strategic laziness is important, so important that it can be called 'prioritization'. You usually can't possibly give everything complete attention, time, and resources, so you need to know when to cut some nominal corners. This shows up especially in security, because there are usually an infinite number of things that you could be doing to make your environment just a bit more secure. You have to stop somewhere.)

LazinessSometimesBackfires written at 00:40:00

2017-04-12

Generating good modern self-signed TLS certificates in today's world

Once upon a time, generating decently good self-signed certificates for a host with OpenSSL was reasonably straightforward, especially if you didn't know about some relevant nominal standards. The certificate's Subject name field is a standard field with standard components, so OpenSSL would prompt you for all of them, including the Common Name (CN) that you'd put the hostname in. Then things changed, and in modern TLS you really want to put the hostname in the Subject Alternative Name field. SubjectAltName is an extension, and because it's an extension 'openssl req' will not prompt you to fill it in.

(The other thing is that you need to remember to specify -sha256 as one of the arguments; otherwise 'openssl req' will use SHA1 and various things will be unhappy with your certificate. Not all examples you can find on the Internet use '-sha256', so watch out.)

You can get 'openssl req' to create a self-signed certificate with a SAN, but since OpenSSL won't prompt for this you must use an OpenSSL configuration file to specify everything about the certificate, including the hostname(s). This is somewhat intricate, even if it turns out to be possible to do this more or less through the command line with suitably complicated incantations. I particularly admire the use of the shell's usually obscure '<(...)' idiom.
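
For reference, that sort of incantation looks roughly like this (a sketch; the path to the stock openssl.cnf varies by system, and the appended section exists only to supply the subjectAltName):

openssl req -x509 -new -nodes -sha256 -newkey rsa:2048 -days 730 \
  -keyout key.pem -out cert.pem \
  -subj '/CN=www.example.com' \
  -extensions SAN \
  -config <(cat /etc/ssl/openssl.cnf; \
            printf '[SAN]\nsubjectAltName=DNS:www.example.com\n')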

Given how painful this is, what we really need is a better tool to create self-signed certificates and fortunately for me, it turns out that there is just what I need sitting around in the Go source code as generate_cert.go. Grab this file, copy it to a directory, then:

$ go build generate_cert.go
$ ./generate_cert --host www.example.com --duration 17520h
2017/04/11 23:51:21 written cert.pem
2017/04/11 23:51:21 written key.pem

This generates exactly the sort of modern self-signed certificate that I want; it uses SHA256, it has a 2048-bit RSA key (by default), and it's got SubjectAltName(s). You can use it to generate ECDSA based certificates if you're feeling bold.

Note that this generates a certificate without a CN. Since there are real CN-less certificates out there in the wild issued by real Certificate Authorities (including the one for this site), not having a CN should work fine with web browsers and most software, but you may run into some software that is unhappy with this. If so, it's only a small modification to add a CN value.
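
If you do need a CN, a self-contained sketch of roughly what's involved looks like this; it's a much abbreviated cousin of generate_cert.go (which only sets an Organization in its Subject), the hostname is a placeholder, and error handling is minimal:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	// Placeholder hostname; substitute your own.
	host := "www.example.com"

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		// Setting a CommonName here is the 'small modification'.
		Subject:   pkix.Name{CommonName: host},
		DNSNames:  []string{host},
		NotBefore: time.Now(),
		NotAfter:  time.Now().Add(2 * 365 * 24 * time.Hour),

		KeyUsage:              x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:           []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		BasicConstraintsValid: true,
	}

	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}

	cf, err := os.Create("cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(cf, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	cf.Close()

	kf, err := os.Create("key.pem")
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(kf, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	kf.Close()
}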

(You could make a rather more elaborate version of generate_cert.go with various additional useful options, and perhaps someone has already done so. I have so far resisted the temptation to start changing it myself.)

A rather more elaborate but more complete looking alternative is Cloudflare's CFSSL toolkit. CFSSL can generate self-signed certificates, good modern CSRs, and sign certificates with your own private CA certificate, which covers everything I can think of. But it has the drawback that you need to feed it JSON (even if you generate the JSON on the fly) and then turn its JSON output into regular .pem files with one of its included programs.

For basic, more or less junk self-signed certificates, generate_cert is the simple way to go. For instance, my sinkhole SMTP server now uses one of these certs; SMTP senders don't care about details like good O values in your SMTP certificates, and even if they did in general, spammers probably don't. If I was generating more proper self-signed certificates, ones where people might see them in a browser or something, I would probably use CFSSL.

(Although if I only needed certificates with a constant Subject name, the lazy way to go would be to hardcode everything in a version of generate_cert and then just crank out a mass of self-signed certificates without having to deal with JSON.)

PS: We might someday want self-signed certificates with relatively proper O values and so on, for purely internal hosts that live in our own internal DNS zones. Updated TLS certificates for IPMI web interfaces are one potential case that comes to mind.

PPS: It's entirely possible that there's a better command line tool for this out there that I haven't stumbled over yet. Certainly this feels like a wheel that people must have reinvented several times; I almost started writing something myself before finding generate_cert.

MakingModernSelfSignedSSLCerts written at 00:23:59

2017-04-08

Doing things the clever way in Exim ACLs by exploiting ACL message variables

Someone recently brought a problem to the Exim mailing list where, as we originally understood it, they wanted to reject messages at SMTP time if they had a certain sender, went to certain recipients, and had specific text in their Subject:. This is actually a little bit difficult to do straightforwardly in Exim because of the recipients condition.

In order to check the Subject: header, your ACL condition must run in the DATA phase (which is the earliest that the message headers are available). If you don't need to check the recipients, this is straightforward and you get something like this:

deny
   senders = <address list>
   condition = ${if match{$h_subject:}{Commit}}
   message = Prohibited commit message

The problem is in matching against the recipients. By the DATA phase there may be multiple recipients, so Exim doesn't offer any simple condition to match against them (the recipients ACL condition is valid only in the RCPT TO ACL, although Exim's current documentation doesn't make this clear). Exim exposes the entire accepted recipients list as $recipients, but you have to write a matching expression for this yourself and it's not completely trivial.

Fortunately there is a straightforward way around this: we can do our matching in stages and then accumulate and communicate our match results through ACL message variables. So if we want to match recipient addresses, we do that in the RCPT TO ACL in a warn ACL stanza whose only purpose is providing us a place to set an ACL variable:

warn
  recipients = <address list>
  set acl_m0_nocommit = 1

(After all, it's easy to match the recipient address against things in the RCPT TO ACL, because that's a large part of its purpose.)

Then in our DATA phase ACL we can easily match against $acl_m0_nocommit being set to 1. If we're being extra-careful we'll explicitly set $acl_m0_nocommit to 0 in our MAIL FROM ACL, although in practice you'll probably never run into a case where this matters.
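
Put together with the earlier Subject: check, the DATA-time stanza can then look something like this sketch:

deny
   senders = <address list>
   condition = ${if eq{$acl_m0_nocommit}{1}}
   condition = ${if match{$h_subject:}{Commit}}
   message = Prohibited commit message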

Another example of communicating things from RCPT TO to DATA ACLs is in how we do milter-based spam rejection. Because DATA time rejection applies to all recipients and not all of our users have opted in to the same level of server side spam filtering, we accumulate a list of everyone's spam rejection level in the RCPT TO ACLs, then work out the minimum level in the DATA ACLs. This is discussed in somewhat more detail in the sidebar here.

In general ACL message variables can be used for all sorts of communication across ACL stanzas, both between different ACLs and even within the same ACL. As I sort of mentioned in how we do MIME attachment type logging with Exim, our rejection of certain sorts of attachments is done by recording the attachment type information into an ACL message variable and then reusing it repeatedly in later stanzas. So we have something like this:

warn
  # exists just to set our ACL variable
  [...]
  set acl_m1_astatus = ${run [...]}

deny
  condition = ${if match{$acl_m1_astatus} {\N (zip|rar) exts:.* .(exe|js|wsf)\N} }
  message = ....

deny
  condition = ${if match{$acl_m1_astatus} {\N MIME file ext: .(exe|js|bat|com)\N} }
  message = ....

deny
  condition = ${if match{$acl_m1_astatus} {\N; zip exts: .zip; inner zip exts: .doc\N} }
  message = ....

[...]

(Note that these conditions are simplified and shortened from our real versions.)

None of this is surprising. Exim's ACL message variables are variables, and so you can use them for communicating between different chunks of code just as you do in any other programming language. You just have to think of Exim ACLs and ACL stanzas as being a programming language and thus being something that you can write code in. Admittedly it's a peculiar programming language, but then much of Exim is this way.

EximMultiStageACLMatching written at 23:41:04
