Some people feel that all permanent SMTP failures are actually temporary
It all started with a routine delivery attempt to my sinkhole SMTP server that I use as a spamtrap:
remote 18.104.22.168:36462 at 2017-07-03 13:30:28
220 This server does not deliver email.
EHLO mail.travelshopnews7.com
[...]
MAIL FROM:<firstname.lastname@example.org>
250 Okay, I'll believe you for now
RCPT TO:<redacted@redacted>
250 Okay, I'll believe you for now
DATA
354 Send away
[...]
. <end of data>
554 Rejected with ID 433b5458d9d3e8a93020aca44406d2ec1d8ba82a
QUIT
221 Goodbye
That ID is the hash of the whole message and its important envelope
information (including the sending IP). So far, so normal, and these
people stood out in a good way by actually
QUITing instead of just
dropping the connection. But then:
remote 22.214.171.124:39084 at 2017-07-03 13:37:10
220 This server does not deliver email.
EHLO mail.travelshopnews7.com
[...]
554 Rejected with ID 433b5458d9d3e8a93020aca44406d2ec1d8ba82a
They re-delivered the exact same message again. And again. And
again. In less than 24 hours (up to July 4th at 9:28 am) they did
21 deliveries, despite getting a permanent refusal after each one.
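As an aside, the rejection ID quoted above is 40 hex digits, which is exactly the size of a SHA1 hash. A minimal sketch of how such an ID might be computed is below; the function name, the exact set of inputs, and the choice of SHA1 are all my assumptions, not anything my server's behavior here documents.

```python
import hashlib

def rejection_id(sending_ip, mail_from, rcpt_tos, message_bytes):
    """Hypothetical sketch: derive a stable ID for a delivery attempt
    by hashing the whole message together with its important envelope
    information (including the sending IP). Identical redeliveries
    then produce an identical ID, which is how repeats show up."""
    h = hashlib.sha1()
    h.update(sending_ip.encode())
    h.update(mail_from.encode())
    # Sort recipients so the ID doesn't depend on RCPT TO ordering.
    for rcpt in sorted(rcpt_tos):
        h.update(rcpt.encode())
    h.update(message_bytes)
    return h.hexdigest()
```

Because the sending IP is part of the hash, the same message from a different IP would get a different ID under this scheme.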
At that point I got tired of logging repeated deliveries for the
same message and put them in a category of earlier blocks:
remote 126.96.36.199:50676 at 2017-07-04 10:38:10
220 This server does not deliver email.
EHLO mail.travelshopnews7.com
[...]
MAIL FROM:<email@example.com>
550 Bad address
RCPT TO:<redacted@redacted>
503 Out of sequence command
[...]
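Notice that the 550 now comes back at MAIL FROM time, before any message body exists, so this kind of early block has to be keyed on something available that early in the conversation. A minimal sketch, assuming a hypothetical key of sending IP plus envelope sender (the real key my server uses isn't shown in these logs):

```python
# Hypothetical sketch of an 'earlier blocks' category: remember a key
# that is available before DATA and refuse repeat deliveries at
# MAIL FROM time instead of accepting the whole message again.
blocked = set()

def block(sending_ip, mail_from):
    """Put a sender into the earlier-blocks category."""
    blocked.add((sending_ip, mail_from))

def mail_from_response(sending_ip, mail_from):
    """Return the SMTP reply for a MAIL FROM command."""
    if (sending_ip, mail_from) in blocked:
        return "550 Bad address"
    return "250 Okay, I'll believe you for now"
```

The payoff is that the server never has to log (or read) the repeated message body again; the conversation dies at the envelope stage.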
You can guess what happened next:
remote 188.8.131.52:34094 at 2017-07-04 11:48:36
220 This server does not deliver email.
EHLO mail.travelshopnews7.com
[...]
MAIL FROM:<firstname.lastname@example.org>
550 Bad address
RCPT TO:<redacted@redacted>
503 Out of sequence command
[...]
They didn't stop there, of course.
remote 184.108.40.206:53824 at 2017-07-13 15:37:31
220 This server does not deliver email.
EHLO mail.travelshopnews7.com
[...]
MAIL FROM:<email@example.com>
550 Bad address
RCPT TO:<redacted@redacted>
503 Out of sequence command
[...]
Out of curiosity I switched things over so that I'd capture their message again and it turns out that they're still sending, although they've now switched over to trying to deliver a different message. Apparently they do have some sort of delivery expiry, presumably based purely on the message's age and totally ignoring SMTP status codes.
(As before they're still re-delivering their new message despite the
DATA permanent rejection; so far, it's been two more deliveries
of the exact same message.)
These people are not completely ignoring SMTP status codes, because they know that they didn't deliver the message so they'll try again. Well, I suppose they could be slamming everyone with dozens or hundreds of copies of every message even when the first copy was successfully delivered, but I don't believe they'd be that bad. This may be an optimistic assumption.
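For contrast, the standard treatment of SMTP reply codes (laid down in RFC 5321) is simple: 2xx means the message was accepted, 4xx is a temporary failure worth retrying later, and 5xx is a permanent failure that should never be retried against the same destination. A well-behaved mailer's decision logic boils down to something like:

```python
def next_action(smtp_code):
    """Classify an SMTP reply code the way a well-behaved mailer
    should: retry only temporary (4xx) failures, and treat 5xx as
    permanent, never redelivering the same message."""
    if 200 <= smtp_code < 300:
        return "delivered"
    if 400 <= smtp_code < 500:
        return "retry later"
    if 500 <= smtp_code < 600:
        return "permanent failure: bounce, do not retry"
    return "protocol error"
```

Under these rules my 554 at end-of-DATA should have ended the matter on the very first attempt, instead of producing 21 redeliveries.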
(Based on what shows up on www.<domain>, they appear to be running something called 'nuevoMailer v.6.5'. The program's website claims that it's 'a self-hosted email marketing software for managing mailing lists, sending email campaigns and following up with autoresponders and triggers'. I expect that their view of 'managing mailing lists' does not include 'respecting SMTP permanent failures' and is more about, say, conveniently importing massive lists of email addresses through a nice web GUI.)
Link: ZFS Storage Overhead
ZFS Storage Overhead (via) is not quite about what you might think. It's not about, say, the overhead added by ZFS's RAIDZ storage (where there are surprises); instead it's about some interesting low level issues of where space disappears to even in very simple pools. The bit about metaslabs was especially interesting to me. It goes well with Matthew Ahrens' classic ZFS RAIDZ stripe width, or: How I Learned to Stop Worrying and Love RAIDZ, which is endlessly linked and cited for very good reasons.
SELinux's problem of keeping up with general Linux development
Fedora 26 was released on Tuesday, so today I did my usual thing of doing a stock install of it in a virtual machine as a test, to see how it looks and so on. Predictable things ensued with SELinux. In the resulting Twitter conversation, I came to a realization:
It seems possible that the rate of change in what programs legitimately do is higher than the rate at which SELinux policies can be fixed.
Most people who talk about SELinux policy problems, myself included, usually implicitly treat developing SELinux policies as a static problem: if only one could understand the program's behavior well enough, one could write a fully correct policy and be done with it, and the only difficulty is that fully understanding program behavior is very hard.
However, this is not actually true. In reality, programs not infrequently change their (legitimate) behavior over time as new versions are developed and released. There are all sorts of ways this can happen; there's new features in the program, changes to how the program itself works, changes in how libraries the program uses work, changes in what libraries the program uses, and so on. When these changes in behavior happen (at whatever level and for whatever reason), the SELinux policies need to be changed to match them in order for things to still work.
In effect, the people developing SELinux policies are in a race with the people developing the actual programs, libraries, and so on. In order to end up with a working set of policies, the SELinux people have to be able to fix them faster than upstream development can break them. It would certainly be nice if the SELinux people could win this race, but I don't think it's at all guaranteed. Certainly, with enough churn in enough projects, you could wind up in a situation where the SELinux people simply can't work fast enough to produce a full set of working policies.
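A toy model (mine, not anything from real policy-development data) makes the race concrete: if upstream changes break policies at a steady rate and developers can only fix so many per release cycle, the backlog of broken policies grows without bound whenever the break rate exceeds the fix rate.

```python
def policy_backlog(break_rate, fix_rate, periods):
    """Toy model of the policy race: each period, upstream changes
    break `break_rate` policies and developers fix up to `fix_rate`
    of the outstanding backlog. Returns the backlog after each
    period; it grows forever when break_rate > fix_rate."""
    backlog, history = 0, []
    for _ in range(periods):
        backlog += break_rate          # new breakage lands
        backlog = max(0, backlog - fix_rate)  # fixes catch up (or not)
        history.append(backlog)
    return history
```

The model is crude, but it captures the asymmetry: the policy developers only ever break even, while any sustained excess of churn over fixing capacity compounds indefinitely.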
As a corollary, this predicts that SELinux should work better in a distribution environment that rigidly limits change in program and library versions than in one that allows relatively wide freedom for changes. If you lock down your release and refuse to change anything unless you absolutely have to, you have a much higher chance of the SELinux policy developers catching up to the (lack of) changes in the rest of the system.
This is a more potentially pessimistic view of SELinux's inherent complexity than I had before. Of course I don't know if SELinux policy development currently is in this kind of race in any important way. It's certainly possible that SELinux policy developers aren't having any problems keeping up with upstream changes, and what's really causing them these problems is the inherent complexity of the job even for a static target.
One answer to this issue is to try to change who does the work. However, for various reasons beyond the scope of this entry, I don't think that having upstreams maintain SELinux policies for their projects is going to work very well even in theory. In practice it's clearly not going to happen (cf) for good reasons. As is traditional in the open source world, the people who care about some issue get to be the ones to do the work to make it happen, and right now SELinux is far from a universal issue.