Wandering Thoughts archives

2009-10-22

The limits of some anti-spam precautions

In some quarters it is quite popular to do things like refuse email if the sending machine doesn't have valid reverse DNS or doesn't use a valid domain name in EHLO (or HELO). It's also popular to tell people that everyone should do this, for various reasons.

(Sometimes it's even popular to grumble about how all of the laxness of mailers about this sort of stuff has helped enable the spam epidemic.)
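For concreteness, here is a rough Python sketch of what such checks amount to on the receiving side; the function names and the exact policy are my own illustration, not any particular MTA's code:

    import socket

    def has_valid_reverse_dns(client_ip):
        # Insist on a PTR record whose name resolves back to the
        # connecting IP ('forward-confirmed' reverse DNS).
        try:
            name = socket.gethostbyaddr(client_ip)[0]
            return client_ip in socket.gethostbyname_ex(name)[2]
        except socket.error:
            return False

    def helo_name_is_valid(helo_name):
        # Insist that the EHLO/HELO argument is a name that resolves.
        try:
            socket.gethostbyname(helo_name)
            return True
        except socket.error:
            return False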

Setting aside all of the other reasons why these things may not be a good idea, it is worth pointing out that the only reason these precautions work now is that not very many MTAs are using them. In much the same way that spammers once used invalid domains in the envelope sender address and now almost never do (because large MTAs started checking that), spammers are perfectly capable of adapting to use valid EHLO names and to send only from machines with valid reverse DNS, if they actually need to. Indeed, the fact that the spammers don't bother to do any of this is a strong sign that only an insignificant number of MTAs use such precautions today.

(The history of bad domains in MAIL FROMs is a great example of this, in fact. It used to be a great way to get rid of a bunch of spam, until places like AOL (which was then an important spam target) started doing it. The next thing you knew, spammers were using real domains. I wouldn't be surprised if spammers adapted to the new reality faster than legitimate mailers did.)

Or in short: spammers are lazy, not stupid (at least in the aggregate).

The corollary is that if you find an anti-spam heuristic like this that works for your email, you should not try to get other people to adopt it. The worst thing you could possibly do for your spam load is to persuade a significant number of MTAs to get more picky in what they accept.

(There is probably already an aphorism somewhere that says 'any widely adopted anti-spam measure will be actively defeated by spammers if at all possible'.)

spam/AntiSpamHeuristicLimits written at 23:57:20

How to waste lots of CPU time checking for module updates

I understand that Python does not have automatic reloading of changed modules and Python source files, and that some people would like to have this feature for their long-running programs and systems. However, there is a right way and a wrong way to do this, especially for things like web frameworks.

The right way is to check for updates (and reload any affected modules) immediately before you process the next request (specifically, when you wake up knowing that you have a request, you immediately check for pending reloads, carry them out, and then go on to process the new request). The wrong way is to check every so often, even if you haven't received any requests; the especially wrong way is to check frequently, say once a second. Things like stat() system calls are cheap, but they are not free; doing fifty to a hundred of them every second adds up over the lifetime of a long-running, otherwise generally inactive process.
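To make the distinction concrete, here is a minimal sketch of the 'check just before the request' approach; the helper names are made up, and a real framework would have to be more careful about things like packages and .pyc files:

    import os
    import sys

    _mtimes = {}

    def check_for_reloads():
        # stat() each loaded module's file and reload anything that has
        # changed since we last looked at it.
        for mod in list(sys.modules.values()):
            path = getattr(mod, '__file__', None)
            if not path:
                continue
            try:
                mtime = os.stat(path).st_mtime
            except OSError:
                continue
            if path in _mtimes and mtime > _mtimes[path]:
                reload(mod)          # importlib.reload() in Python 3
            _mtimes[path] = mtime

    def serve_forever(get_request, handle_request):
        while True:
            req = get_request()      # block here; no periodic wakeups
            check_for_reloads()      # only stat() when there is real work
            handle_request(req)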

(This is especially true in our environment, because those stat() system calls periodically require the server in question to talk to the fileserver over NFS. They might be a lot cheaper on a local filesystem, but your code shouldn't be assuming a local filesystem or, for that matter, an otherwise idle system.)

The other reason that frequent wakeups to check for this sort of stuff should be avoided these days is virtualization. If you are running inside a virtualized machine, waking up frequently forces the entire virtual machine to wake up frequently, which is very definitely not free. People making heavy use of virtualization are much happier if virtual machines do not wake up when they are otherwise idle.

(In a desktop application, frequent wakeups may stop a laptop from going to low-power sleep modes and even force undesirable disk IO, but this is not usually an issue for a web application.)

(I can't conclusively identify what Python framework this was, so I'm not going to name any names. Besides, it might have been some configuration option or optional module that really did the damage.)

Sidebar: the case for doing preemptive, periodic reloads

There are two potential justifications for periodic reloads instead of 'just before processing a request' ones. First, you don't delay handling requests by checking for and processing reloads (hopefully they will have been done beforehand), and second, you find out if there is an error before you're trying to process a real request. I'm not convinced that these are strong reasons, especially the second one; I'm not sure that Python exposes very many useful controls for handling module reload failures, although I haven't looked into this very hard (yes, I know this is a dangerous thing to write in a blog entry).

(There are at least two options for how you want to behave on module reload failures; you might retain the last good version of the module, or you might stop further processing until you can successfully import the module. The extreme version of the latter is just to exit.)
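As a sketch of the first option (my own illustration, not anything a framework actually does): reload() re-executes the module's code into the existing module object, so 'retaining the last good version' really means snapshotting the module's namespace beforehand and putting it back if the reload blows up:

    def reload_keeping_last_good(mod):
        # Snapshot the module's namespace so a failed reload can be
        # rolled back to the previous working version.
        snapshot = mod.__dict__.copy()
        try:
            return reload(mod)       # importlib.reload() in Python 3
        except Exception:
            mod.__dict__.clear()
            mod.__dict__.update(snapshot)
            return mod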

python/WrongWayUpdateChecks written at 01:53:53

