Some malware that sends interesting fake mailing list messages
Now that we're logging MIME attachment type information, we've also started to reject messages at SMTP time if they contain certain stuff that we don't like, such as a ZIP archive with a single .js file in it. One of Exim's little features is that when you reject a message at DATA time, Exim logs the message headers to its rejectlog (or at least a certain amount of them; Exim won't write too much data here). This has let me observe some interesting things.
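To make the kind of check concrete, here is a rough sketch of the logic involved. This is not our actual Exim configuration (Exim does this through ACL conditions and MIME scanning); it's just a hypothetical illustration in Python of "reject if a ZIP attachment contains only a single .js or .wsf file", using stdlib modules. All names here are my own invention.

```python
import io
import zipfile
from email import message_from_bytes

# File extensions we treat as suspect when they're the sole
# content of a ZIP attachment (.wsf shows up later in this entry).
SUSPECT_EXTENSIONS = ('.js', '.wsf')

def is_suspect_zip(data: bytes) -> bool:
    """True if data is a ZIP archive whose entire contents is
    exactly one file with a suspect extension."""
    try:
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            names = zf.namelist()
    except zipfile.BadZipFile:
        return False
    return len(names) == 1 and names[0].lower().endswith(SUSPECT_EXTENSIONS)

def message_has_suspect_zip(raw: bytes) -> bool:
    """Walk a raw RFC 5322 message's MIME parts, decoding any
    .zip attachments and checking them."""
    msg = message_from_bytes(raw)
    for part in msg.walk():
        fname = part.get_filename() or ''
        if fname.lower().endswith('.zip'):
            payload = part.get_payload(decode=True)
            if payload and is_suspect_zip(payload):
                return True
    return False
```

In a real mail system this check would run at SMTP DATA time so the message can be rejected outright instead of accepted and then discarded.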
One of the most striking things I've seen is malware (sending .wsf files in ZIP archives) that seems to be determinedly faking mailing list headers in the messages that it sends. It's actually a really distinct pattern, so let me show you a sample that very clearly illustrates it:
Received: from 8ta-146-52-116.telkomadsl.co.za (8ta-146-52-116.telkomadsl.co.za [184.108.40.206])
	by [... us ...]
Received: from root by co.za with local (Exim 4.80)
	(envelope-from <firstname.lastname@example.org>)
	id QJhwNw-OgPFpd-lZ
	for [user redacted]; Mon, 01 Aug 2016 15:41:08 +0200
To: <user redacted>
Subject: Corrected report
X-PHP-Originating-Script: 0:class.phpmailer.php
Date: Mon, 01 Aug 2016 15:41:08 +0200
From: "Michale Mcdowell" <Mcdowell.email@example.com>
Reply-to: "Michale Mcdowell" <Mcdowell.firstname.lastname@example.org>
X-Priority: 3
Sender: <user-5CB@co.za>
X-Mailer: Email Sending System
X-Complaints-To: email@example.com
List-Unsubscribe: <https://www.co.za/app/unsubscribe.php?p=[hex garble]>
List-Id: 35830
X-Postmaster-Msgtype: 91601
X-Report-Abuse: <https://www.co.za/app/report_abuse.php?mid=[different hex garble]>
[...]
There are all sorts of interesting made-up things here. The final Received: header always claims that the message was received locally via Exim from a bounce-* address, for example, and there are the various list-related X-* headers. The claimed domain name for the website, the sender email address, and so on is almost always related to the actual hostname of the sending IP, but as we see here it seems to be obtained by stripping parts off that hostname, which sometimes produces wild results.
(Not always; we have another such email from iburst-41-56-16-166.iburst.co.za that used 'www.iburst.co.za' as the website domain, instead of co.za. Maybe the difference is because iburst.co.za has A records, but co.za doesn't. The software involved here might be using that as a cue for how to look plausible. It's not that 'www.<whatever>' exists, because www.co.za doesn't.)
Naturally these URLs don't exist on the sites I've checked, and often the sites don't even respond to HTTPS. And everything I've looked at has this consistent pattern of headers and naming. I'm assuming that it's some malware-sending software that is trying to make its messages look more legitimate. Since humans almost never look at this sort of header, I assume that the malware's trying to fool automated scanning systems.
Containerization as the necessary end point of deployment automation
It started on Twitter:
@thatcks: I'll say this for containers: containerizing all our services would make it easier to spin up test instances. Building new VMs gets old.
@beamflash: Is Docker's success partly because it greatly improved on the status quo of deployment automation, and not containerisation per se?
Bearing in mind that I'm an outsider here, my view is that greatly improving the status quo of deployment automation requires something very closely akin to containerization. What you really want to have is a self-contained artifact that can be deployed somewhere, used, and then un-deployed again, with the host machine now reverted to its original state so that it's ready for the next artifact to be deployed to it. It's very important that the un-deployment step be able to reliably and completely remove all traces of your artifact's presence, because this is what's necessary to make the host reusable. If rolling an artifact on to a host can contaminate the host in any meaningful way, you wind up needing to trash and rebuild the host after you're done with the deployment. Otherwise you have potentially important divergences between a newly built host and a host that's been in use for a while, divergences that may affect how your deployment artifacts behave.
(This should be unsurprising, because it's the same advantage that package systems have.)
Current software is not really set up to behave this way. It generally assumes that it can (and will) spray bits of itself all over various parts of your filesystem hierarchy, doesn't keep exacting track of every single file it ever touches (even log files and various sorts of temporary files) so it can remove them all again, and so on. It also generally exists in a web of dependencies with other packages that may not do this either. And in general, it's not self-contained; instead it's intrinsically entangled with the state of the host system, simply because it's using various basic things from the host system (such as the C library, various shared system configuration files, and so on).
If you want roll-on, roll-off artifacts that won't leave traces behind and that aren't entangled in the current state of the host system, you must somehow create and enforce strong isolation; the deployment artifact must not be able to mess up the host, and it must not be able to depend on very many specifics of the host's state. As far as I can see, any form of deployment automation that can do this is going to wind up looking a lot like containerization, although the exact details can (and will) vary.
(See also A thought on containerization, isolation, and deployment, which sort of starts with CJ Silverio summarizing what you want.)