2019-01-25
A piece of email malware that wanted to make sure we rejected it
Recently our system for logging email attachment type information recorded an interesting attachment:
attachment application/octet-stream; MIME file ext: .ace; zip exts: .exe
The .ace extension is for an old archive file format and today is mostly used by malware, possibly because tools to look inside ACE archives are less common for reasons you can read about on the Wikipedia page (see eg here). We see a certain number of .ace attachments all of the time, and we've been rejecting them all for some time. However, this attachment is not actually an ACE archive; instead it's a ZIP archive with a single .exe inside it. Single .exes inside ZIP archives are also a pattern we see frequently and we've been rejecting them for even longer than we've been rejecting .ace attachments.
(We knew it was a ZIP archive because it had the right magic signature to be one; we look at basically everything just to see, because ZIP archives can be hiding out under all sorts of extensions. Real ACE archives don't get detected as ZIP archives, at least not any that we can analyze.)
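As a rough illustration of the sort of check involved (this is a sketch, not our actual analyzer), Python's standard zipfile module can tell you whether something really is a ZIP archive and what extensions its members have, regardless of what the attachment calls itself:

import zipfile

def zip_member_exts(path):
    # zipfile.is_zipfile() looks for a ZIP end-of-central-directory
    # signature, so it works no matter what extension the file claims.
    if not zipfile.is_zipfile(path):
        return None
    with zipfile.ZipFile(path) as zf:
        # Collect the extensions of the archive members, eg ['.exe'].
        return sorted({"." + name.rsplit(".", 1)[-1].lower()
                       for name in zf.namelist() if "." in name})

For this attachment, a check along these lines would report ['.exe'] even though the MIME filename claims it's a .ace file.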
The net result is that regardless of how we interpreted this attachment, we were going to reject it (and we did). I've got to be amused by a spammer who gives us multiple reasons to reject their work, not just a single one.
My obvious theory for what happened here is that the malware spammer got some spam campaigns and processes confused, effectively crossing the wires between an ACE-based campaign and a ZIP-based one. Maybe they run the same campaign with both archive formats to cover all the bases, or maybe they have different campaigns going on at once. Or maybe this is the fault of some spam infrastructure provider. Whatever the cause is, it amuses me.
PS: This turns out to not be the only case of this we've seen in the past year or so. Some of the old ones even had the MIME type of application/zip, so something in the sending infrastructure clearly knew they actually were ZIP archives.
Sidebar: Some details on the message, with an interesting DKIM failure
The message has the usual sort of sender and subject, and a MIME filename of 'Payment Slip.ace'. These days, fake invoices seem to be the going thing. The sending IP is a Digital Ocean server. The message had a DKIM signature but the signature failed validation for the interesting reason of 'invalid - syntax error in public key record'.
You see, the domain the spammers picked to forge is a parked domain, and it has a wildcard TXT record of 'v=spf1 a -all' (with a five minute TTL, which is polite of the domain parker). Wildcard 'almost nothing is an acceptable sending source' SPF records are not valid DKIM key records, but then this domain clearly isn't supposed to generate any email to start with. The domain parker could have been even more thorough by also providing a null MX record, but I'll give them points for at least providing the SPF record.
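To see why that produces a syntax error, you can look up the TXT record a DKIM verifier would fetch. Here is a minimal sketch using dnspython (2.x); the selector and domain are hypothetical stand-ins, since I'm not going to name the forged domain:

import dns.resolver

def dkim_key_record(selector, domain):
    # DKIM verifiers fetch the TXT record at <selector>._domainkey.<domain>.
    name = "%s._domainkey.%s" % (selector, domain)
    answer = dns.resolver.resolve(name, "TXT")
    return "".join(part.decode()
                   for rdata in answer for part in rdata.strings)

record = dkim_key_record("selector1", "parked.example.org")
# A real DKIM key record has a 'p=<base64 public key>' tag; a wildcard
# parked domain hands back its SPF record ('v=spf1 a -all') instead,
# which has no 'p=' tag, hence 'syntax error in public key record'.
print(record)

The wildcard means that any name under the domain, including the _domainkey name, gets that SPF record back.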
The malware adding a DKIM signature that could not possibly validate is an interesting touch. Perhaps this is the inevitable end result of Bayesian filtering being applied to spam and then spammers figuring out what people's Bayesian filters are really basing their decisions on.
The Linux kernel's pstore error log capturing system, and ACPI ERST
In response to my entry yesterday on enabling reboot on panic on your servers, a commentator left the succinct suggestion of 'setup pstore'. I had never heard of pstore before, so this sent me searching and what I found is actually quite interesting and surprising, with direct relevance to quite a few of our servers.
Pstore itself is a kernel feature that dates to 2011. It provides a generic interface to storage that persists across reboots and gets used to save kernel messages during a crash, as covered in LWN's Persistent storage for a kernel's "dying breath" and the kernel documentation. Your kernel very likely has pstore built in and your Linux probably mounts the pstore filesystem at /sys/fs/pstore.
(The Ubuntu 16.04 and 18.04 kernels, the CentOS 7 kernel, and the Fedora kernel all have it built in. If in doubt, check your kernel's configuration, which is often found in /boot/config-*; you're looking for CONFIG_PSTORE and associated options.)
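If you want to check programmatically, here's a trivial sketch in Python (the obvious grep works just as well), assuming the usual /boot/config-<release> naming:

import os

config = "/boot/config-" + os.uname().release
with open(config) as f:
    for line in f:
        if line.startswith("CONFIG_PSTORE"):
            print(line.rstrip())

On a kernel with pstore built in you should see CONFIG_PSTORE=y plus whatever backend options are enabled.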
By itself, pstore does nothing for you because it needs a chunk of storage that persists across reboots, and that's up to your system to provide in some way. One such source of this storage is in an optional part of ACPI called the Error Record Serialization Table (ERST). Not all machines have an ERST (it's apparently most common in servers), but if you do have one, pstore will probably automatically use it. If you have ERST at all, it will normally show up in the kernel's boot time messages about ACPI:
ACPI: ERST 0x00000000BF7D6000 000230 (v01 DELL PE_SC3 00000000 DELL 00040000)
If pstore is using ERST, you will get some additional kernel messages:
ERST: Error Record Serialization Table (ERST) support is initialized.
pstore: using zlib compression
pstore: Registered erst as persistent store backend
Some of our servers have ACPI ERST and some of them have crashed, so out of idle curiosity I went and looked at /sys/fs/pstore on all of them. This led to a big surprise, which is that there may be nothing in your Linux distribution that checks /sys/fs/pstore to see if there are captured kernel crash logs. Pstore is persistent storage, and so it does what it says on the can; if you don't move things out of /sys/fs/pstore, they stay there, possibly for a very long time (one of our servers turned out to have pstore ERST captures from a year ago). This is especially important because things like ERST only have so much space, so lingering old crash logs may keep you from saving new ones, ones that you may discover you very much would like records of.
(The year-old pstore ERST captures are especially ironic because the machine's current incarnation was reinstalled this September, so they are from its previous life as something else entirely, making them completely useless to us.)
Another pstore backend that you may have on some machines is one that uses UEFI variables. Unfortunately, you need to have booted your system using UEFI in order to have access to UEFI services, including UEFI variables (as I found out the hard way once), so even on a UEFI-capable system you may not be able to use this backend because you're still using MBR booting. It's possible that using UEFI variables for pstore is disabled by some Linux distributions, since actually using UEFI variables has caused UEFI BIOS problems in the past.
(This makes it somewhat more of a pity that I failed to migrate to UEFI booting, since I would actually potentially get something out of it on my workstations. Also, although many of our servers are probably UEFI capable, they all use MBR booting today.)
Given that nothing in our Ubuntu 18.04 server installs seems to notice /sys/fs/pstore and we have some machines with things in it, we're probably going to put together some shell scripting of our own to at least email us if something shows up.
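As a concrete starting point, here's a rough sketch of the sort of check I have in mind; the real thing will probably be a small shell script run from cron, and the archive directory and mail alias here are made-up placeholders:

import os
import shutil
import smtplib
from email.message import EmailMessage

PSTORE = "/sys/fs/pstore"
ARCHIVE = "/var/log/pstore-archive"   # hypothetical archive location
NOTIFY = "sysadmins@example.org"      # hypothetical mail alias

captures = sorted(os.listdir(PSTORE))
if captures:
    os.makedirs(ARCHIVE, exist_ok=True)
    for name in captures:
        # Copy each capture aside and then remove it from pstore, so the
        # limited ERST space is free to record the next crash.
        shutil.copy2(os.path.join(PSTORE, name), os.path.join(ARCHIVE, name))
        os.unlink(os.path.join(PSTORE, name))
    msg = EmailMessage()
    msg["Subject"] = "pstore captures on " + os.uname().nodename
    msg["From"] = NOTIFY
    msg["To"] = NOTIFY
    msg.set_content("Found and archived %d pstore file(s):\n%s"
                    % (len(captures), "\n".join(captures)))
    with smtplib.SMTP("localhost") as s:
        s.send_message(msg)

(Removing the files from /sys/fs/pstore is what actually frees up the limited ERST space; just copying them somewhere else isn't enough.)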
(Additional references: Matthew Garrett's A use for EFI, CoreOS's Collecting crash logs, which mentions the need to clear out /sys/fs/pstore, and abrt's pstore oops wiki page, which includes a list of pstore backends.)
PS: The awkward, brute force way to get pstore space is with the ramoops backend, which requires fencing off some section of your RAM from your kernel (it should be RAM that your BIOS won't clear on reboot for whatever reason). This is beyond my enthusiasm level on my machines, despite some recent problems, and I have the impression that ramoops is usually used on embedded ARM hardware where you have little or no other options.
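(Purely as an illustration with made-up values, ramoops is typically configured through module parameters on the kernel command line, something like 'ramoops.mem_address=0x8000000 ramoops.mem_size=0x100000 ramoops.record_size=0x4000', with the region itself kept away from normal kernel use, for example via a memmap= reservation or a firmware/device tree description. See the kernel's ramoops documentation for the real details.)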