Mozilla's Looking Glass 'retrospective' is unfortunately inadequate
You may remember Mozilla's betrayal of Firefox users and its nominal principles when it force-pushed a misleadingly named and described promotional addon through its SHIELD studies system. Mozilla ate a certain amount of crow at the time and promised a postmortem about the whole thing. They have finally delivered with Retrospective: Looking Glass (via). Unfortunately there are problems (still).
Things go bad right at the start of the retrospective:
In December, we launched a tv show tie-in with Mr. Robot, Looking Glass, that alarmed some people because we didn’t think hard enough about the implications of shipping an add on that had the potential to be both confusing and upsetting. [...]
Either Mozilla did not take their root cause analysis far enough to understand the core problem or they're still not willing to admit it: when Mozilla force-pushed Looking Glass, they betrayed the trust of Firefox users. The problem is not the addon that they shipped, the problem is that they shipped the addon.
Mozilla's retrospective does admit that they misused the SHIELD system and they have announced new principles to stop them from doing this in the future. But as long as their root problem is not addressed, this is simply blocking one particular mechanism (out of many possible ones) instead of putting an end to the philosophy. I find it not particularly surprising but still depressing that this retrospective does not come very close to addressing the questions I would like Mozilla to be asked, starting with 'do you think you have the right to do this sort of thing without informed consent from users'.
(Perhaps Mozilla thinks the answer to that is obvious and is 'of course we don't'. Well, given Looking Glass, that answer is no longer obvious to people outside Mozilla (at least), so Mozilla should be reaffirming it in public and re-committing themselves to it. As it stands, their silence here on this leaves at least doubts.)
Since Mozilla apparently doesn't understand that people gave them trust and they betrayed that trust, I don't think they can necessarily be trusted in the future. Whether you re-enable SHIELD studies in light of Mozilla's new principles for their use is up to you, but if you do you should do it because you explicitly want to do Mozilla a favour and you're willing to take the risk that Mozilla will 'abuse' the mechanism in the future.
(I put 'abuse' in quotes because Mozilla probably will claim and perhaps honestly think that whatever they do isn't an abuse of their stated principles. That's kind of the problem with Looking Glass; Mozilla demonstrated that they were blind to what they were doing (wilfully or otherwise).)
As far as future questionable Mozilla decisions go, well, I'm not planning on giving Mozilla the benefit of the doubt any more. If they put forward a potentially dubious feature and ask me to trust them that it's a good thing and won't be abused, my new answer is 'no'. As a result, I will be turning off this Pocket stuff and other future things as forcefully as I can manage.
How the IPs sending us malware break down (January 2018 edition)
I recently wrote about how a misbehaving SMTP sender fooled me about some malware volume because it kept retrying the same message over and over despite getting permanent SMTP rejections. This made me interested in getting some numbers on how our malware rejections break down in terms of how many repeats, how many sources, and so on. All of the following figures are for roughly the past four and a half weeks.
The starting figure is that over this time we've done 2,729 rejections for malware and viruses. About 175 of these have the same sending IP, sender address, destination address, and detected malware as other entries, making it very likely that they're resends. The most active resent message was rejected 97 times (that was the one with an ISO); after that we had one rejected 10 times (with 'CXmail/OleDl-AG'), one 8 times (with 'CXmail/OleDl-AL'), four that were rejected 3 times, and a whole fifty-five that were rejected twice.
The resends came from 15 different IPs, including two other mail servers at the university; since these mail servers work properly, the 'resent' messages were actually more or less duplicated messages. It's possible that they originally came from different source IPs. Overall it seems that bad SMTP servers that resend in the face of permanent SMTP rejections are pretty uncommon.
(Since I'm blindly looking at messages across a very wide time range, it's possible that a number of the other 'resends' are really duplicate messages created by long-lived malware with insufficient variety in its sender addresses. Over four weeks, it's certainly possible that such malware would cycle back around to targeting some of our addresses a second time.)
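The resend detection described above can be sketched in a few lines of Python. This is a minimal sketch, not the script actually used; it assumes the rejection log has already been parsed into dicts with `ip`, `sender`, `dest`, and `malware` keys (names I've made up for illustration).

```python
from collections import Counter

def resend_counts(rejections):
    """Count duplicate rejections, grouping on the fields that
    identify a likely resend: sending IP, envelope sender,
    destination address, and detected malware name."""
    counts = Counter(
        (r["ip"], r["sender"], r["dest"], r["malware"])
        for r in rejections
    )
    # A key seen N times represents N - 1 likely resends beyond
    # the original rejection.
    resends = {k: n for k, n in counts.items() if n > 1}
    extra = sum(n - 1 for n in resends.values())
    return resends, extra
```

The interesting output is both the per-message counts (to spot a single message rejected 97 times) and the total number of extra copies.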
These 2,729 rejections came from only 124 different IP addresses (including a number of other mail systems at the university), with much of the volume coming from some very active sources:
  649  18.104.22.168
  443  22.214.171.124
  227  126.96.36.199
  210  188.8.131.52
  196  184.108.40.206
   97  220.127.116.11
  [...]
   41  18.104.22.168
22.214.171.124/24 is SBL387172 and 126.96.36.199/24 is SBL387171, both listed as 'suspected snowshoe spam' ranges. A few other less active IPs are in the CSS, SBL388761, and SBL383008. Somewhat to my surprise, only 24 of the IPs are currently in the XBL, although many of the earlier senders may have aged out.
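Checking which sending IPs are in the XBL (or any Spamhaus zone) is a standard DNSBL lookup: reverse the IPv4 octets, append the zone name, and see whether the name resolves. A minimal sketch, assuming simple IPv4 addresses and the public `zen.spamhaus.org` zone:

```python
import socket

def dnsbl_name(ip, zone="zen.spamhaus.org"):
    """Build the DNSBL query name for an IPv4 address: the octets
    are reversed and the zone appended, so 192.0.2.1 becomes
    1.2.0.192.zen.spamhaus.org."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip, zone="zen.spamhaus.org"):
    """True if the IP resolves in the DNSBL zone (listed),
    False on NXDOMAIN (not listed)."""
    try:
        socket.gethostbyname(dnsbl_name(ip, zone))
        return True
    except socket.gaierror:
        return False
```

(As noted, a listing check done weeks after the fact will miss senders that have aged out; the lookup only tells you about the IP's status right now.)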
(The true volume of malware from these SBL listed IPs is likely to be clearly higher than this, since some of their email will have been rejected at the RCPT TO phase.)
Only eight of the IPs sent us more than one type of malware, and a number of them are other mail systems that are forwarding email to some of our users and thus are aggregating a number of different real sources together. The 188.8.131.52/24 block sent us 'Mal/Phish-A' and 'Troj/Phish-BPZ'; the 184.108.40.206/24 block sent us 'Mal/Phish-A'.
(Since these were detected as malware, they were almost certainly HTML files as attachments, which is the pattern we've seen.)
However, many of the active sources tried to send email to quite a lot of different addresses here, as shown by a top five count:
  140  220.127.116.11
   99  18.104.22.168
   74  22.214.171.124
   64  126.96.36.199
   61  188.8.131.52
This is basically the pattern I would expect from spam sending operations, which is what these are. Aggregated together, 184.108.40.206/24 tried to send to 203 different addresses and 220.127.116.11/24 tried to send to 110. A lot of the destination addresses were targeted repeatedly in both cases.
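Aggregating per-IP figures up to /24 blocks, as done here for the destination-address counts, can be sketched with the standard library's `ipaddress` module. Again this assumes already-parsed rejection records with made-up `ip` and `dest` keys:

```python
import ipaddress
from collections import defaultdict

def dests_per_block(rejections, prefix=24):
    """Count the distinct destination addresses tried by each
    /24 network, so that related sending IPs in the same block
    are aggregated together."""
    blocks = defaultdict(set)
    for r in rejections:
        # strict=False lets us pass a host address and get its
        # containing /24 network.
        net = ipaddress.ip_network(f"{r['ip']}/{prefix}", strict=False)
        blocks[net].add(r["dest"])
    return {net: len(dests) for net, dests in blocks.items()}
```

Using sets here means repeated deliveries to the same address count once, which is what you want when asking how many different local addresses a block targeted.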
With one exception, the most popular sender addresses were random GMail addresses like 'email@example.com'. The exception is 'firstname.lastname@example.org', which was used for 27 messages from 21 different IPs, all trying to deliver a 'CXmail/OleDl-V' malware payload. Overall there were 1,610 different sender addresses, but 1,559 of them were GMail addresses.
(I was going to say that none of these would pass GMail's DMARC policy, but apparently Google blinked on their plans for a strict one. Right now GMail still publishes a 'p=none' DMARC policy that doesn't ask people to reject email that fails to pass DKIM and/or SPF tests.)