Wandering Thoughts archives

2016-09-18

A little shift in malware packaging that I got to watch

When we started rejecting email with certain sorts of malware in it, almost all of the malware (really ransomware) had a pretty consistent signature; it came as a ZIP archive (rarely a RAR archive) with a single file of a bad type in it. We could easily write a narrowly tailored rule that rejected an archive containing a single .js, .jse, .wsf, or similar file. Even when we didn't have such a rule ourselves, it seems that our commercial anti-spam system had one of its own and so rejected the message.
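
(To illustrate, here's a minimal sketch in Python of the kind of narrowly tailored check I mean; the function name and the exact extension list are made up for the example, not our actual filtering rule.)

import zipfile

# Reject a ZIP archive whose sole member has a scripting extension.
BAD_EXTENSIONS = ('.js', '.jse', '.wsf')

def is_suspicious_zip(path):
    with zipfile.ZipFile(path) as zf:
        names = zf.namelist()
    return len(names) == 1 and names[0].lower().endswith(BAD_EXTENSIONS)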

Of course, nothing stands still in the malware world. A bit later we saw some ransomware (or at least I assume it was ransomware) send messages that had two .js files in them. I extended our rejection rules to reject these too and didn't think much of it; at the time it just seemed like one of the random things that spam and malware and ransomware are always doing.

Fast forward to this past Thursday, when we got hit by a small blizzard of ransomware that still came as a single bad file type in a ZIP, but this time with an extra file thrown in. What made the extra file stand out is that the ransomware wasn't giving it any sort of file extension. Based on some temporary additional logging (and a sample or two that I caught), the extra file's names are basic, made up, and actually pretty obviously suspicious; I saw one that was a single letter and another that was entirely some number of spaces.

I assume that this evolution is happening because malware authors have noticed that anti-spam software has latched on to the rather distinctive 'single bad file in ZIP' pattern they initially had. I'm not sure why they used such odd (and distinctive, and suspicious) additional filenames, but perhaps the ransomware authors wanted to make it as unlikely as possible that people would get distracted from clicking on the all-important .js or .jse or whatever file.

(I now expect things here to evolve again, although I have no idea where to. Files with more meaningful names? More files? Who knows.)

spam/MalwarePackagingShift written at 01:47:09

2016-09-17

What encoding the syslog module uses in Python 3

Suppose that you're writing a Python 3 program that is going to syslog() some information through the syslog module. Given that one of the cardinal rules of Python 3 is that you should explicitly consider encoding issues any time you send data to the outside world, and that syslog() definitely does this, the immediate questions to ask are how syslog handles encoding the Unicode string you're giving it and whether it can ever raise a Unicode encoding error.

(Anything that can raise a Unicode encoding error on output needs to be carefully guarded so that it doesn't blow up your program some day in an obscure corner case. It would suck to have your entire program abort with an uncaught exception as it tried to syslog() some little monitoring message that wound up with peculiar content this time.)

The answer turns out to be that the syslog module is effectively hard-coded to use UTF-8. In particular, it does not use whatever Python thinks is your system's default encoding (or default filesystem encoding). I believe this means that syslog.syslog() can never raise a Unicode encoding error.
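
(In practice this means you can hand syslog.syslog() any Unicode string without doing your own encoding. A minimal sketch, with a made-up program name and message:)

import syslog

# Per the hard-coded behavior described above, this non-ASCII message
# gets encoded as UTF-8 on its way to syslog(3).
syslog.openlog("monitor")
syslog.syslog(syslog.LOG_INFO, "disk temperature: 30\u00b0C")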

This may be documented somewhere by implication, but if so I couldn't find it in the Python 3(.5.2) documentation.

(As an occasional system programmer, I worked this out by reading the CPython source code.)

This issue is in the same general area of concern as PEP 383, but isn't really dealing with the same issue since the syslog module only outputs data to the outside world. Note that as far as I can see, syslog.syslog() (and in fact any code in CPython that's like it) doesn't use the special "surrogateescape" encoding mechanism introduced in PEP 383. If you take stuff in from the outside world that winds up escaped this way and try to syslog it out, you will not get the exact initial bytes syslog'd; instead you get a conventional UTF-8 encoding of it.
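
(Here's a small, syslog-independent illustration of the PEP 383 mechanism with some made-up bytes: undecodable bytes only survive a round trip if you ask for surrogateescape on both the decode and the encode.)

raw = b"file-\xff-name"      # arbitrary bytes from the outside world
s = raw.decode("utf-8", "surrogateescape")
# Round-tripping with surrogateescape recovers the original bytes;
# a plain strict UTF-8 encode does not give them back to you.
assert s.encode("utf-8", "surrogateescape") == raw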

Sidebar: How this works in the CPython source code

The syslog module is written in C. It turns the message into a C string by calling _PyUnicode_AsString, which is really PyUnicode_AsUTF8. This uses the encoding you expect it to given its name.

This implies that anything in the CPython source code that's turning a Python string into a C string through this function is actually using UTF-8. There seem to be a decent number of them, although I haven't looked in detail. This doesn't particularly surprise me, as it seems that CPython has made an executive decision here that UTF-8 will be the encoding for a number of cases where it needs to pick some encoding (ideally one that will never produce errors).

python/Python3SyslogEncoding written at 00:17:35

2016-09-16

A shell thing: globbing operators versus expansion operators

If you've been using a Unix shell for long, you may be familiar with the '[...]' wildcard, which can be used to match a character range (or a bunch of characters):

ls -lt logfile.[1-5].gz

If you've used Bash or a number of other shells for a while, you may also be familiar with '{..,...}':

touch afile.{one,two,three}

There is an inconvenient chasm here between these two very similar things. Wait, a chasm? Sure. Imagine that you want to create afile.1 through afile.5. Can you write this in a nice compact way as the following?

touch afile.[1-5]

The answer is no, and this is the chasm in action. You can use '[1-5]' to match logfile.1.gz through logfile.5.gz, but you can't use it to generate 1 through 5 for touch. Similarly, you can't use {...} as part of a wildcard match, eg:

ls -lt afile.{c,h,go,cpp,py,rb,el}

What is happening here is that modern shells have two sorts of operators, wildcard globbing operators and expansion operators. Expansion operators are simply text substitution and expansion, so 'x.{a,b,c}' expands out to 'x.a x.b x.c' regardless of what files currently exist. Wildcard globbing operators match filenames and only expand out to filenames that match; if nothing at all matches, it's either an error or the operator produces itself as literal text.

(In other words, if you do 'touch nosuchfile.*', you get a file called 'nosuchfile.*'. The operator producing itself is the standard behavior but some shells have an option to make a failed glob into an error.)
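
To make the distinction concrete, here's a small Bash session in a directory that contains only afile.1 (using Bash's default globbing behavior):

$ echo afile.[1-5]
afile.1
$ echo afile.{1,2,3,4,5}
afile.1 afile.2 afile.3 afile.4 afile.5
$ echo nosuchfile.*
nosuchfile.*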

The chasm between the two fundamentally exists because the shell can't read your mind about what you want. To return to my earlier example, if you write:

touch afile.[1-5]

and you already have a file called afile.1, do you actually want to update its timestamp and do nothing else, or you want to create afile.2 through afile.5 as well? The shell can't tell, so it must pick one or the other. It is this decision that creates the distinction between wildcard globbing operators and expansion operators.

(Globbing came first, by the way. Expansion operators got added later, although in the end the Bell Labs people decided that having a '{...}' feature was sufficiently useful that an equivalent was included in Tom Duff's rc.)

(This entry was sparked by Advancing in the Bash Shell, via John Arundel, which got me thinking about how '{..}' is kind of weird in the shell.)

unix/ShellGlobVsExpansion written at 01:05:15

2016-09-15

What I did to set up IPv6 on my wireless network

Last month I slapped together a home wireless network in a rush. Or more exactly I put together an IPv4 wireless network; as I mentioned at the end, one of the things I had left to do was extend it to IPv6 as well. Today, for no good reason, I decided that I was going to fix that and get at least basic IPv6 up and running. Even though I don't really know what I'm doing here, I succeeded, at least at the basics.

Bearing in mind that I didn't know what I was doing, I started with IPv6 DHCP and DNS service. I have an IPv6 /64 assigned to me by my ISP, so I decided to carve off a section and allocate IPs out of it. This is where the first 'I don't know what I'm doing' annoyance came in. I set up DHCP as:

# static IPv6 assignments, then:
# Need a subnet6 declaration to make
# things happy:
subnet6 2001:1928:1:7:f000::/68 {
    # empty
}

The ISC DHCP server promptly complained:

No subnet6 declaration for enp7s0 (fe80::[...]).

Well. The reason for this was pretty straightforward; I had not attached a public IPv6 address to my Ethernet interface, leaving it with only a link-local IPv6 address. As far as I know, there's no way to tell ISC DHCP to just do DHCP service on a given device (and associate it with a specific client IP range so that it will give out assigned IPs in that range to known clients); you have to attach a specific IP in the client IP range. So I had to glue an otherwise public IPv6 address on to enp7s0.
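
(The actual gluing is a one-liner; this is roughly what I did, although the specific host address here is made up out of my /68.)

ip -6 addr add 2001:1928:1:7:f000::1/68 dev enp7s0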

(I set the DHCP server to tell clients that their router and their DNS server was my machine's fe80:: link-local address.)

I fired up my test device and got exactly nowhere; IPv6 DHCP was running but nothing was making any requests. People who know IPv6 are probably shaking their heads sadly, because if I'd known more I'd have known that DHCP by itself isn't sufficient. In IPv6, you need a router advertisement daemon to kick off the whole process. I followed section 3 of this guide to set up a radvd configuration that delegated everything to DHCP:

interface enp7s0
{
   AdvSendAdvert on;
   AdvOtherConfigFlag on;
   AdvManagedFlag on;
   prefix 2001:1928:1:7:f000::/68
   {
      AdvOnLink on;
      AdvAutonomous on;
   };
};

(I'm including the full configuration so you can laugh at me, because this is cargo cult configuration at its finest. I have minimal understanding of what I'm doing and I'm almost entirely copying things from elsewhere.)

This worked! My test device now got an IPv6 address and I could see traffic flowing. Or rather, I could see traffic attempting to flow but not actually going anywhere.

When I'd started radvd, it had complained:

IPv6 forwarding setting is: 0, should be 1 or 2
IPv6 forwarding seems to be disabled, but continuing anyway

When I saw this I nodded sagely and went off to /proc/sys/net/ipv6/conf to turn IP forwarding on for the two interfaces involved here (the local network and my DSL PPPoE link), because of course I needed to do this in IPv6 just as in IPv4. Surprise! Linux's IPv6 doesn't quite work like IPv4 does here. As I found in this helpful page, you apparently need to set IPv6 forwarding globally, via net.ipv6.conf.all.forwarding. When I did this, everything suddenly worked.
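
(For the record, this is the sort of thing I mean; as far as I can tell, either form works.)

sysctl -w net.ipv6.conf.all.forwarding=1
# or, equivalently:
echo 1 > /proc/sys/net/ipv6/conf/all/forwarding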

(I don't know enough right now to understand the description of this in the kernel's ip-sysctl.txt.)

On the whole I was reasonably pleasantly surprised by how relatively easy this was to set up. Bearing in mind that I have no real idea what I'm doing, all of my problems were from not knowing what to configure (and how), not from software issues. And the software even told me some of what I was doing wrong.

I haven't currently set up any sort of IPv6 iptables filtering. After recent fun I think I'm going to avoid touching iptables for a while and just rely on IPv6 address obscurity.

(This would work a bit better if I assigned known clients random IPv6 addresses out of the /68 I set up or something, rather than statically giving them a single fixed IP. But I need to set up IPv6 iptables rules for my main machine anyways, which is probably always going to have a static IPv6 address.)

PS: radvd also complained 'enp7s0 prefix length should be: 64', but I'm ignoring that for now because things seem to work. I'm honestly not sure how prefix lengths are supposed to work when you have a big prefix from your ISP and you're carving bits of it up internally. If it doesn't blow up in my face, I'm letting it be.

linux/QuickWirelessIPv6Setup written at 00:13:08

2016-09-14

When iptables SNAT and routing happens, and how this is annoying

Per this famous iptables tutorial (via), and also this more recent documentation, locally generated IP packets go through multiple processing steps, both in iptables and in other things the kernel does:

  1. packets are given an initial routing, which assigns the source IP among other effects
  2. iptables OUTPUT chain for the raw, mangle and then nat tables
  3. packets are re-routed in case iptables changed something here, although I believe their source IP will never be changed
  4. iptables OUTPUT chain for the (default) filter table
  5. iptables POSTROUTING chain for the mangle and then nat tables
  6. packet is transmitted, at least in a logical sense (I think IPSec magic may happen here)

Actually, my description is not quite accurate. Iptables has two sorts of NAT: SNAT (for the source IP address) and DNAT (for the destination address). The OUTPUT chain's nat table can only do DNAT. SNAT can only be done in POSTROUTING, which happens, well, after routing (and which applies to all packets leaving the machine, not just locally generated ones).
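
(To make that concrete, here's a hedged sketch with made-up addresses and interfaces: DNAT is accepted in the nat table's OUTPUT chain, while SNAT has to wait for POSTROUTING.)

# rewriting the destination of locally generated traffic: allowed in OUTPUT
iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination 192.0.2.10
# rewriting the source address: only after routing, in POSTROUTING
iptables -t nat -A POSTROUTING -o eth1 -j SNAT --to-source 192.0.2.1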

Under normal circumstances this is perfectly fine, because under normal circumstances routing is only affected by the destination address and that's changed by DNAT in the OUTPUT chain, before the second routing pass. However, if you are doing policy based routing, you actually do want to make routing decisions based on the source IP, and by the time you can change it, it's too late. You must do a two-stage change to get the same result, assuming it works.

(I wrote about this long ago, but at the time I hadn't read anything on the processing order and so on. Now I have a motivation, so I'm starting to dig. It's interesting to see my old guess that packets were being routed twice is in fact correct, although it's humbling to know that if I'd just read the tutorial I could have known that back then. My failure to read documentation if I'm bored or irritated is not a new thing.)
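
(For what it's worth, one common way of doing the two-stage change, although not necessarily exactly what I did long ago, is to mark the traffic early, route on the mark, and only fix up the source address at the very end. All of the addresses, marks, and interface names here are invented.)

# stage one: mark the traffic in OUTPUT and give the mark its own
# routing table, so the second routing pass sends it out eth1
iptables -t mangle -A OUTPUT -p tcp --dport 80 -j MARK --set-mark 1
ip rule add fwmark 1 table 100
ip route add default via 192.0.2.254 dev eth1 table 100
# stage two: rewrite the source address at the last moment
iptables -t nat -A POSTROUTING -o eth1 -j SNAT --to-source 192.0.2.1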

I can't confidently assert that this limitation on where SNAT can be used is unnecessary, but it certainly seems that way to me. If SNAT or some other method of changing or forcing the source IP could be used in the OUTPUT chain, life would be simpler and more powerful for policy based routing decisions. You'd force the source IP to whatever you needed and then the second routing pass would do all of the work, with much less possibility of packets going out an interface with the wrong source address attached to them.

(Having to use SNAT here is already vaguely absurd, since we're firing up an entire elaborate netfilter machine for state tracking and address translation when we actually don't need it at all. But I suspect that no one has written an iptables/netfilter module that just changes the source IP without NAT'ing things, and I have to admit that uses for it are a bit obscure. I'm a special case here.)

linux/IptablesWhenSNATAnnoyance written at 01:15:53

2016-09-13

I don't understand Linux iptables NAT as well as I should

I'm currently having a problem with my DSL link where after a restart of the link (such as a power outage), I now can't reach any number of networks over it; unfortunately, quite a lot of these networks are major hosting providers like AWS, Cloudflare, OVH, Google, and so on. Fortunately I can reach the other side of my IPSec tunnel and the other side of my IPSec tunnel can reach all of these networks. For the most part all I want to do that involves these networks is visit web sites hosted on them, so after manually adding routes to force traffic over my IPSec tunnel got tiring I thought that I'd use iptables to just redirect all outgoing HTTP and HTTPS traffic over my IPSec tunnel, as I've done before.

So I set up all of the necessary iptables rules (or what I thought were the necessary rules), I could see my HTTP requests flowing out the right interface with the right address, and nothing actually worked. Whoops. Never mind my browsers, I couldn't even get a simple TCP-level connection to, well, connect. Tcpdump said that the return traffic was coming back but nothing was doing anything with it. At first I thought this was an MTU issue, because unlike the last time I did this, I'm forcing traffic that was originally going out an interface with a higher MTU into one with a smaller one. Some magic iptables invocations to fix this later, I realized that MTU can't possibly be the problem if a plain TCP connection fails to complete.

I will cut to the chase: I've failed to figure out what the problem is with my iptables rules (and my setup). I tried a number of things which I thought might be the problem (turning on forwarding for the IPSec tunnel, for instance, in case the kernel was dropping packets for that reason, and fiddling with the rp_filter setting), and they all failed. I tried following directions on turning on packet tracing for iptables debugging and got nothing. I looked in /proc/net/nf_conntrack, which was kind of educational but offered no enlightenment; the best I could see was that my connections never transitioned from SYN_RECV to ESTABLISHED.
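
(For reference, the sort of thing I was trying, with port 80 as a stand-in: the TRACE target in the raw table is supposed to log every rule a matching packet traverses to the kernel log, and /proc/net/nf_conntrack can be narrowed down with grep.)

# trace outgoing HTTP through the iptables chains; output should show
# up in the kernel log with a 'TRACE:' prefix
iptables -t raw -A OUTPUT -p tcp --dport 80 -j TRACE
# look at the connection tracking state for the same traffic
grep 'dport=80' /proc/net/nf_conntrack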

(Other NAT activity was and is fine. Well. Relatively fine. It gets NAT'd properly when it can reach things at all.)

All of this is frustrating, but it shows up a larger problem: I don't really know my way around NAT in iptables, and to some extent I don't know my way around iptables at all. Iptables is quite complicated, especially once you add in NAT magic, and while I once dug into it to some extent (cf), I've let that knowledge fade. At one point this was reasonably okay because I was only doing relatively simple things with iptables. Those days are sort of over now and my relative lack of understanding is getting in the way. I need to figure out how to get back up to speed here, even if I know I'm going to yearn for the relative simplicity of OpenBSD's PF.

(Unfortunately it's difficult to feel motivated to dig into NAT right now when the whole DSL routing situation makes me want to set things on fire.)

PS: My views of Linux iptables versus OpenBSD PF are complicated. I will summarize by saying that I have wanted to set both of them on fire at different times.

linux/NATLackOfUnderstanding written at 00:56:54

2016-09-12

A bunch of my sysadmin work seems to be like gardening

Especially, it seems like weeding, or at least how I've read weeding described (I'm not a gardener myself so I have limited experience here). By that I mean that it's plodding and often boring, and involves going through our systems to trim back or pull out entirely various things that were once necessary (well, probably) but that have been neglected and are now at least overgrown and perhaps outright bad. It's painstaking work because I have to make sure that what I'm pulling out isn't going to yank up anything important with it (or suddenly turn out to not be a weed after all, but something important we still need). And there's always more of it in long-lived systems. They accumulate clutter and then the clutter gets overgrown as people's memories about why it's there and what makes it necessary and so on fade away.

(The other part of this is that once you have a workaround for a problem, you usually don't carefully test each new system and OS version and so on to see if the problem is still there, and keep in your documentation a current matrix of what needs it and what doesn't. Once a workaround is in there you often stop thinking about it, which makes it easy for a workaround to become unnecessary.)

Thinking of this sort of work as gardening and weeding makes it a little bit easier for me to keep my enthusiasm up for carefully grinding through a set of tests to verify that probably something can be pulled out. It's just like pulling the weed of your choice; you do it and do it and keep doing it and there's another one hiding in the corner and so on. You have to accept that it's a process and just keep going, bit by bit. And in system administration, some day we can hope to turn around and have this particular weed be all gone.

Sidebar: the weed I'm pulling that inspired this thought

We have a long-standing, carefully enforced limitation in our local system for propagating passwords that /etc/group lines can be no more than 512 characters long. Since we have several groups with memberships that normally exceed this, we have some annoying workarounds for this limit. If a 512-byte limit is no longer necessary, we'd like to get rid of it to make our lives simpler.

But, well, this is a lot of careful work to check that nothing is going to break. It's not just programs that directly use /etc/group through the libc routines, either; we have some awk code (somewhere) that processes /etc/group to generate data files for Apache's basic authentication. That probably won't break, and Apache can probably deal with big groups, but I'd better check it all, hadn't I. And go looking for other scripts that process /etc/group to do something, too, of course.
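
(At least finding out which group lines are actually over the limit is a one-liner; 512 here is the limit in question.)

# print the name and length of any /etc/group line over the old limit
awk -F: 'length($0) > 512 { print $1, length($0) }' /etc/group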

sysadmin/SysadminGardening written at 00:17:40

2016-09-11

We're probably going to see a major Certificate Authority de-trusted

I mentioned WoSign the other day. To be blunt, at least from my perspective things do not look good for WoSign. Unlike some CAs they do not seem to have been compromised or to be acting with actual malice; however, they are behaving extremely sloppily, do not seem to care much about security, and certainly appear to be lying about multiple issues. At this point it seems less like a question of if they should be de-trusted by browsers and more a question of when.

Under normal circumstances this might not be a big concern (at least for most people); to the limited extent that I can find information on this on the Internet, WoSign doesn't seem to be a major CA with lots of TLS certificates issued, especially outside of China. But the problem isn't confined to WoSign, because WoSign owns StartCom, also known to people as StartSSL. Before Let's Encrypt launched, StartSSL was most people's best source of free certificates. They issued a lot of them (and then annoyed a bunch of people by charging for certificate revocations, even in the face of a major incident). Especially given that StartCom has already cross-signed one WoSign CA certificate, there is very little point in de-trusting WoSign without also de-trusting StartCom. And de-trusting StartCom is very likely to have a real impact on a lot of websites.

(Even WoSign themselves don't dispute that they've bought StartCom; they just claim that the transaction hasn't closed yet. And this claim is not in accordance with the facts that Mozilla has obtained.)

One of the problems here is that it is hard to partially revoke trust in a CA, especially if they've already demonstrated they're willing to do things like backdate newly issued certificates. If you say 'trust no certificates issued after <X>', then such a CA will just backdate their certificates; they have very little reason not to and every reason to do so, because for a CA not being able to issue new certificates is a death sentence. CT logs only help somewhat, because they're ultimately an after-the-fact tool that lets you know bad certificates have been issued but doesn't stop the issuing. If you believe that a CA will continue to issue bad certificates, requiring them to be publicly visible is not really a solution even though it helps.

Unfortunately, de-trusting WoSign and StartCom is not something that one browser can do alone in practice. As I've written about before, browsers are engaged in a giant game of CA chicken with each other as far as major CAs are concerned. Users want browsers that work and refusing to trust websites that you want to visit doesn't count as working. If Mozilla de-trusts WoSign and StartCom without support from Chrome and probably Apple and Microsoft as well, the most likely effect is that Firefox loses more users. Is WoSign's conduct sufficiently egregious to also get at least Chrome to drop them? We don't know, and may not for a while. Since there is no clear large danger, things may move slowly here. To put it one way, I expect all the browsers to give WoSign a lot more time to make more excuses.

(I myself am not courageous enough to de-trust StartCom in my Firefox; I expect that that would just be too inconvenient due to StartSSL certificates that I encounter without realizing it. So I can't blame anyone.)

PS: the DigiNotar case demonstrates that browsers can move fast on de-trusting CAs if things are sufficiently bad. But barring a major new issue, things aren't that bad with WoSign, or at least we lack a sufficiently large smoking gun that WoSign can't wave away.

web/WoSignExplosionToCome written at 01:51:41

2016-09-10

Link: Actually using ed

Tom Ryder's Actually using ed (via @davecheney) is a nice little walk-through of using the Unix ed editor to, well, edit some text. I've used ed a long time ago, and this inspired me to fire it up again to follow along and play around a bit; it was surprising how much I remembered. It was also nice reading about some advanced ed features that I either didn't know about or never used enough to remember.

(As always, I find it interesting what's considered basic versus advanced in things like this. To me, address ranges like '1,10p' are a basic feature.)

I'm not sure I'll ever use ed for anything real (at least outside of a real emergency), but it's nice to know that I still remember enough to find my way around in it. It's also nice that the modern GNU version of ed has picked up a number of user-friendly features like the -p argument.

(The version of ed that I'm acclimatized to is usually called 'UofT ed', and features a bunch of modifications made here by a number of people, including Henry Spencer. Among other different behaviors from classical ed, it prompts with a '*' instead of nothing and has somewhat more friendly error messages.)

links/ActuallyUsingEd written at 20:18:45

Some notes on curating the set of CAs that Firefox trusts

In light of recent events involving WoSign (and there's more), I decided to distrust their CA certificates in my Firefox setup. This turned out to be much more involved than I expected, but also much more educational.

This is not the first time I've done this sort of thing, or more exactly this is not the first time I've tried to. And in that little remark is a tale. When I've done this in the past, the CAs that I told Firefox to 'Delete or distrust' in its certificate manager came back the next time I updated my custom Firefox build, and so for years I assumed that this was because Firefox kept this information in a way that saw it overwritten on updates. This time around I decided to fix it in my source code, so that WoSign would be gone for good no matter what. While I had some initial success at this (or at least I seemed to), it didn't actually seem to make a difference in my real Firefox. Never mind that I'd told my Firefox to delete WoSign and I'd theoretically stripped it out of my Firefox source code, there it was again in Firefox's list of known CAs and their certificates.

It turns out that I was wrong about what was happening, partly because I wasn't paying close enough attention and partly because Firefox has an extremely misleading and terrible UI for this. So let me tell you what's really going on. First, the CAs that I was trying to remove didn't just come back when I updated Firefox. They turn out to come back immediately, as in 'if you close the certificate manager and reopen it, your 'deleted' CAs are back'. The reason I thought the CAs came back when I updated Firefox was simply because that's generally what prompted me to check their status again. Second, although they were back (or never removed), Firefox at least in theory wasn't trusting them.

(It's at least very hard to reliably see the trust status of a CA cert in the certificate manager. It may be impossible unless you know a lot about NSS and what you're doing. And I never attempted to find test HTTPS sites for the CAs I was distrusting.)

Firefox (or more exactly NSS) has an internal collection of built-in (root) CA certificates, and this collection can't be altered. When you tell Firefox to 'delete or distrust' such a certificate, what it actually does is copy that certificate to a certificate database in your Firefox profile and then mark it as 'do not trust this certificate'. Your profile's certificate database overrides the built-in NSS collection. So far, so good.

However, nothing in Firefox ever seems to clean up old certificates from your profile's certificate database. Even if NSS itself removes a CA certificate, your Firefox may still have it in the database and if it's there Firefox will show it in the big list of CAs and their certificates. When I explicitly distrusted WoSign through Firefox, of course it immediately got put into my profile's cert database and stayed there even though I really had taken it out of NSS. In fact my profile is rather old and it turns out that over the years, all sorts of CA certs had wound up in my profile's certificate database.

(Although I haven't tested this much, it seems that Firefox will copy CA certificates into your profile's certificate database under any number of circumstances, not just if you explicitly distrust them. I believe the certificate database may also be used to save or cache copies of intermediate certificates that Firefox has seen and validated.)

Since I've never added certificates to Firefox, only distrusted various ones, I wound up dealing with all of this by simply deleting the entire certificate database from my profile. So far this doesn't seem to have had any disastrous consequences (for example, my saved website logins and passwords are intact), and I'm happy to rely on the default state of NSS. Now that I know where this data is in the Firefox source, I can do my distrusting there.

(It turns out that Mozilla knows the certificate manager needs a lot of work (cf) but doesn't consider it a particularly high priority. I half disagree with them and half see where they're coming from.)

PS: Distrusting WoSign CA certificates is more involved than this, because at least some of WoSign's CA certs are cross-signed by StartCom, which WoSign now owns. Removing WoSign's CA certs from NSS doesn't prevent these cross-signed certificates from chaining back up to a valid CA certificate that NSS knows about and still trusts, and I'm not quite ready to distrust StartCom too (although maybe I should).

PPS: My overall conclusion is that this is a mess and I don't really understand it yet. I can do some things, but not enough.

Sidebar: The files involved and some other bits

The certificate database is probably called cert8.db in your profile. It can be examined with NSS's certutil program, and as far as I'm concerned it's best to make a copy of the live file in another directory and point certutil at your copy. If you do this, you'll discover that certutil also needs a copy of your profile's key3.db file.
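
(Concretely, something like the following, where $PROFILE stands in for your Firefox profile directory and the scratch directory is arbitrary; depending on your certutil version you may need to give the directory as 'dbm:/tmp/certcheck' to have the old cert8.db format read.)

mkdir /tmp/certcheck
cp $PROFILE/cert8.db $PROFILE/key3.db /tmp/certcheck/
certutil -d /tmp/certcheck -L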

In the certificate manager, I don't know what the difference is between a 'Builtin Object Token' and a 'Software Security Device'. Some CA certificates use one, some use the other. Having peered at the code a little bit, it's possible that a 'Software Security Device' is a CA certificate that is in your profile's certificate database (either because you sneezed on it in the certificate manager or because it's an intermediate certificate that Firefox has seen and cached).

In the full Firefox source code, the NSS CA certificate data file is security/nss/lib/ckfw/builtins/certdata.txt. CA certificates seem to have both a 'Certificate ...' and a 'Trust for ...' bit; I took both out for the CA certs that I removed, just to be sure. There may be a way to keep the certificate but explicitly tell NSS to distrust it.

web/FirefoxCuratingCATrustNotes written at 01:30:44

