Mozilla, Symantec, SHA-1 certificates, and the balance of power
In theory, all CAs are supposed to have stopped issuing SHA-1 certificates on January 1st. As covered in Payment Processors Still Using Weak Crypto (via), Mozilla has now announced that they will allow Symantec to issue a limited number of SHA-1 certificates. The reactions I've seen are reasonably harsh. While I don't entirely disagree, I have an additional cynical perspective that's based on the balance of power between CAs and browsers.
Let us be blunt here: Symantec wants to issue these certificates. They are undoubtedly getting to charge a large amount of money for them (especially under the circumstances) and we have plenty of evidence that many CAs do not care one whit about phasing out SHA-1. Browser demands to the contrary are an irritating distraction from the important business of taking money for random numbers.
Browser people are caught in a difficult three-way situation. On the one hand, they hold significant power, given that the only purpose of most TLS certificates is to display a lock in the browser. On the other hand, browsers are generally commodities themselves. If a browser stops working for you, which includes 'letting you browse HTTPS sites that you want to browse', most people are going to switch to one that does work. The result is that browsers are engaged in a giant game of CA chicken with each other, much like XHTML chicken and DRM chicken. If all browsers remove a popular CA for violating policies, all is fine. But if one or more browsers blink and do not do so, the remaining strict browsers lose; some decent amount of their users will find important sites not working and move to one of the browsers that still include the CA. So if you are a CA, you actually hold a fair amount of power over browser vendors, provided that you can deal with them in isolation. Finally, on the gripping hand, I think that many browser people genuinely want to do what's right, which includes not screwing people in various ways, especially over risks that are (unfortunately) theoretical at the moment.
If Mozilla were to take a hard line here but no other browser were to do so, it feels likely that Symantec would issue those SHA-1 certificates anyways. If Mozilla were to make Firefox stop trusting Symantec certificates, a lot of people would switch away from Firefox (and it doesn't have a huge user base any more). If Mozilla didn't, its threats to do so in the future to misbehaving CAs would be much less credible. So it comes down to whether other browsers will pull Symantec's root certificates over this. Will they? I suspect not, although we'll find out soon enough.
(For the record: I don't think that the Mozilla people involved made the decision they did because of fear of this happening. I'm sure they're sincere in their desire to do the right thing, and I'm sure the harm to various people of Symantec not issuing these certificates weighed on their minds. But I can't see this situation and not think of the balance of power behind the scenes and what would probably have happened if Mozilla's decision had gone differently.)
I'm often an iterative and experimental programmer
I've been doing a significant amount of programming lately (for a good cause), and in the process I've been reminded that I'm often fundamentally an iterative and explorative programmer. By this I mean that I flail around a lot as I'm developing something.
In theory, the platonic ideal programmer plans ahead. They may not write more than they need now, but what they do write is considered and carefully structured. They think about the right data structures and code flow before they start typing (or at the latest as they start typing) and their code is reasonably solid from the start.
I can work this way when I understand the problem domain I'm tackling well enough to know in advance what's probably going to work and how I want to approach it. This works even (or especially) for what people sometimes consider relatively complicated cases, like recursive descent parsers. But put me in a situation where I don't know in advance what's going to work and roughly how it's all going to come out, and things get messy fast.
In a situation of uncertainty my approach is not to proceed cautiously and carefully, but instead to bang something ramshackle together to get experience and figure out if I can get an idea to work at all. My first pass code is often ugly and hacky and almost entirely the wrong structure (and often contains major duplication or badly split functionality). Bits and pieces of it evolve as I work away, with periodic cleanup passes that usually happen after I get some piece of functionality fully working and decide that now is a good time to deal with some of the mess. Entire approaches and user interfaces can be gutted and replaced with things that are clearly better ideas once I have a better understanding of the problem; entire features can sadly disappear because I realize that in retrospect they're bad ideas, or just unnecessary.
(It's very common for me to get something working and then immediately gut the working code to rebuild it in a much more sensible manner. I have an idea, I establish that the idea can actually be implemented, and then I slow down to figure out how the implementation should be structured and where bits and pieces of it actually belong.)
Eventually I'll wind up with a solid idea of what I want from my program (or code) and a solid understanding of what it takes to get there. This is the point where I feel I can actually write solid, good code. If I'm lucky I have the time to do so and it's not too difficult to transmogrify what remains of the first approach into this. If I'm unlucky, well, sometimes I've done a ground up rewrite and sometimes I've just waited for the next time I tackle a similar problem.