Why CA-based SSL is not likely to be replaced any time soon
A whole lot of things have been written this year about better versions of SSL designed to get away from the many practical weaknesses of the current CA-based SSL model (you know, the one where CAs have terrible business practices, get compromised, and get leaned on by governments). Unfortunately, I don't think that any of them are likely to catch on any time soon, because of what is essentially the inverse of the false positives problem.
The reality is that forged SSL certificates are in general a very infrequent occurrence. All (or almost all) of the proposed solutions to them are at least moderately difficult; they involve additional code, additional infrastructure, additional TCP connections during SSL connections, disclosure risks for what you're making SSL connections to, and so on. In short, the proposed solutions add pain. And the reality of the world is that taking on pain (possibly a lot of it) in order to solve a rare problem is not a successful sales pitch anywhere outside of the mathematical side of security and cryptography. In particular, browser vendors are going to be naturally unenthused about anything that makes SSL connections worse in practice (slower, more failure-prone, etc) in order to deal with a rare occurrence.
The corollary of this is that any realistic replacement for CA-based SSL must be cheap and simple overall. My impression is that the only possible candidate for this is SSL certificate information in DNSSec-signed DNS records. This has the virtue that it needs almost no extra connections or queries, does not require any outside infrastructure, and does not disclose your browsing to third parties. It can also be deployed incrementally.
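To make the idea concrete, the usual form it takes is a DNS record that pins a hash of the server's certificate, which a client can verify with no extra connections. The following is a minimal sketch of building such a record; the field layout follows DANE's TLSA format (RFC 6698), and the certificate bytes here are placeholder data rather than a real certificate:

```python
import hashlib

def tlsa_record(hostname, cert_der, usage=3, selector=0, mtype=1):
    """Build a TLSA-style DNS record pinning a certificate.

    usage=3 (DANE-EE): trust this exact certificate directly.
    selector=0: match the full certificate; mtype=1: SHA-256 hash.
    """
    digest = hashlib.sha256(cert_der).hexdigest()
    return "_443._tcp.%s. IN TLSA %d %d %d %s" % (
        hostname, usage, selector, mtype, digest)

# Placeholder bytes; a real record would hash the server's DER certificate.
record = tlsa_record("www.example.com", b"placeholder certificate DER")
```

A browser that resolves this record through a DNSSec-validated path can then hash the certificate the server actually presents and compare, with no CA and no third-party infrastructure involved.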
(It has the drawback that it only works for sites that have a DNSSec trust path to the root. If your TLD has not gone DNSSec yet, you lose even if you're already there yourself.)
My thoughts on the mockist versus classicalist testing approaches
To summarize aggressively, one of the quiet long-running disputes in the OO test-driven development community is between classical TDD, where you use real or stub support classes, and mockist TDD, where you use behavior-based mock objects. My guide on this is primarily Martin Fowler (via Jake Goulding). Jake Goulding summarizes the difference as stubs assert state while mocks assert messages (or method calls). I'm mostly a classicalist as far as testing goes for no greater reason than I generally find it easier, but reading Martin Fowler's article started me rethinking my passive attitude on this. On the whole I think I'm going to remain a classicalist, but I want to run down why.
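To illustrate the difference (with hypothetical names, using Python's unittest.mock): a classicalist test hands the code a stub and asserts on the resulting state, while a mockist test hands it a mock and asserts on the messages sent to the collaborator.

```python
from unittest import mock

class Order:
    """Toy order that fills itself from a warehouse collaborator."""
    def __init__(self, item, qty):
        self.item, self.qty, self.filled = item, qty, False

    def fill(self, warehouse):
        if warehouse.has_stock(self.item, self.qty):
            warehouse.remove(self.item, self.qty)
            self.filled = True

# Classicalist style: a hand-written stub, asserting on the end state.
class StubWarehouse:
    def has_stock(self, item, qty):
        return True
    def remove(self, item, qty):
        pass

order = Order("widget", 5)
order.fill(StubWarehouse())
assert order.filled  # we only care about the end result

# Mockist style: a mock object, asserting on the messages sent to it.
warehouse = mock.Mock()
warehouse.has_stock.return_value = True
order2 = Order("widget", 5)
order2.fill(warehouse)
warehouse.remove.assert_called_once_with("widget", 5)  # we care about the path
```

Both tests exercise the same code; they differ in what counts as the thing being verified.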
One way to look at the divide is to look at what is actually being tested. When you use stubs (or real objects), what you are really testing is the end result of invoking the code under test. When you use mocks, you're testing the path that code under test used to get to its end result. So the real question is whether or not the path the code under test uses to derive its result is actually important.
Phrased this way, the answer is clearly 'sometimes'. The most obvious case is situations where the calls to other objects create real side effects; for example, exactly how you debit and credit accounts matters significantly, not just that every account winds up with the right totals in the end. This means that sometimes you should use mocks. If you're testing with stubs and the path is important, you're not actually testing the full specification of your code; the specification is not just 'it gets proper results', the specification is 'it gets proper results in the following way'.
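When the path genuinely is part of the specification, a mock can check it directly. As a sketch (with a hypothetical ledger interface), here is a test asserting that the debit happens before the credit, something a stub-based check of final balances cannot see:

```python
from unittest import mock

def transfer(ledger, src, dst, amount):
    """Move money between accounts. The order of operations matters:
    we must not credit the destination before debiting the source."""
    ledger.debit(src, amount)
    ledger.credit(dst, amount)

ledger = mock.Mock()
transfer(ledger, "checking", "savings", 100)

# A stub-based test could only inspect resulting balances; the mock
# records the exact sequence of calls the specification requires.
assert ledger.mock_calls == [
    mock.call.debit("checking", 100),
    mock.call.credit("savings", 100),
]
```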
At the same time I feel that you should not test with mocks unless the specific behavior actually is part of the specification of the thing under test. Otherwise what you are actually testing is not just that the code works but also that it has a specific implementation. I strongly dislike testing specific implementations unless necessary because I've found it to be a great way to unnecessarily break tests when you later change the implementation.
This also ties into what sort of interfaces and interactions your objects have with each other; there's a whole spectrum of closely coupled objects to loosely coupled objects to deliberately isolated objects. Where you have deliberately isolated objects, objects used to create hard and limited interface boundaries, I think you should almost always test the behavior as well as the actual outcome for things that call them (because you want to make sure that the interface is being respected and used in the right way). Conversely, closely coupled objects (where you are only using multiple sorts of objects because it fits the problem best) are a case where I'd almost never test behavior because the split into different objects is essentially an implementation artifact.
(Possibly some or all of this is obvious to experienced practitioners. One of my weaknesses as a programmer is that I learned programming before both OO and the testing boom, and I have never really caught up with either.)