Learning something from not testing jQuery 1.8's beta releases
I was reading the jQuery 1.8 release announcement when I ran across this gem:
We don't expect to get any bug reports on this release, since there have been several betas and a release candidate that everyone has had plenty of opportunities to thoroughly test. Ha ha, that joke never gets old. We know that far too many of you wait for a final release before even trying it with your code. [...]
Since we use jQuery a bit and I'm one of those 'didn't even test the release candidate' people, I was immediately seized by an urge to justify my inaction. Then I had a realization.
First, the justification. We're not using jQuery in any demanding way, or in a situation where we'd notice the improvements in 1.8. Thus we'd be testing a beta or release candidate purely to validate that it's compatible with our code. Unfortunately, testing beta code doesn't save us from having to re-test with the actual release; we can't assume that the changes between a beta and the release are harmless. Nor does reporting bugs against the beta really help us, since we're not trying to upgrade to 1.8 as fast as possible. This makes testing betas and even release candidates basically a time sink as far as we're concerned.
(If you actively want or need to use the new release then reporting bugs early (against the betas or even the development version) increases the chances that the bugs will be gone in the released version and you can deploy it the moment it passes your tests.)
All of this sounds nice and some of you may be nodding your heads along with me. But as I was planning this entry out I had the realization that what this really reveals is that we have a testing problem. In an environment with good automated tests it should take almost no time and effort to drop a new version of jQuery into a development environment and then run our tests against it to make sure that everything still works. This would make testing betas, release candidates, or even current development snapshots something that could be done casually, more or less at the snap of our fingers. That it isn't this trivial and that I'm talking seriously about the time cost of testing a new version of jQuery is a bad sign.
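(As an illustration of how small such a test setup could be, here's a sketch of a QUnit page that exercises some jQuery-using code. Everything in it is hypothetical, including the file names, the selectors, and the behavior being checked; it's not our actual app. The point is that trying out a new jQuery build would then mean changing one script tag and reloading the page.)

```html
<!-- test.html: hypothetical QUnit smoke-test page -->
<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet" href="qunit.css">
  <!-- point this at the jQuery build under test: release, RC, or beta -->
  <script src="jquery-1.8.0.js"></script>
  <script src="qunit.js"></script>
</head>
<body>
  <div id="qunit"></div>
  <!-- QUnit resets the contents of this div between tests -->
  <div id="qunit-fixture"><ul id="items"></ul></div>
  <script>
  // A hypothetical check that our list-manipulating code still
  // behaves the same under this version of jQuery.
  QUnit.test("adding items to the list", function (assert) {
    $("#items").append("<li>first</li><li>second</li>");
    assert.equal($("#items li").length, 2, "both items present");
    assert.equal($("#items li:first").text(), "first",
                 "items appear in order");
  });
  </script>
</body>
</html>
```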
What's really going on is that I haven't built any sort of automated testing for the browser view of the web app (well, actually, I haven't built tests for any of it, but especially the browser view of things). This means that testing a new version of jQuery requires going through a bunch of interactions and corner case tests in at least one browser, by hand. I effectively did this once, when I was coding all of these JS-using features, but I did it progressively (bit by bit, feature by feature) instead of all at once. And of course I was making sure that my code worked instead of testing that a new version of jQuery is as bug-free and compatible as it's expected to be; the former is far more motivating than the latter (which is basically drudge work).
This is a weakness. I'm not sure it's enough of a weakness to be worth spending the time to fix it, though.
(If I were planning to do much more client side JS programming, or if this web app was going to undergo significantly more development, things might be different. But as it is I don't see much call for either in the moderate future, and there are always a lot of claims on a sysadmin's time.)