A realization about one bit of test-driven development
One of the standard pieces of instruction for TDD is that when you're about to do some coding, you should not just write the tests before the code but also run those tests and see them fail before starting on the real code. You can find this cycle described in a lot of places: write the test, run the test, see the failure, write the code, run the test, see the test pass, feel good (monkey got a pellet, yay). Running tests that I knew were going to fail always struck me as stupidly robotic behavior, so even when I wrote tests before my code (e.g., to try out my APIs), I skipped that step.
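As a minimal sketch of that cycle in Python (mymodule and do_whatever() are names invented here for illustration): you write the test first, run it and watch it fail because the function doesn't exist yet, then write do_whatever() and run the test again to see it pass.

    import unittest

    class TestDoWhatever(unittest.TestCase):
        def test_strips_whitespace(self):
            # Import inside the test so a missing function shows up
            # as a test failure (ImportError) rather than an error
            # before the test even runs.
            from mymodule import do_whatever
            self.assertEqual(do_whatever("  hi  "), "hi")

    if __name__ == "__main__":
        unittest.main()

Run this before mymodule exists and it fails; after you write do_whatever(), it passes, and you know your code is what made the difference.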
Recently I came to a realization about why this is actually a sensible thing to do (at least sometimes). The important thing about seeing your test fail first is that it verifies that your code change is what made the test pass.
(This is partly a very basic check that your test is at least somewhat correct and partly a check on the code change itself.)
Sometimes this is a stupid thing to verify because it's already clear and obvious. If you're adding a new doWhatever() method, one that didn't exist before, and calling it from the test, then your code change is clearly responsible for the test succeeding (at least in pretty much any sane codebase; your mileage may vary if you have complex inheritance trees or magic metaprogramming that catches undefined methods and so on).
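As a sketch of that metaprogramming caveat (the Service class here is invented): with a catch-all __getattr__, calling a method that was never defined may not fail the way you'd expect, so even the "new method" case isn't automatically self-verifying.

    class Service:
        # Hypothetical catch-all: any undefined method becomes a
        # no-op returning None, as some proxy/RPC layers do.
        def __getattr__(self, name):
            def method(*args, **kwargs):
                return None
            return method

    svc = Service()
    # Even though doWhatever() was never defined, this doesn't raise
    # AttributeError; it quietly returns None. A test that only checks
    # "the call didn't blow up" could pass before you write the method.
    print(svc.doWhatever())  # -> None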
But not all changes are like that. Sometimes you're making a subtle change deep in the guts of existing code. This is where you most want to verify that the code is behaving as you expect even before you make your modification; in other words, that the tests you expect to fail, and that should fail, do indeed fail. Because if a test already passes before your code change, you don't understand the existing code as well as you thought, and it's not clear what your change actually does. Maybe it does nothing and is redundant; maybe it does something entirely different from what you thought (if you have good test coverage, at least it's nothing visibly damaging).
(Alternately, your test itself has a problem and isn't actually testing what you think it is.)
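For a contrived sketch of the subtle case (the Cache class and its API are invented for illustration): suppose you're about to change a cache so that stale entries are rejected, and you write a test for the new behavior. Running it before the change tells you something either way.

    import time
    import unittest

    from mycache import Cache  # hypothetical module you're about to change

    class TestStaleEntries(unittest.TestCase):
        def test_stale_entry_rejected(self):
            c = Cache(max_age=1)
            c.put("k", "v")
            time.sleep(1.1)
            # You expect this to FAIL before your change, because the
            # current code supposedly returns stale entries. If it
            # already passes, either the code doesn't work the way you
            # thought or the test isn't exercising the path you think.
            self.assertIsNone(c.get("k"))

    if __name__ == "__main__":
        unittest.main()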
There's a spectrum between the two extremes, of course. I'm not sure where most of my code falls on it, and I still don't like the robotic nature of routinely running tests that I expect to fail, but this realization has at least given me something to think about.