2009-04-27
One of my TDD weaknesses: mock objects for complex objects
I am not really tuned in to test-driven development yet, although I've written some programs that way (and found the approach quite valuable). As a result, I have some weaknesses, some TDD things that I just haven't acclimatized to yet. One of them is mock objects, specifically mock versions of complex objects; I don't like them and thus I don't use them. In trying to understand my dislike of them, I think that they seem too complex and fragile, for at least two reasons.
First, how do I know that I've got the behavior of the mock objects correct? By definition, complex objects generally have complex behavior, which can easily mean that the mock versions also need relatively complex behavior, and that opens up the possibility of bugs in the mocks themselves.

Second, how do I know that the mock objects are still behaving the same as the real objects? If I change the behavior of the real ones (complete with changing their unit tests to match), I may or may not remember to update the mock versions, and I may or may not realize that the change should alter the mock versions' responses in some way (a drift that the sketch below illustrates).
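To make the second worry concrete, here is a minimal sketch in Python (with entirely hypothetical class and method names) of how a hand-written mock can silently drift once the real object's behavior changes:

    class RealInventory:
        """The real object. reserve() now raises on unknown items,
        a recent behavior change (made with its own tests updated)."""
        def __init__(self, stock):
            self.stock = dict(stock)

        def reserve(self, item, count):
            if item not in self.stock:
                raise KeyError(item)      # the new behavior
            if self.stock[item] < count:
                return False
            self.stock[item] -= count
            return True

    class MockInventory:
        """The mock, still written against the *old* behavior: it
        returns False for unknown items, so code tested against it
        never sees the KeyError that the real object now raises."""
        def __init__(self, stock):
            self.stock = dict(stock)

        def reserve(self, item, count):
            if self.stock.get(item, 0) < count:
                return False              # stale: the real object raises here
            self.stock[item] -= count
            return True

    # Code exercised against the mock passes quietly...
    assert MockInventory({}).reserve("widget", 1) is False
    # ...while RealInventory({}).reserve("widget", 1) would raise KeyError.

Nothing forces MockInventory to track RealInventory, so tests that use only the mock keep passing while the code they cover would now blow up against the real thing.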
Both of these make me feel that mock versions of complex objects are fragile. I don't like fragile tests; they're a recipe for problems and frustrations, and at least in my current state of TDD awareness, frustrations can easily lead to abandoning tests entirely. My current solution is to use more or less real objects, but to have a unittest test ordering that matches the bottom-up dependencies of the program. If early unit tests fail, I know that it is pointless to go on; higher levels of code would just see a whole series of cascading failures as things malfunction underneath them.
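As an illustration of what I mean, here is a minimal sketch using modern Python's unittest module (the test classes are hypothetical placeholders for a program's layers): the suite is assembled in explicit bottom-up order, and failfast stops the run at the first failure, before the cascade starts.

    import unittest

    class TestStorageLayer(unittest.TestCase):      # lowest layer, tested first
        def test_roundtrip(self):
            self.assertEqual(1 + 1, 2)              # placeholder assertion

    class TestCacheLayer(unittest.TestCase):        # depends on storage
        def test_hit_and_miss(self):
            self.assertTrue(True)                   # placeholder assertion

    class TestRequestHandler(unittest.TestCase):    # depends on everything below
        def test_basic_request(self):
            self.assertTrue(True)                   # placeholder assertion

    if __name__ == "__main__":
        # Build the suite in explicit bottom-up dependency order,
        # instead of relying on unittest's default alphabetical discovery.
        loader = unittest.TestLoader()
        suite = unittest.TestSuite()
        for cls in (TestStorageLayer, TestCacheLayer, TestRequestHandler):
            suite.addTests(loader.loadTestsFromTestCase(cls))
        # failfast=True aborts at the first failure, so a broken low
        # layer doesn't drown you in higher-level cascade noise.
        unittest.TextTestRunner(failfast=True).run(suite)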
(It is possible that part of my problem here is that I am confusing unit tests with some other sort of testing, such as functional, integration, or end-to-end tests, as a result of having only an informal exposure to TDD.)
The problems of over-documenting things
There is a certain school of thought in system documentation that believes, to stereotype it, that there is no such thing as being too explicit or having too many examples. Much of Sun's Solaris documentation is a great example of this school.
Unfortunately, these people are wrong. There is such a thing as too much documentation, because having too much has a number of problems:
- your documentation becomes less and less readable, as the important
  things are buried under a flood of examples, cross-references,
  and low-level walkthroughs of how to do everything in sight. All
  of this is irrelevant clutter if I am trying to understand your
  system.
- your documentation becomes less useful as a reference work, because
it is harder to skim it to extract the useful piece of information
that I need to jog my memory.
- it is potentially insulting to your audience (especially if you are
writing it for a specific local audience), because it implicitly
assumes that the people reading it don't already know all of the
basic things and have to be walked through everything in detail.
(Even if people don't find it actively insulting, they are probably going to assume that your documentation is not aimed at them and they should go find something else.)
In short, belabouring the obvious takes up valuable space and people's limited time, distracts people, and can annoy them. (And belabouring the obvious is exactly what writing really detailed documentation amounts to.)
In theory you can get around some of these problems by pushing your detailed examples and the like off into appendices. This avoids some of the issues, but it still has the drawback that you are writing extra material, material that in my opinion is mostly pointless.
(This is not to say that examples and being 'obvious' are always bad things; per DocumentationAssumptions, sometimes they're necessary.)