Testing versus extensibility
One of the test-driven development mantras I've heard is 'if you haven't tested it, it doesn't work'. This leaves me with a question that I don't know the answer to: how do you test an extensible format?
Suppose that you have a data file format (for example, an XML dialect) that allows more or less arbitrary extensions to be embedded in it, and also that you have some agreement on how your program is supposed to handle extension elements that aren't known to it (ignore them, silently pass them through, or whatever). How do you test that your program really can handle random extensions to the base format?
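To make this concrete, here is a minimal sketch in Python of the 'silently ignore unknown elements' policy. The namespace URLs, the dialect, and the process() function are all made up for illustration; a real program's handling rules would come from its format specification.

```python
import xml.etree.ElementTree as ET

# The namespace of elements our program understands; anything else is
# treated as an extension element. (Hypothetical dialect for illustration.)
KNOWN_NS = "{http://example.com/base}"

def process(doc_text):
    """Walk a document, handling known elements and ignoring extensions."""
    root = ET.fromstring(doc_text)
    results = []
    for elem in root:
        if elem.tag.startswith(KNOWN_NS):
            # A known element: 'handle' it by recording its local name.
            results.append(elem.tag[len(KNOWN_NS):])
        # else: an extension element we don't recognize; silently skip it.
    return results

doc = ('<root xmlns="http://example.com/base" '
       'xmlns:x="http://example.com/ext">'
       '<item/><x:extra/><item/></root>')
print(process(doc))  # ['item', 'item']
```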
(Clearly this is one of the times when 'write the minimum amount of code to pass the test' is not what you should do.)
I suspect that the pragmatic answer is that you write test cases for all of the various ways and places that extensions can appear, and probably extra unit tests for boundary conditions in your code, and declare that if it works with your test cases, it should work for any random format extension. (Then if you find a counter-example in the field, you add more tests.)
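As a sketch of what those pragmatic test cases might look like, again using the hypothetical process() from above: enumerate the places an extension element can legally appear (first, last, in the middle, nested, alone) and check that the known elements always come through.

```python
import unittest

# Assumes the process() sketch from earlier is in scope.

class ExtensionHandlingTests(unittest.TestCase):
    # Each case puts an unknown extension in a different legal position.
    CASES = {
        "extension first":  "<x:extra/><item/><item/>",
        "extension middle": "<item/><x:extra/><item/>",
        "extension last":   "<item/><item/><x:extra/>",
        "only extensions":  "<x:extra/><x:other/>",
        "nested extension": "<item/><x:extra><x:inner/></x:extra><item/>",
    }

    def wrap(self, body):
        return ('<root xmlns="http://example.com/base" '
                'xmlns:x="http://example.com/ext">' + body + '</root>')

    def test_extensions_are_ignored(self):
        for name, body in self.CASES.items():
            with self.subTest(case=name):
                result = process(self.wrap(body))
                # Known elements survive; extensions don't disturb them.
                self.assertEqual(result, ["item"] * body.count("<item/>"))

if __name__ == "__main__":
    unittest.main()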
This must be much more of an issue for conformance test suites, which are both much more black-box and run a serious risk of people coding to the tests instead of to the specification. Possibly one can generate enough random extensions (covering all of the special cases that the specification allows) that the easiest way to write a program that passes the test suite is to have it handle extensions correctly in general.
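One hedged sketch of that approach, still assuming the hypothetical process() from earlier: randomly generate documents with extension elements of arbitrary names, positions, and nesting, and assert the property that the known elements always survive unchanged. This is essentially cheap property-based testing.

```python
import random
import string

# Assumes the process() sketch from earlier is in scope.

def random_extension(depth=0):
    """Produce one random extension element, possibly nested."""
    name = "x:" + "".join(random.choices(string.ascii_lowercase, k=6))
    if depth < 2 and random.random() < 0.3:
        return "<%s>%s</%s>" % (name, random_extension(depth + 1), name)
    return "<%s/>" % name

def random_document(n_items=3):
    """Interleave known <item/> elements with random extensions."""
    parts = ["<item/>"] * n_items
    for _ in range(random.randint(1, 4)):
        parts.insert(random.randint(0, len(parts)), random_extension())
    return ('<root xmlns="http://example.com/base" '
            'xmlns:x="http://example.com/ext">' + "".join(parts) + '</root>')

# The property: however the random extensions land, the known elements
# come through, so handling extensions in general is the easy way to pass.
for _ in range(1000):
    assert process(random_document()) == ["item"] * 3
```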