Testing in the face of hard-to-recreate scenarios
Suppose that you have a program that runs another program and uses its output; for example, you have a high-level analysis program that runs a low-level state-gathering command (or several commands) to gather various information. Here is something that I have recently learned the hard way:
You really want to give your high-level program an option to get this status output from a file instead of by running the commands.
Doing this saves you from having to recreate a specific scenario each time you want to test how your high-level logic handles the situation. Instead, you reproduce each scenario once, save the output of the low-level state-gathering tools, and test your program offline against the saved output. Speaking from personal experience, this avoids a lot of tedium and makes certain that you're re-testing exactly what you think you're re-testing, not a slightly different scenario.
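As a minimal sketch of this idea in Python: the command-line option, the function name, and the `df -k` command standing in for the real low-level tool are all hypothetical, but the shape is the same either way — one function decides whether to run the real command or read a previously captured file, and everything downstream is none the wiser.

```python
import argparse
import subprocess

def get_status_output(args):
    """Return the low-level status output, either from a saved file
    (for offline testing) or by actually running the command."""
    if args.status_file:
        # Offline testing path: read output captured earlier from a
        # real scenario, so we re-test exactly the same input.
        with open(args.status_file) as f:
            return f.read()
    # Normal path: run the low-level state-gathering command.
    # 'df -k' is just a stand-in for whatever your real tool is.
    return subprocess.run(
        ["df", "-k"],
        capture_output=True, text=True, check=True,
    ).stdout

parser = argparse.ArgumentParser()
parser.add_argument(
    "--status-file",
    help="read status output from this file instead of running "
         "the low-level command (for testing)")
```

The rest of the program parses `get_status_output()`'s return value the same way in both modes, which is the whole point: the analysis logic can't tell a saved scenario from a live one.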
There are two things to watch out for with this approach. First, it only really works once you have the output of the low-level tools nailed down; it's fairly pointless if you're still deciding what information they have to output and what format it needs to be in. Second, you need to be sure that your low-level tools really do always produce the same information (and in the same format) on all of the systems you're going to run your high-level program on.
(I have run into cases where this wasn't so, even when I wrote the low-level tools myself, because some systems simply didn't provide some bits of information or reported the same scenario in different ways. Of course, this is unpleasant to find out at any point in development.)