Why I want to do full end-to-end performance tests

November 6, 2012

In light of the difficulty of doing real random IO tests, you might ask why we need to do end to end tests at all. Micro-benchmarks are easier and can be used to identify specific problems; as a commenter noted, I could use some block IO testing to test for our recent problem. The short answer is that well done end to end performance testing is comprehensive.
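
(As an illustration, here is a minimal sketch of the sort of block IO micro-benchmark I mean. The device path, block size, and read count are all hypothetical placeholders; it also uses plain buffered reads, so the page cache can flatter the numbers. Point it at a test device, not a production one.)

    import os
    import random
    import time

    # Hypothetical test device and parameters; substitute your own.
    DEVICE = "/dev/sdX"
    BLOCKSIZE = 4096
    NREADS = 1000

    fd = os.open(DEVICE, os.O_RDONLY)
    # Seeking to the end tells us how big the device is.
    devsize = os.lseek(fd, 0, os.SEEK_END)
    nblocks = devsize // BLOCKSIZE

    start = time.time()
    for _ in range(NREADS):
        # Without O_DIRECT these are buffered reads, so the page
        # cache may satisfy some of them and skew the results.
        os.lseek(fd, random.randrange(nblocks) * BLOCKSIZE, os.SEEK_SET)
        os.read(fd, BLOCKSIZE)
    elapsed = time.time() - start
    os.close(fd)

    print("%d random %d-byte reads: %.2f ms/read average" %
          (NREADS, BLOCKSIZE, elapsed * 1000.0 / NREADS))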

The advantage of good end to end performance tests is that they find all problems, even problems that you don't know about and didn't foresee, as well as emergent problems that result from several layers interacting. By contrast, performance tests of specific components and other micro-benchmarks are more like negative tests; they're great for ruling out specific problems that you can foresee, but they do not necessarily test for things that you haven't foreseen. A micro-benchmark certainly can find a new problem but it won't necessarily do so; it depends on whether or not the problem manifests symptoms that intersect with what the micro-benchmark measures.

Of course in theory this is true of end to end performance tests too. The advantage of end to end tests is that they have what I'll call a greater testing surface. Because they go from the top of your system to the bottom (just as your real IO does), they touch a lot of components, and a significant problem with any one of them should show up in the test results. If it doesn't, either the problem isn't actually significant for your production IO load or your end to end tests aren't quite good enough.
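
To make the contrast concrete, here is a minimal sketch of what an end to end random read test might look like; the file path and parameters are hypothetical. Because every read goes through the full stack (VFS, filesystem, NFS or the block layer, and the disks) the way real IO does, a significant problem anywhere in that stack should show up in the per-read latencies:

    import os
    import random
    import time

    # Hypothetical: a large pre-created file on the filesystem under test.
    TESTFILE = "/some/fs/testfile"
    BLOCKSIZE = 8192
    NREADS = 500

    fd = os.open(TESTFILE, os.O_RDONLY)
    nblocks = os.fstat(fd).st_size // BLOCKSIZE

    # Time each read individually so we can look at the latency
    # distribution instead of an average that can hide outliers.
    latencies = []
    for _ in range(NREADS):
        os.lseek(fd, random.randrange(nblocks) * BLOCKSIZE, os.SEEK_SET)
        t0 = time.time()
        os.read(fd, BLOCKSIZE)
        latencies.append(time.time() - t0)
    os.close(fd)

    latencies.sort()
    print("median %.2f ms, 99th percentile %.2f ms" %
          (latencies[NREADS // 2] * 1000.0,
           latencies[int(NREADS * 0.99)] * 1000.0))

Note that nothing in the output points at any particular layer; more on that drawback next.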

Conversely, the drawback of end to end tests is that while they may tell you that there's a problem, they probably won't tell you where it is; because they touch so many components, you can't necessarily tell which component has the problem. That's one of the times when you turn to micro-benchmarks and specific fine-grained tests.

(At this point you can note that an end to end random IO test is itself a kind of micro-benchmark. This is completely true; random read IO is just one component of our IO load. But I already have tests for sequential IO and I'm not as worried about random write IO and filesystem operations right now.)


Comments on this page:

From 143.48.117.82 at 2012-11-10 10:40:02:

The difficulty in end-to-end performance testing, of course, is that it's vulnerable to the observer effect. There are certain things that are easy to test, like latency; these are often quantifiable without making a measurable performance impact on the production environment you're trying to keep tabs on. Throughput testing, not so much. If you decide you want to periodically DDoS your public-facing website to see how many concurrent clients it's capable of serving, it's going to have negative impacts on your user community.
