Chris's Wiki :: blog/sysadmin/CheckForChangeEffects (comments)
From 90.155.35.116, 2012-11-28:
<div class="wikitext"><p>You're never going to remember to run this, and some changes are safe enough, or perhaps unrelated enough, that you won't think to run these tests.</p>
<p>Better yet would be to set something up to run load tests at off hours and graph the results (and/or just monitor performance with current natural load). Then when you have a regression you can look at the graph and say "It started happening on Friday the 13th. What did we change then?". You can also put alerts on performance dropping too far for too long so you can discover that you have a performance regression you didn't know about.</p>
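<p>A minimal sketch of the alerting side of this idea, assuming latency samples are already being recorded by a scheduled load test or by monitoring natural load. The threshold factor, window sizes, and sample values here are invented for illustration, not taken from any particular monitoring system:</p>

```python
import statistics

def check_regression(recent, baseline, factor=1.5):
    """Flag a regression when the recent median latency exceeds the
    baseline median by more than `factor` (a hypothetical threshold;
    tune it so brief blips don't page you, per the 'too far for too
    long' idea above)."""
    return statistics.median(recent) > statistics.median(baseline) * factor

# Synthetic latency samples (seconds): a healthy baseline window
# versus a window after some change slowed things down.
baseline = [0.10, 0.11, 0.09, 0.10, 0.12]
regressed = [0.25, 0.30, 0.28, 0.26, 0.27]

print(check_regression(regressed, baseline))  # True: worth investigating
print(check_regression(baseline, baseline))   # False: no change
```

<p>Using the median rather than a single sample is one way to require the drop to persist "for too long" before alerting; a real system would compare rolling windows from its metrics store.</p>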
<p>Obviously you need to figure out whether you're graphing the right thing, as you've pointed out previously (e.g., 95th percentile latency vs mean latency).</p>
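<p>The gap between mean and 95th percentile latency is easy to demonstrate: a handful of slow outliers barely move the mean but dominate the tail. A small illustration with synthetic latencies (the numbers are made up):</p>

```python
import statistics

# Mostly-fast responses plus a few slow outliers (synthetic data, seconds).
latencies = [0.05] * 95 + [2.0] * 5

mean = statistics.mean(latencies)
# quantiles(..., n=100) returns the 1st..99th percentile cut points,
# so index 94 is the 95th percentile.
p95 = statistics.quantiles(latencies, n=100)[94]

print(f"mean latency: {mean:.3f}s")    # only mildly inflated by the outliers
print(f"95th percentile: {p95:.3f}s")  # exposes the slow tail
```

<p>Here the mean stays under 0.15s while the 95th percentile lands near 2s, which is why a graph of the mean alone can hide a regression that 1 in 20 users feels.</p>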
<p>-- Perry Lorier</p>
</div>