I got some interesting feedback from someone in the Risk Management group on one of my Performance Test reports. Like any good Performance Test report, it compared the new version of the application with the previous version.
He said that he has seen a lot of cases where, over several versions of a piece of software, transaction times or resource utilisation have crept upwards. The differences between any two consecutive versions are too small to worry about but, cumulatively, they may make the performance of the software quite different to the original version.
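To put some rough numbers on that, here is a small sketch (the 3% per-release regression and 200 ms baseline are assumed figures, purely for illustration) showing how a slowdown that passes each individual sign-off compounds over a handful of releases:

```python
# Hypothetical illustration: a 3% per-release slowdown, small enough to
# pass each release's comparison with the previous version, compounds
# into a sizeable regression against the original baseline.
baseline_ms = 200.0          # assumed v1 transaction time
per_release_increase = 0.03  # assumed 3% regression per release

time_ms = baseline_ms
for release in range(2, 10):
    time_ms *= 1 + per_release_increase
    drift = (time_ms / baseline_ms - 1) * 100
    print(f"v{release}: {time_ms:.1f} ms ({drift:+.1f}% vs v1)")
```

After eight such releases the transaction is more than 25% slower than the original, even though no single version-to-version comparison looked alarming.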
As a consultant specialising in Load Testing, I generally don’t hang around for the full lifecycle of the software (i.e. the maintenance phase), so this is not something I had come across before. Perhaps it’s blindingly obvious to everyone else :)
This type of information will always be available in past reports, but whoever is signing off the test report is unlikely to think to look outside the document they are currently reviewing.
Unfortunately, it is not always easy to provide a good comparison with an early version. Your mix of test cases may change, and transaction definitions may change with business requirements or as you gain better knowledge of the application. A comparison is of limited use unless you are comparing like with like.
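One way to keep the comparison honest is to restrict it to transactions that are defined in both the original baseline and the current run. A minimal sketch, with made-up transaction names and response times purely for illustration:

```python
# Hypothetical sketch: compare the current run against the original
# baseline, but only for transactions present in both (like with like),
# flagging any that have drifted past a cumulative threshold.
BASELINE = {"login": 180.0, "search": 420.0, "checkout": 950.0}  # assumed v1 medians (ms)
CURRENT = {"login": 195.0, "search": 560.0, "report": 310.0}     # assumed latest medians (ms)

def cumulative_drift(baseline, current, threshold=0.20):
    """Return comparable transactions whose drift from the original
    baseline exceeds the threshold (as a fraction, e.g. 0.20 = 20%)."""
    flagged = {}
    for name in baseline.keys() & current.keys():  # like with like only
        drift = current[name] / baseline[name] - 1
        if drift > threshold:
            flagged[name] = drift
    return flagged

print(cumulative_drift(BASELINE, CURRENT))
```

Transactions that exist in only one of the two runs ("checkout" was dropped, "report" is new) are simply excluded rather than compared against something they don't match.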
It’s always interesting to hear a different perspective…