I got some interesting feedback from someone in the Risk Management group on one of my Performance Test reports. Like any good Performance Test report, it compared the new version of the application with the previous version.

He said that he has seen a lot of cases where, over several versions of a piece of software, transaction times or resource utilisation has increased. The differences between versions are too small to worry about but, cumulatively, they may cause the performance of the software to be quite different to the original version.

As a consultant specialising in Load Testing, I generally don’t hang around for the full lifecycle of the software (i.e. the maintenance phase), so this is not something I had come across before. Perhaps it’s blindingly obvious to everyone else 🙂

This type of information will always be available in past reports, but whoever is signing off the test report is unlikely to think to look outside the document they are currently reviewing.

Unfortunately it is not always easy to provide a good comparison to an earlier version. Your mix of test cases may change, or transaction definitions may change with business requirements or as you gain better knowledge of the application. And a comparison is of limited usefulness unless you are comparing like with like.
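One way to make the creep visible, when the transaction definitions themselves haven't changed, is to have the report compare the current release not only with the previous release but also with the oldest release that is still a fair comparison. Here is a minimal sketch of that idea in Python; the release names, transaction names, response times and thresholds are invented purely for illustration.

```python
# Rough sketch: compare the latest release's transaction times against both
# the previous release and the original baseline, so that small per-release
# increases that add up over time still get flagged.
# All release names, transactions, timings and thresholds are illustrative.

history = {
    "v1.0": {"login": 1.20, "search": 2.10, "submit_order": 3.00},
    "v1.1": {"login": 1.30, "search": 2.12, "submit_order": 3.05},
    "v1.2": {"login": 1.41, "search": 2.16, "submit_order": 3.08},
    "v1.3": {"login": 1.52, "search": 2.18, "submit_order": 3.12},
}

PER_RELEASE_THRESHOLD = 0.10  # 10% slower than the previous release
CUMULATIVE_THRESHOLD = 0.25   # 25% slower than the original baseline

releases = list(history)
baseline = history[releases[0]]
previous = history[releases[-2]]
current = history[releases[-1]]

for txn, time_now in sorted(current.items()):
    vs_previous = (time_now - previous[txn]) / previous[txn]
    vs_baseline = (time_now - baseline[txn]) / baseline[txn]
    flags = []
    if vs_previous >= PER_RELEASE_THRESHOLD:
        flags.append(f"{vs_previous:+.0%} vs previous release")
    if vs_baseline >= CUMULATIVE_THRESHOLD:
        flags.append(f"{vs_baseline:+.0%} vs original baseline")
    status = "FLAG: " + ", ".join(flags) if flags else "within thresholds"
    print(f"{txn}: {time_now:.2f}s ({status})")
```

In this made-up data, no single release-to-release change in the login transaction exceeds 10%, but the cumulative drift against the original baseline does, which is exactly the situation the reviewer was worried about.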

It’s always interesting to hear a different perspective…

 

Published On: June 11, 2004

3 Comments

  1. Simon Norris May 23, 2007 at 10:57 pm

    I was brought into my current role as a permanent test analyst, in place of random consultants brought in on an infrequent basis ‘just to test’. Because I have been here now for three years, I have the ability to compare not just two releases, but up to 12 releases (quarterly releases for three years). As you correctly say, the ‘creep’ isn’t huge between releases, but going back historically can show graphs that look more like Vusers ramping up than actual performance graphs.

    To cover the ‘like for like’ comparison you describe, what we do is ‘overlap’ performance test profiles. We refresh the workload profile on a yearly basis, so every fourth release is actually made up of two performance tests: one to compare the current release with previous releases, and a second to run the current workload seen in the production environment.

    While this means that every year I get twice as much work to do, it does mean that we can follow the course of the application throughout its entire life.

  2. Danny January 7, 2009 at 11:24 pm

    Do you have any sample of the CPC exam for 2009?

  3. Aarabhi August 28, 2013 at 11:52 am

    Hi Stuart,

    Thank you, this website helps me with many queries like a solution repository.

    I was once asked about this scenario in an interview.
    It is for a customer care application that makes web service calls from the website.
    The UI is loaded with multiple options for the executive to raise a ticket.
    The application is very slow when used manually, but through the script the transactions take no more than the defined SLA. The script is pretty straightforward and makes the right service calls.
    The client needs to highlight the page rendering time for every page/transaction.
    Could you please suggest how to collect the page rendering time for transactions?

    Regards
