I don’t ever like to feel that I am wasting a client’s time by doing unnecessary load tests, but a long time ago I had an experience that altered my definition of what a necessary test is.
I had come to this project after the scope had been decided and the Detailed Test Plan had been written. The overall system was made up of four applications; privately I questioned why we were bothering to test ApplicationX, an exceedingly low-volume application.
ApplicationX was not business-critical or customer-facing. It was performing acceptably with a production-sized dataset and a small number of test users. Its functionality was unlikely to put significant load on the system, and there would be a maximum of 6 concurrent users in Production. Not load testing ApplicationX seemed like a perfect way to save some time.
Fortunately the Test Manager wanted to stick with the original plan (rather than having to get sign-off on an amended one), because I was very, very wrong.
Here’s a graph of what happened as I slowly ramped up my virtual users…
I had never seen a web application fall over under a load of only two hits per second before. I didn’t think it was possible. My grandmother could handle two hits per second creating HTTP headers on a typewriter and sending the packets via carrier pigeon.
The worst part of this was that ApplicationX was running in the same JVM as an application that was business critical, so when it went down, it brought the other system down with it. A severity-3 defect suddenly became a severity-1 “oh my god, we can’t do business” defect.
So, the lesson I took away from that experience was this: even if an application is not itself a candidate for load testing, if it shares infrastructure components with an application that is business-critical, it is a good idea to do at least some rudimentary sociability testing under load to shake out any really nasty problems like this one.
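That rudimentary test doesn't need a full-blown tool. A minimal sketch of the idea, in plain Python with threads, looks something like this: add one virtual user at a time, keep them hammering a transaction, and count successes versus errors. The `ramp_load` and `transaction` names are purely illustrative, not from any particular load-testing library.

```python
# Minimal sketch of a ramped load test: every `step_seconds`, start one more
# concurrent worker until `max_users` are running, recording each call's
# outcome. `transaction` stands in for whatever request the real test sends.
import threading
import time
from collections import Counter

def ramp_load(transaction, max_users=6, step_seconds=0.1, hold_seconds=1.0):
    results = Counter()
    lock = threading.Lock()
    stop = threading.Event()

    def worker():
        # Each virtual user loops until told to stop.
        while not stop.is_set():
            try:
                transaction()
                outcome = "ok"
            except Exception:
                outcome = "error"
            with lock:
                results[outcome] += 1

    threads = []
    for _ in range(max_users):          # ramp up one virtual user at a time
        t = threading.Thread(target=worker, daemon=True)
        t.start()
        threads.append(t)
        time.sleep(step_seconds)
    time.sleep(hold_seconds)            # hold at full load for a while
    stop.set()
    for t in threads:
        t.join()
    return results
```

Even something this crude would have caught the ApplicationX problem: the first sign of trouble is usually a climbing error count (or soaring response times) at embarrassingly low concurrency, long before you reach the planned production load.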