Performance testers sometimes get hung up on tool-based testing. Admittedly, most of the really interesting problems appear when there are more than a few concurrent users on a multi-user system, but it is also important to verify that an application performs acceptably when it is not under load.
Even if response times seem fine during functional testing, the functional test team may not have thought to check the effect of large datasets on response times. It’s not just a question of whether a table needs an index somewhere; sometimes response times grow quadratically or worse, rather than linearly, as the dataset grows.
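The indexing half of that point is easy to demonstrate. The sketch below uses Python’s built-in sqlite3 module with an invented `orders` table (the table, column, and index names are all hypothetical) to show how adding an index changes a lookup from a full table scan to an index search:

```python
import sqlite3

# A minimal sketch of the indexing point, using Python's built-in sqlite3
# module. The table and column names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(i, f"cust{i % 100}") for i in range(10_000)],
)

def query_plan(conn):
    """Return SQLite's query plan for a lookup by customer."""
    row = conn.execute(
        "EXPLAIN QUERY PLAN "
        "SELECT * FROM orders WHERE customer = 'cust7'"
    ).fetchone()
    return row[-1]  # last column is the plan description, e.g. 'SCAN orders'

print("before index:", query_plan(conn))  # full table scan: O(n) per lookup
conn.execute("CREATE INDEX idx_customer ON orders (customer)")
print("after index: ", query_plan(conn))  # index search: O(log n) per lookup
```

The scan is invisible on a 1,000-row test database and crippling at 10 million rows; the index keeps the lookup cost roughly logarithmic no matter how large the table gets.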
This is especially noticeable with poorly written batch jobs. The job might take 1 minute to process 1,000 records, but it does not necessarily follow that processing 100,000 records will take 100 minutes.
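The kind of batch job that behaves this way can be sketched with a hypothetical duplicate-check step that compares each record against everything seen so far. Counting comparisons rather than wall-clock time keeps the scaling visible and deterministic:

```python
# Sketch of a hypothetical badly written batch step: a duplicate check that
# compares every record against all previously seen records. Comparisons
# grow quadratically, so 10x the records costs roughly 100x the work.

def process_batch(records):
    """Return duplicates, counting comparisons to show how the work scales."""
    comparisons = 0
    duplicates = []
    seen = []
    for record in records:
        for earlier in seen:          # O(n) scan per record -> O(n^2) total
            comparisons += 1
            if earlier == record:
                duplicates.append(record)
                break
        else:
            seen.append(record)
    return duplicates, comparisons

if __name__ == "__main__":
    _, small = process_batch(list(range(1_000)))
    _, large = process_batch(list(range(10_000)))
    print(f"1,000 records:  {small:,} comparisons")
    print(f"10,000 records: {large:,} comparisons")
    print(f"ratio: {large / small:.0f}x")   # ~100x work for 10x the data
```

Ten times the records costs roughly a hundred times the comparisons, which is exactly the kind of growth a 1,000-record functional test will never reveal.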
For an example of an application that seems to have implemented Shlemiel the painter’s algorithm, take the Replace function in Notepad. Performing a find-and-replace on a small file is very fast, but performance quickly degrades as the file gets larger.
As you can see from the graph, the increase in response times is definitely not linear as the file size increases.
With an even larger file containing 10,000 lines, the operation takes 28 minutes. Compare this with the response times of another text editor such as TextPad, which processed every file up to and including the 10,000-line one in less than 1 second.
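One way a Replace operation can end up quadratic is sketched below. To be clear, this is an assumption about the kind of mistake involved, not a claim about Notepad’s actual source: the naive version splices a brand-new copy of the whole buffer for every match, so k matches in an n-character file cost O(n·k) copying.

```python
# A deliberately Shlemiel-style find-and-replace (hypothetical -- Notepad's
# actual implementation is not public). Every match rebuilds the entire
# string, so each replacement copies the whole buffer.

def shlemiel_replace(text, old, new):
    """Replace one occurrence at a time, rebuilding the full string each time."""
    pos = 0
    while True:
        pos = text.find(old, pos)
        if pos == -1:
            return text
        # This splice allocates and copies a brand-new copy of the buffer.
        text = text[:pos] + new + text[pos + len(old):]
        pos += len(new)

def single_pass_replace(text, old, new):
    """One linear scan over the input; same result, linear total work."""
    return text.replace(old, new)

if __name__ == "__main__":
    sample = "the quick brown fox\n" * 1_000
    fast = single_pass_replace(sample, "fox", "dog")
    slow = shlemiel_replace(sample, "fox", "dog")
    assert fast == slow  # identical output, wildly different cost at scale
```

Both functions produce identical output; the difference only shows up as the file grows, which matches the shape of the graph above.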
If you would like to experiment with this feature, text files of different sizes are available here (155kB zipped).