I just found something interesting on Google Video: a one-hour Google TechTalk presentation by Goranka Bjedov about Using Open Source Tools for Performance Testing.
Goranka has some interesting things to say. She makes the point that there is really no standard terminology in performance testing circles, and goes on to prove this by giving her own definitions of performance, stress, load, scalability, and reliability testing. As an example of reliability testing she notes that “typically, when I was at AT&T, we would run for about a month at a time after everything was done just to find out that the system can actually stand up and can work fine with the load for a prolonged period of time.” In my testing circles, we would call that a soak test, but I would have been interested to hear more about the types of systems she was testing at AT&T.
The main body of her talk is about the different tools available for load and performance testing. These can be broken down into in-house, vendor and open-source tools.
Google has 55 in-house load and performance testing tools that have been developed by different groups for testing different Google products. These are very expensive to maintain and may only be used in-house, which makes any benchmarks impossible for a third party to verify. Goranka says “Before you decide to develop your own, please take a look at what is out there…”
Goranka slams vendor tools (like LoadRunner, SilkPerformer and WebLOAD) for being overly expensive and for using proprietary scripting languages. Personally, I have always thought it pointless to have to learn another language just to use a load testing tool. Unfortunately she uses LoadRunner’s scripting language as an example (“it’s C, minus the pointers”), and this is incorrect: unlike many other tools, Mercury uses standard C (and Java and VBScript).
Her recommended solution is open-source tools: “five years ago they just weren’t there, but today they are.” Her personal preference is JMeter, but she also recommends OpenSTA and The Grinder. Open-source tools have the advantage of being a good price and having source code available; she also makes the point that they use standard programming languages for scripting (although this is incorrect in the case of OpenSTA).
The disadvantages of open-source tools are that they have a steep learning curve and do not support many protocols. “The vendor tools support far more protocols than the open-source tools, but as long as you are staying in the web space, and you’re looking at HTTP/S, IMAP and POP3, the open-source tools are pretty good.”
Goranka stops short of calling the open-source tools free, because it is occasionally necessary to write code to extend their features: “Free software is free in the sense that a puppy is free.” Features that Google engineers have written for JMeter have been contributed back to the main code tree by the project maintainers, which at least saves Google the cost of maintaining a forked version.
She uses JMeter for testing web-based applications through the GUI, uses The Grinder for API-based testing, and does not use OpenSTA because it only works on Windows.
Other points during the presentation:
- You should use the same monitoring for load testing that you use for production monitoring (so you don’t have to account for the different load that a different monitoring system would put on the system).
- If you are running Unix-based systems, don’t sustain CPU above 80%.
- Google tracks a summary of every performance test in a central database. The database also contains information on every piece of software that is installed on the machines in the test environment.
- If I am unfamiliar with the system, I don’t trust it. One of the things that I have realised is that:
A) the system will fail in the place where they tell me that nothing could go wrong.
B) developers are totally delusional about their own software, and frequently they will just forget about things that they’ve done two weeks ago.
- I run every test 5 times. I want to see that I have some sort of statistical consistency.
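The “statistical consistency” check above can be sketched in code. A minimal sketch, assuming you collect one summary number (e.g. mean response time) per run; the 5% coefficient-of-variation threshold is my own assumption, not a figure from the talk:

```python
import statistics

def consistent(samples, max_cv=0.05):
    """Return True if run-to-run variation is acceptably small.

    Uses the coefficient of variation (stdev / mean); the 5%
    threshold is an illustrative assumption, not from the talk.
    """
    mean = statistics.mean(samples)
    cv = statistics.stdev(samples) / mean
    return cv <= max_cv

# Mean response times (ms) from five identical test runs
runs = [212.0, 208.5, 215.3, 210.1, 209.8]
print(consistent(runs))  # prints True -- the spread across runs is small
```

If the five runs disagree wildly, the right response is to find out why (background processes, caching effects, warm-up) before trusting any of the numbers.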
- Performance testing should not be used as a tool to find memory leaks, although it can find them.
- Performance testing without monitoring? Don’t bother. Why waste your time?
- If you are going to do any performance testing, make sure that database sizes are somewhat realistic. They don’t have to be exactly the same, but they have to be the same order of magnitude otherwise the results you are getting are completely off.
- Execute a stress test. Find out how your system fails, and where it fails. Do find out how the system handles overload: there are no good defence mechanisms against the people out there, and you can’t predict sudden popularity (e.g. Google Earth).
- Start a test after a decent warm-up period. Don’t start 100,000 users all at once.
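Ramping users up gradually is something every load tool supports (JMeter calls it the thread group’s ramp-up period). As a sketch of the idea, here is a hypothetical helper that spreads virtual-user start times evenly across a ramp-up window instead of launching everyone at t=0:

```python
def ramp_schedule(users, ramp_seconds):
    """Return a start-time offset (seconds) for each virtual user,
    spread evenly across the ramp-up window."""
    if users <= 1:
        return [0.0] * users
    step = ramp_seconds / (users - 1)
    return [i * step for i in range(users)]

# 5 virtual users ramped up over 60 seconds
print(ramp_schedule(5, 60))  # [0.0, 15.0, 30.0, 45.0, 60.0]
```

In a real test harness each offset would be a `time.sleep()` before that user’s first request; the point is simply that load arrives gradually, giving caches and connection pools time to warm up.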
- Quite often people don’t know everything that is running on a complex system. Maybe there is a low-priority process that is running with high priority; this can usually be fixed by niceing the process down. Quite often there are also debug processes still running.
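“Niceing the process down” looks like this on a Unix system. A minimal sketch: a background `sleep` stands in for the stray process (any long-running PID would do); lowering priority does not require root, only raising it does:

```shell
# Stand-in for the stray process (illustration only)
sleep 300 &
pid=$!

# Lower its priority to nice 19, the lowest
renice -n 19 -p "$pid"

# Confirm the new nice value
ps -o pid,ni,comm -p "$pid"

kill "$pid"
```

`renice` changes a running process; if you are launching the process yourself, `nice -n 19 command` achieves the same thing from the start.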
- Monitor the machines that are collecting the monitoring data and the load generators (not just the system under test).
- Performance Testing and QA is about risk analysis. If I believe it is high risk, I want to take a look at it.
- When I am doing performance testing, the first thing I try to do is eliminate the network. I want to simplify my problem. I am interested in the machines, and my hope is that the network provided will handle everything I need. Once everything is profiled and understood, we will do some tests that include the network. If you can, put everything on the same subnet and same switch. It will make you a much happier performance tester in the first pass. Debugging networking problems is (not) fun.
- (When talking about testing on smaller systems than Production.) You can’t test on a 386. Extrapolation will kill you. You will run out of some resource that you never expected, and you can’t predict this ahead of time. For final validation, you really want to get some time on the Production hardware before it goes live. If the system is not yet being used for Production, it should not be that hard to get hold of it for a week or a weekend.
- Find more open-source performance tools at opensourcetesting.org
There is another summary of her talk available on Robert Baillie’s blog.
You might also want to have a look at Becoming a Software Testing Expert, a one-hour presentation delivered on June 13, 2006 by James Bach, software testing expert and co-author of Lessons Learned in Software Testing. His presentation is available for download from his website.