Since the release of Performance Center 11.0, the product has been based on the HP Application Lifecycle Management (HP ALM) platform (i.e. Quality Center). This architectural change means that functional testers and performance testers can share a combined testing platform – a single repository for requirements, test cases, and defects – giving full visibility across the functional, non-functional and performance testing areas.

It sounds great, but there are pros and cons to combining Performance Center and ALM/Quality Center…

Prior to version 11.0, Performance Center was completely separate from ALM/Quality Center. As companies upgraded from Performance Center 9.52 to version 11.0 or 11.5, they had to make an important decision: should they maintain separate instances, or combine them?

Some customers are still using Performance Center 9.52 and haven’t had to make the decision yet. Choosing which deployment pattern to use will have a big impact on the users of these tools, and should be carefully thought through.

QC/ALM users and Performance Center users have different needs

Quality Center/ALM users:

  • Large number of users (e.g. 500 accounts, 250 concurrent users).
  • Downtime is unacceptable. Functional testers use the tool 24×7 (onshore/offshore testers), and an outage will affect a large number of users. A big outage (e.g. half a day for an upgrade) must be planned months in advance and rehearsed in a non-production instance.
  • Upgrade cycle is quite slow. The feature set is stable and mature. Users would prefer to be using a 3-year-old version of the product than risk minor changes.

Performance Center users:

  • Small number of users (e.g. 20 accounts, 10 concurrent users).
  • Downtime can be negotiated, as there will only be a small number of users at any time, and performance testers are more easily able to move their test execution windows to accommodate downtime.
  • Upgrade cycle is much faster. Support for new protocols is added with each release. Many users want to use the current feature pack so they can test the latest version of SAPGUI or Silverlight, etc.

The main problem I see is that performance testers desperately want to upgrade to the latest version, but the people who manage ALM don’t want to do it as the functional testers have no compelling reason to want an upgrade, and are highly resistant to any change that may cause downtime.

Many companies who own Performance Center are choosing to maintain two separate instances of ALM – one for performance testers, and one for functional testers.

Synchronizing data between Performance Center and ALM

The obvious solution to the problem of meeting the needs of both performance testers and functional testers, while still giving a unified view across both testing streams, is to have two separate instances, but synchronize the information between them. This option seems to have all of the benefits with none of the drawbacks.

HP provides a tool called the ALM Synchronizer that sounds like it should help. In theory, it should work like this:

[Diagram: a Performance Center instance and an ALM instance exchanging data through the ALM Synchronizer]

Unfortunately, the Synchronizer tool does not support Performance Center. Here is the relevant section from the HP ALM Synchronizer v1.40 Installation Guide:

Defects synchronization between two HP ALM 11.00 endpoints is not supported for HP Quality Center Starter Edition, HP Quality Center Enterprise Edition, or HP ALM Performance Center Edition.

HP (or an HP partner) should urgently work to fix this, as a lot of Performance Center customers would benefit from the ability to synchronize data between HP ALM/Quality Center and Performance Center.
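
Until that happens, a determined team could hack together a crude one-way defect copy using the REST API that ships with ALM 11. The sketch below is illustrative only: the server names, accounts, and domain/project names are placeholder assumptions, and a real synchronizer would also need field mapping, duplicate detection, and conflict resolution.

    # A rough one-way defect copy between two ALM 11 instances, using the
    # ALM REST API. All server names, accounts, and projects below are
    # hypothetical placeholders.
    import requests

    SOURCE = "http://pc-alm.example.com:8080"  # hypothetical Performance Center instance
    TARGET = "http://qc-alm.example.com:8080"  # hypothetical functional testing instance

    def alm_login(base_url, user, password):
        """Authenticate and return a session holding the LWSSO cookie."""
        session = requests.Session()
        response = session.get(base_url + "/qcbin/authentication-point/authenticate",
                               auth=(user, password))
        response.raise_for_status()
        return session

    def get_defects_xml(session, base_url, domain, project):
        """Fetch the defect collection for a project as raw XML entities."""
        url = "%s/qcbin/rest/domains/%s/projects/%s/defects" % (base_url, domain, project)
        response = session.get(url, headers={"Accept": "application/xml"})
        response.raise_for_status()
        return response.text  # <Entities><Entity Type="defect">...</Entity></Entities>

    def create_defect(session, base_url, domain, project, entity_xml):
        """POST a single <Entity> document to create a defect in the target project."""
        url = "%s/qcbin/rest/domains/%s/projects/%s/defects" % (base_url, domain, project)
        response = session.post(url, data=entity_xml,
                                headers={"Content-Type": "application/xml",
                                         "Accept": "application/xml"})
        response.raise_for_status()

    # Example (hypothetical): copy everything from the source project.
    # src = alm_login(SOURCE, "pc_sync", "secret")
    # entities_xml = get_defects_xml(src, SOURCE, "PERF", "LOADTEST")
    # ...parse entities_xml, map the fields, then create_defect() on the TARGET.

Even at sketch level the gaps are obvious (no field mapping, no change tracking, no conflict handling), which is exactly why a supported synchronizer would be so welcome.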

Update: in a comment below, Ray mentions that Tasktop Sync might be a solution for this problem.

It’s time to decide!

Lots of HP customers are still using Performance Center 9.52, so haven’t had to make the choice of whether to combine their instances or keep them separate. Time is running out for them to make their decision, as they will have to upgrade to an ALM-based version of Performance Center before the end of 2013.

The End of Support date for Performance Center 9.x is December 31, 2013. As of this date, all customer support activities for this version will cease, including:

  • Telephone support
  • Security Rule updates
  • Product upgrades

Performance Center 8.1x reached end of support on June 30, 2008.

Performance Center 9.x End of Support customer letter (PDF): https://www.myloadtest.com/wp-content/uploads/migrated-resources/performance-center-9x-eol-customer-letter.pdf

If you work at a company that owns Performance Center, and you have an opinion on the pros and cons of merging Performance Center and ALM/Quality Center, please share your experiences in the comment section below.


Published On: August 1, 2013

9 Comments

  1. Stuart Moncrieff August 3, 2013 at 3:04 pm

    Betteridge’s law of headlines states that “Any headline which ends in a question mark can be answered by the word no.”

    • Natarajan October 5, 2013 at 5:28 am

      Hey Stuart, I need a clarification on Performance Center 11.52.

      From the installation guide, I learned that "Recording VuGen scripts is supported on hosts on 32-bit machines only".

      We have bought PC 11.52 with Windows 2008 R2 Std 64-bit SP1.

      Does VuGen support recording here? If not, do we have any patches from HP that support this?

      Please help me in understanding this.

      Thanks!!!

  2. Con August 4, 2013 at 2:04 pm

    We have a “worst of both worlds” situation at my company. We are stuck on PC 11.0 because they won’t upgrade ALM, and we’re not even using any of the features that made them decide that the products should be combined. Performance tests are in separate projects from the functional tests. There are only 2 big projects that contain all the scripts and scenarios for all the performance testers in the whole company. Requirements aren’t linked to test cases, which aren’t linked to defects. It’s a poorly thought-out mess that has all the drawbacks you mentioned, and none of the advantages.

    • PC Admin August 9, 2013 at 12:47 pm

      I totally agree. And just because it is a unified platform doesn’t mean that the same person should be managing both parts. Different skills are required to manage Performance Center effectively.

      Also, something that should be added to the table in the post is that the deployment needs of ALM/QC and PC are different. ALM just needs the user interface to be accessible from everywhere in the company, but Performance Center must be able to generate load against (and monitor) servers in all areas of the network. This is much more complicated, and makes the network location where you deploy it really important.

      Also, functional tests usually have clear pass/fail criteria and should be linked back to a requirement. Performance tests are much more fuzzy, so this is difficult.

      Nice article anyway.

  3. Ray Ffrench August 7, 2013 at 1:29 pm

    You make several good points, and from my viewpoint I think there are pluses and minuses to be added up.

    Some of my thoughts:

    • The sharing of requirements is of limited value, as the functional testers don’t care what the non-functional/performance testers are doing, and vice versa. Coverage linkage reports (a feature of QC) would only be seen by each tester type (i.e. filtered for functional or filtered for non-functional).
    • Executives (cross-project) and senior test managers are the main consumers of combined functional/non-functional reporting, so this audience is limited. Given that the audience is small, could this reporting need be met elsewhere using “Business Views” (i.e. the visual SQL extractor) into Microsoft Excel? (A rough sketch of this idea appears after this list.)
    • Common ground is the defect, which everyone cares about, and hence your idea of sharing them is valid. Other tools, like Tasktop Sync, can do this, as it is tool-agnostic (it can sync between two ALM endpoints, whereas the HP Synchronizer will not).
    • Performance testing drives the protocol layer and hence is linked to changes and variants in that layer. These change often and are highly technical, so the performance tester needs updates and patches more frequently. Functional testers are NOT connected to the protocol layer, are not exposed to the “nitty gritty” changes, and want stability over technological change.
    • Given the “grey” nature of the success of a performance test, particularly when attempting performance optimization, the pass/fail status is not usually linked to completion, whereas functional testing is “off to the pub” when they see green lights and “pass” statuses. Reporting performance testing the same way as functional testing has this limitation, as many, many runs could be completed before any test is really complete. Even the goals for a performance test may change mid-stream depending on results, whereas functional testing is not subject to this constraint.
    • Due to changes in methodology (agile approaches), and the environmental pressure applied to the performance test world late in the test cycle, the environment/configuration change needs of the performance tester are greater than those of the functional tester, which strengthens the case to separate out the ALM/PC testing space.
    • Central reporting across ALM/PC is a definite plus for a merged platform, but again the reporting need is “pass/fail”, and rarely does anyone drill into detailed performance test results. The current product doesn’t produce standard reports across functional/non-functional, so this reporting is always custom anyway.
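
    To make the “Business Views into Excel” idea concrete, here is a minimal sketch that pulls a defect summary straight from the project database into a CSV file that opens in Excel. The BUG table and BG_* columns are part of the standard Quality Center project schema, but the DSN, the read-only account, and the grouping are assumptions for illustration.

        # A minimal defect-summary extract for management reporting.
        # Assumes a hypothetical read-only ODBC DSN pointing at one ALM
        # project schema; the BUG table and BG_* columns are from the
        # standard Quality Center project database schema.
        import csv

        import pyodbc  # third-party ODBC wrapper

        connection = pyodbc.connect("DSN=alm_project;UID=report_reader;PWD=change_me")
        cursor = connection.cursor()
        cursor.execute("""
            SELECT BG_STATUS, BG_SEVERITY, COUNT(*) AS DEFECT_COUNT
            FROM BUG
            GROUP BY BG_STATUS, BG_SEVERITY
            ORDER BY BG_STATUS, BG_SEVERITY
        """)

        # Write the result set to a CSV file that opens directly in Excel.
        with open("defect_summary.csv", "w", newline="") as output:
            writer = csv.writer(output)
            writer.writerow(["Status", "Severity", "Count"])
            writer.writerows(cursor.fetchall())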

    My 2c

  4. Stuart Moncrieff August 7, 2013 at 2:48 pm

    I think that this could be partly an issue around framing – influencing how people think about an issue/concept/software tool by the language you use. Here are some examples:

    Sounds good vs. sounds bad:

    • a standardized approach vs. a “cookie-cutter” approach, “one size fits all”
    • a centralized competency area, specialist group, or center of excellence vs. organizational silos, a group having a monopoly on a service
    • a repeatable process, a well-planned process vs. a rigid process that is not adaptable
    • a clear process, with well-defined stage gates vs. micro-managed, focusing on tasks rather than outcomes
    • a unified solution, a single solution vs. unable to choose the best tool for the job
    • all in one place vs. monolithic
    • an agile, responsive approach vs. an unplanned, chaotic approach
    • “best of breed” software selection vs. a system made up of disparate components, nothing integrated
    • customized, tailored for your unique situation vs. non-standard, not maintainable, not re-usable

    Language is important. Is “unification” always a good thing? Should everything be unified? If I were designing a new house, should I be “unifying” the dining area and the bathroom? (Sounds efficient!)

    Sometimes the most important factor in getting a decision accepted is how you describe the solution to the decision maker.

    • PC Admin August 9, 2013 at 12:35 pm

      When you talk about framing an idea so that it sounds great to the decision makers, you are actually cynically exploiting a management anti-pattern that is common in large corporates.

      It is bad when the decision maker is not the same person who will be using (or administering) the new system. This is how big companies get lumbered with crap software that no one likes using.

  5. Andrew Sliwkowski August 18, 2013 at 9:43 am

    I am assuming that there is value (throughput) when Quality of Service engineers (QA + Performance + IT + Business + Customers + Customers’ Customers) can quantify that the investment in ‘Quality’ adds value to the product?
    In my experience, qualifying this value requires synchronization towards a goal.
    As a 13-year LoadRunner and 10-year QTP engineer (on both pre-production and production), I have seen a focus on reducing operating expenses (use freeware, for example): get it into production, and we will let the customer QA it.
    Quality throughout the life-cycle of the product doesn’t magically get injected by any tool (not yet), and maybe no testing is the best way to produce products?
    Maybe we are blurring the lines between the implementation and the interface?
    Are there products out there of high quality that we can agree on, and work the problem backwards?

    Cheers/andrew

  6. August860 September 10, 2013 at 11:57 pm

    Kelly’s Law (borrowed from Wikipedia):

    “K∝1/A (Knowledge is inversely proportional to apparent authority.)

    As an individual rises in a hierarchy, be it business, government or any human institution, the person tends to gain more and more power to do things, but less and less knowledge of what to do, or in extreme cases of the need to do anything.

    As a result those at the top monopolize power, but have no idea of how to use it; those near the bottom know that things are going wrong, and often have a good idea of what needs to be done, but have no power to do it.”

Comments are closed.