Performance Anxiety


By Qual IT | 23 April 2013

Athletes were not the only ones facing performance anxiety at the 2012 London Olympics. The providers of the event’s website, these days almost as important as the television coverage itself, had to be able to cope with up to one million unique visitors per hour.

“Performance testing”, i.e. determining how stable and responsive a computer system will be under certain loads, was an essential part of planning for the Olympics website. Not all of us have to cope with such Olympian challenges, but performance testing still has a role in many information technology projects.

This is well understood by business people and IT professionals alike. The real question is why we still see spectacular system failures at regular intervals.

Why performance test at all?

First, we have to ask why we need to performance test at all. The most compelling and obvious reason is the need to provide a good user (and often customer) experience.

If a system slows down or even crashes, there can be a significant impact on customer satisfaction, which usually damages your brand’s reputation and ultimately results in lost revenue and additional costs. Even if your customers are not interacting directly with a system, the impact of poor performance on sales or customer support can be devastating.

Consequently it is not that hard to build a business case for performance testing a significant system.

The reasons systems fail are also usually relatively apparent. Servers may lack the requisite memory or grunt to store and process a high volume of transactions. A network may be too slow to cope with a large volume of traffic. Or perhaps a database simply isn’t able to scale to meet huge numbers of calls on it.

So why do we see these failures at regular intervals? It is for the same reason some athletes fail at the Olympics – they have done the preparation but have not accurately anticipated the conditions that will prevail in real life.

When to use it?

Performance testing is laborious and complex, and therefore requires investment. Not many organisations need to, or can afford to, use it for all systems.

Although your appetite for risk may vary, a good rule of thumb is to estimate what the impact on your organisation would be if the system was completely unavailable for 30-60 minutes. Calculating those losses (and the associated impacts) will make clear what the return on an investment in performance testing might be.

For example, Qual IT recently completed a project for a company that had sold a software-as-a-service application to a large US-based company. The system was going to cater for a much larger market, and the new customer wanted to understand the potential impact on page response times and transaction processing times, and at what point the system might “break”.

In that case, the return on investment in performance testing was relatively easy to demonstrate.

How is it done?

There are a few questions to think about at the start of any performance testing project:

  • Clearly identify what is actually in the scope of the testing: which core systems, dependent systems, interfaces and so on are included. It’s also useful to have a good understanding of the differences between the test and live environments, to help analyse the results and mitigate risks.
  • Understand the number of different user interfaces involved and how many users will be using the system simultaneously at any one time.
  • In the test environment, can you accurately reproduce the mix of hardware and software (e.g. servers, operating systems and network appliances) that will be used once the system goes live? Small differences can have major impacts on performance.
  • What are the most common paths users will take through the system, and are there areas that present unacceptable risk to your project? Identify these areas early so you can be sure they are covered in your performance tests.
  • Are there any back-end batch processes that execute while users are accessing the system and therefore need to be factored into the tests?
  • What sort of load scenarios do you want to test: just normal load, peak load, or a mix of both? (A simple way of describing such scenarios is sketched after this list.)
  • Confirm whether other systems are likely to be impacted by the performance testing and take steps to mitigate any risks associated with running it. This often means executing the tests after hours, or with a slow ramp-up of load so issues can be identified early.
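
One way to pin down the last two points, before committing to any particular tool, is simply to write the load scenarios down as data. The sketch below (in Python) is illustrative only; the scenario names, user counts, ramp-up times and durations are invented for the example and would come from your own volume modelling.

    # Illustrative only: describing load scenarios and ramp-up as simple data,
    # before any tooling decision is made. All figures are invented for the example.
    from dataclasses import dataclass

    @dataclass
    class LoadScenario:
        name: str               # label used in test reports
        virtual_users: int      # simulated concurrent users at full load
        ramp_up_minutes: int    # how gradually load is applied, so issues surface early
        duration_minutes: int   # how long full load is sustained

    SCENARIOS = [
        LoadScenario("normal business day", 200, ramp_up_minutes=10, duration_minutes=60),
        LoadScenario("campaign peak", 1000, ramp_up_minutes=30, duration_minutes=120),
        LoadScenario("stress to breaking point", 5000, ramp_up_minutes=60, duration_minutes=30),
    ]

    for s in SCENARIOS:
        print(f"{s.name}: {s.virtual_users} users, ramp up {s.ramp_up_minutes} min, hold {s.duration_minutes} min")

Writing scenarios down in this way also makes it easy to review them with the business before any test is run.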

This upfront planning is critical to successful performance testing, as is a willingness to adjust your approach as you go.

The approach to performance testing is relatively consistent across testing practitioners. Expected volumes are modelled from statistics gathered during the normal operation of the system, or estimated from the outputs expected once the system goes live. You need to understand these statistics to build the models appropriately. It is easy to get this wrong and not stress the system enough or, sometimes worse, to overstress the system well above the actual production load.
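
By way of illustration only, the calculation behind such a model can be sketched in a few lines: turning raw request timestamps from a production log into the average and peak hourly volumes the load model is then built around. The log format assumed below (an ISO timestamp in the first field of each line) and the file name are placeholders for the example.

    # Illustrative sketch: derive average and peak hourly request volumes
    # from production log timestamps. The log format is an assumption for the example.
    from collections import Counter
    from datetime import datetime

    def hourly_volumes(log_path):
        per_hour = Counter()
        with open(log_path) as log:
            for line in log:
                timestamp = line.split()[0]        # e.g. "2013-04-23T14:05:31"
                hour = datetime.fromisoformat(timestamp).replace(minute=0, second=0, microsecond=0)
                per_hour[hour] += 1
        return per_hour

    volumes = hourly_volumes("access.log")
    average = sum(volumes.values()) / len(volumes)
    peak = max(volumes.values())
    print(f"average {average:.0f} requests/hour, peak {peak} requests/hour")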

Initial assumptions about statistics can easily be wrong, as unanticipated events put additional stress on the system. Getting in early and monitoring all the way through development and into production means you can adjust your approach.

Once you have the model, you can create a simulation to test the system under load at average and peak levels, over a variety of periods and with different populations of users. You may even want to test beyond peak levels until the system becomes unusable or breaks. A range of automated tools is available to drive these simulations.
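
Dedicated tools handle this properly, but the underlying idea can be shown in a toy sketch. The example below simply fires concurrent requests at a placeholder URL and reports response-time statistics; a real performance test would add ramp-up, realistic user journeys, pacing and much richer reporting. The URL, user count and request count are invented for the example.

    # Toy illustration of driving concurrent load and collecting response times.
    # Not a production load tool: real tools add ramp-up, user journeys and reporting.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor
    from statistics import mean, quantiles

    TARGET_URL = "http://test-environment.example.com/"   # placeholder target
    VIRTUAL_USERS = 50                                     # simulated concurrent users
    REQUESTS_PER_USER = 20

    def user_session(_):
        timings = []
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            with urllib.request.urlopen(TARGET_URL, timeout=30) as response:
                response.read()
            timings.append(time.perf_counter() - start)
        return timings

    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        all_timings = [t for session in pool.map(user_session, range(VIRTUAL_USERS)) for t in session]

    p95 = quantiles(all_timings, n=20)[-1]   # 95th percentile response time
    print(f"mean response {mean(all_timings):.3f}s, 95th percentile {p95:.3f}s")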

Independence has value

Automated tools are essential to performance testing, but tool vendors can overstate what their products can achieve and underestimate the value of technical knowledge and skills.

Securing an independent view is important in your evaluation of automated tools for performance testing. Every tool is different and has different limitations, which you need to understand before applying it.

Having invested in expensive tools, organisations can become blind to these limitations and apply the tools ineffectively. Being tool ‘agnostic’ is important when assessing what works best for a specific project.

Not only do you need the ability to develop a strategy from an independent viewpoint, you also need to be able to secure resources to implement whatever technology approach you choose, and to train staff to use it.

There is also value in having someone who can come in and ask fresh questions, look at your systems from all angles and explore things you may not have anticipated. External assistance can range from simple mentoring and advice through to a complete end-to-end service. It can give your organisation a broader view, as well as developing internal capability for future performance testing.

It’s all about experience

Performance testing is a specialist endeavour, and good performance testers have broad experience of different types of systems and situations.

Preparing and running tests is relatively straightforward. The real challenge lies in setting up a realistic test and then interpreting the statistics that come out of it, so you really understand what is happening. It is the ability to build this model and to understand the statistical data that makes the difference to the success of performance testing.

Building your baseline statistics from the production environment, and discussing those with the customer, is essential. They are the experts on what is likely to happen.

Another key is working with all of the different people in the process, from business analysts to programmers to functional testers, to understand how the system was developed, why, and what it is designed to deliver. Unless you understand that well it is hard to interpret test results.

Performance tests only provide a picture of the current status of a system; you need to have a broad view to understand how changes might impact on different aspects of the system. Like any modern system development the performance testing challenge is an agile one.

Avoid performance anxiety

Like the athletes who succeeded at the Games, the Olympic website was well prepared: extensive simulation of the real-world environment was undertaken, and ongoing performance was monitored.

And it’s likely you heard little ‘noise’ about the performance of the website. That’s because it all went smoothly. Just the kind of silence any good performance tester loves.