Managing, identifying & assessing Performance Requirements

The aim of this post is to outline how to determine and prioritise the key performance requirements within a project.  I’ve already covered how important it is to have good performance requirements.  These are the items that drive and determine the quality of the performance testing – but how do we best manage, identify and assess them?

Managing Performance Requirements: 

Let’s take a step back first – I’ve often found that the person best placed to define the performance requirements is the performance tester, rather than the business analyst or the stakeholders.  Why? For a number of reasons – the main ones being time and accuracy.  Here’s a typical conversation:

PT: What is the load you want me to replicate on the system?

BA: 1000 users

PT: 1000 users doing what?

BA: 1000 users doing actions A,B,C,……X,Y and Z

PT: That’s too many – which are the most critical? (Discussion ….)

….. BA goes to find out and confirm

PT: And at what transaction rate?

BA: All at the same time.

PT: No, (explains transaction rate) – can you tell me the rate of actions x,y and z over an hour … peak and normal?

BA: No – I have to get back to you ……

….. two days later

BA: Such and Such says we need to simulate  X,Y and Z at this rate …

PT: Can you explain to me what X does, precisely?

BA: I’m not quite sure, I will have to find that out from such and such

The above discussion is hypothetical but not atypical of the conversations I have.  What it illustrates is the time lag and the third-party knowledge involved in gathering this information.

If I leave a BA or any other stakeholder to write a document containing performance requirements and collate all the statistics, it is likely to be inaccurate, imprecise and structured incorrectly, and to take a long time to arrive.  The fundamental issue is that performance testing is specialised: the Performance Analyst knows exactly what is required and how they want it delivered, but the BA doesn’t.  So I take control – I talk to the BA, I write up a summary document and I ask them for the contacts I need to speak to (notice how many times I used ‘I’).  This means I can get a more precise set of requirements directly from the source, and much faster. It also means I can build up a list of valuable contacts – so when I have a query I can bypass unnecessary layers – and I can start building an understanding of the business requirements and validating the performance requirements. Taking control of the document and engaging directly with the stakeholders is a key aspect.

Identifying Performance Requirements 

So how do we identify a good performance requirement? I had a small team of performance testers on a client site – we had 140 developers in 10 Scrum teams delivering into a single product release (once a month).  Each of these teams had a ‘Technical PM’, each with varying amounts of technical ability. Here is an example of some of the requests:

  1. We have changed an icon on this screen – we want you to performance test it (honestly, I’m not kidding)
  2. We have a new drop down box on this screen – please performance test
  3. ….etc

Some TPMs requested a performance test for every new piece of functionality.  So I sent out this set of guidelines and questions to filter the requests entering my team:

  1. Is the new functionality introducing significant architectural change?
  2. What is the expected transaction rate per hour for the changed functionality?
  3. How business critical is the new change?
  4. Can this be performance tested in isolation or do we require the whole system built together to performance test?

Using the above guidelines we could quickly identify changes entering the system that carried performance risk. We could then prioritize the requirements that needed further investigation and warranted performance testing.
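The four questions above can be captured as a simple triage routine. The following is a minimal sketch – the function, field names and threshold are my own illustrative assumptions, not the actual process used on the client site:

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    """Answers a Technical PM supplies for a proposed change (hypothetical fields)."""
    description: str
    architectural_change: bool   # Q1: significant architectural change?
    transactions_per_hour: int   # Q2: expected transaction rate
    business_critical: bool      # Q3: how business critical is the change?
    testable_in_isolation: bool  # Q4: can it be tested without the whole system?

def needs_performance_team(req: ChangeRequest, rate_threshold: int = 1000) -> bool:
    """Route a change to the performance team only when it carries real risk.

    Cosmetic changes (a new icon, a new drop-down) fail every check and are
    filtered out; isolated, low-rate changes stay with the Scrum team.
    """
    if req.architectural_change:
        return True
    if req.business_critical and req.transactions_per_hour >= rate_threshold:
        return True
    return not req.testable_in_isolation

# A cosmetic change is filtered out; an architecturally significant one is not.
icon = ChangeRequest("changed an icon", False, 10, False, True)
checkout = ChangeRequest("new checkout flow", True, 5000, True, False)
```

The point of the sketch is that most requests die at the first two questions, which is exactly the filtering effect the guidelines had in practice.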

Assessing Performance Requirements

When assessing performance requirements I prioritize according to the following broad categories:

  • Business Criticality: How business critical is the flow to be executed?
  • Frequency: How frequently is the flow enacted over a typical period?
  • Architecturally: How complex is the flow ‘under the bonnet’?
  • Isolation: Can a developer test this in isolation, without the whole system?

It’s worth saying a little about each of these:

Business Criticality: Always identify the items that are critical and key to the business – but just because something is business critical doesn’t mean it needs to be performance scripted.  If a critical business flow is not performance tested, we should be able to evidence that it was considered for testing and show the reasons it wasn’t tested.

Frequency: How frequently are the actions enacted? If a flow is enacted only a small number of times, we can consider testing it manually while generating load on the system. Sometimes manual testing is faster, more convenient and easier.

Architecturally: Talk to the developers, architects and DBAs about how complex a flow is under the bonnet.  There have been numerous occasions when an item has been de-risked because it is similar to another piece of functionality, or because there is no perceived technical risk.  This is the category most often overlooked.  An analogy is the indicator on a car – it is critical, but mechanically simpler than turning the ignition. Assessing things architecturally enables more intelligent and targeted performance testing.

Isolation: Can a developer test this in isolation, without the need for specialist performance tools? For example, can a JUnit test be run in parallel before the change enters performance testing? Where possible, performance testing should take place before the system integration phase, and by developers.  This significantly reduces project risk.  Everyone is responsible for performance testing; guidelines can be given where required, and the performance team can sign off.
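To make the isolation idea concrete, here is a sketch of the kind of developer-runnable check meant here – firing a code path from multiple threads and summarising latency using only the standard library, no specialist tooling. The `operation` stub and the worker/call counts are placeholders for whatever the team is actually building:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def operation() -> None:
    """Stand-in for the code path under test (an assumption for illustration)."""
    time.sleep(0.01)  # pretend work

def timed_call(_: int) -> float:
    """Time a single invocation of the operation."""
    start = time.perf_counter()
    operation()
    return time.perf_counter() - start

def run_parallel(workers: int = 10, calls: int = 50) -> dict:
    """Fire `calls` invocations across `workers` threads; summarise latencies."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(timed_call, range(calls)))
    return {"p50": statistics.median(latencies), "max": max(latencies)}

results = run_parallel()
```

A check like this can run on every commit, long before the full system exists – which is precisely why it reduces project risk.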

By taking a combination of these factors, you can begin to prioritize which performance requirements are going to be targeted and delivered within a build.
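One way to combine the four categories is a simple weighted score per flow. The weights and scores below are illustrative assumptions only – the post deliberately doesn’t prescribe numbers – but the shape shows how architectural complexity can be kept from being overlooked:

```python
# Each flow is scored 1-5 against the four categories.
# Weights are illustrative; note architecture is weighted close to criticality
# precisely because it is the category most often overlooked.
WEIGHTS = {
    "criticality": 0.35,
    "frequency": 0.25,
    "architecture": 0.30,
    "isolation": 0.10,  # high score = hard to test in isolation
}

def priority(scores: dict) -> float:
    """Weighted priority; a higher value means test it sooner in the build."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

flows = {
    "login": {"criticality": 5, "frequency": 5, "architecture": 4, "isolation": 3},
    "icon tweak": {"criticality": 4, "frequency": 2, "architecture": 1, "isolation": 1},
}
ranked = sorted(flows, key=lambda name: priority(flows[name]), reverse=True)
```

The car-indicator analogy falls out of the numbers: the “icon tweak” may score well on criticality, but its low architectural score keeps it near the bottom of the queue.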

Key Takeaways: 

  • Taking control of gathering performance requirements enables the Performance Tester to quickly get a more accurate picture of what needs to be tested
  • Getting involved with performance requirements also helps you gain an understanding of the business, which means incoming requirements can be validated. Validation is more important than verification.
  • Always prioritize according to a combination of the 4 categories, don’t overlook the architectural complexity
  • Look for load test items that can be pushed earlier into the SDLC

See also:

Why Performance Can’t be Guaranteed   

What Makes a Good Performance Tester

The Core Performance Lifecycle Phase

The Performance Reporting Stage

Performance By Design – an Agile Approach

Note: It may seem a little odd that I haven’t talked about metrics and SLAs. Metrics naturally fall out when the performance requirements are being assessed – they are a by-product of the overall process.  SLAs are artificial and subjective: spending too much time attempting to define SLAs around metrics is wasteful. Report the metrics and then decide with stakeholders whether a product is fit for release.

One thought on “Managing, identifying & assessing Performance Requirements”

  1. Nice article. Normally I have seen that hardly anyone knows the business criticality of the functionality when you are developing it for the first time. If you have some version of the product already deployed live, then people can provide some estimates; otherwise everyone thinks each piece of functionality is critical, so they want it to be tested. The second reason I have seen is that release management wants to go by the rule book: they have a set of rules which they expect every product owner to follow – they want each release to have sign-off from the UAT/QA/PT etc. teams. Sometimes volumes are very low and the architecture is simple, yet they fear that environment complexity might impact performance, so they want it tested. In some cases dev teams don’t really know anything about performance, so they expect us to test it even if it has very, very low volumes. Yes, I have tested an application where the number of hits was around 100 per day and have found major issues (both functional and performance). So nowadays I consider what value I can bring to the table if I have the bandwidth; if I don’t have the bandwidth, I might ask for an extra hand. Your Isolation point is nice, I really like it – never thought about that before.

    I sometimes really think that push and pull nature which comes with scrum might really kill scrum.
