It's important to set the expectation, once performance testing is complete, that performance-related issues may still arise after the system has gone live. How easy or difficult this is depends on the client's level of understanding. Performance-related issues are often high impact, so when one occurs you can find yourself answering to roles well outside the reporting line you normally work within.
Here's the way I attempt to explain it to a QA manager and the associated layers:
Every company goes live with known functional defects (not P1s or P2s). They will also find functional issues they don't know about when the system goes live, which are then hot-fixed. This is a fact of life, and it is generally accepted because there simply isn't enough time to find and fix every single defect. The performance testing phase should be viewed in a similar light – except that when a performance defect does occur, it is immediately more visible and its impact is significantly higher.
To further complicate matters, there are a high number of external factors that can affect the performance of the live system but generally will not cause functional issues:
- OS-level configurations *
- DB-level configurations *
- Application configurations, e.g. Apache *
- Functional defects combining to create a locking issue, which has a knock-on effect on performance *
- Malicious attacks e.g. Denial of Service, Penetration
- Unexpected load / peak traffic that was not included in the workload model
- Services/components falling over
- Performance testing against a smaller physical environment than live (see Risks of Load Testing in a Scaled Environment)
- Innocent hot-fixes applied to the build
- Unusual load applied to the system from Batch jobs (external/internal)
- New technology introduced that fails on an edge case
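One of the factors above – unexpected peak traffic outside the workload model – can be made concrete with a simple check: compare observed live traffic against the highest rate the performance tests actually exercised. This is a minimal sketch; the figures, function names and headroom parameter are illustrative assumptions, not part of any real tool.

```python
# Sketch: flag live traffic samples that fall outside the tested envelope.
# TESTED_PEAK_RPS and the sample data below are illustrative assumptions.

TESTED_PEAK_RPS = 500  # highest request rate exercised during performance testing


def untested_load(observed_rps, tested_peak=TESTED_PEAK_RPS, headroom=1.0):
    """Return the observed samples that exceed the tested peak.

    headroom > 1.0 allows a tolerance above the tested peak before flagging.
    """
    limit = tested_peak * headroom
    return [rps for rps in observed_rps if rps > limit]


# Example: live samples taken at one-minute intervals (requests per second).
samples = [320, 480, 510, 650, 495]
print(untested_load(samples))  # → [510, 650] – rates the tests never covered
```

Anything the check flags is load the test results simply say nothing about, which is exactly the gap this list is describing.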
Add these together and you have an almost infinite number of permutations that can affect performance. This shouldn't be used as an excuse to shield performance testing or to cover a poor approach – if there is a live issue, the performance tester should be able to clearly show what has been performance tested, what hasn't (resilience, failover, DR) and any associated risks that were signed off.
Good performance testing will give a high level of confidence that the system can go live with the anticipated load. What definitely shouldn't happen is obvious performance issues appearing under normal load.
- Performance testing will give a high level of confidence the system can go live with the anticipated load
- Performance testing cannot guarantee there will be no performance related issues
- There are a high number of external factors that influence performance of the live system
See also – what to do when live issues occur.
Note: Configuration issues (*) can be largely mitigated, but this depends on the maturity of the build and deployment process. Performance testing every functional test case isn't possible – but this can also be mitigated by generating load and letting the functional test team run exploratory testing against the loaded system.
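Generating background load for exploratory testing doesn't need a full tool. A few worker threads pacing requests at a target URL is often enough. The sketch below assumes a plain HTTP endpoint; the URL, user count and pacing are illustrative, and a real load tool would add logging and think-time variation.

```python
# Sketch: minimal background load so the functional team can explore the
# system while it is under load. URL and parameters are illustrative.
import threading
import time
import urllib.request


def worker(url, stop, pacing_s):
    """Issue paced requests against url until the stop event is set."""
    while not stop.is_set():
        try:
            urllib.request.urlopen(url, timeout=5).read()
        except Exception:
            pass  # keep generating load; a real tool would log the error
        time.sleep(pacing_s)


def generate_load(url, users=10, pacing_s=1.0, duration_s=60):
    """Run `users` concurrent workers against url for duration_s seconds."""
    stop = threading.Event()
    threads = [
        threading.Thread(target=worker, args=(url, stop, pacing_s))
        for _ in range(users)
    ]
    for t in threads:
        t.start()
    time.sleep(duration_s)
    stop.set()
    for t in threads:
        t.join()


# Example (hypothetical test-environment endpoint):
# generate_load("http://test-env.example/health", users=20, duration_s=600)
```

While this runs, the functional team explores the application by hand, which surfaces the load-sensitive functional defects that scripted performance scenarios alone would miss.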