I’m going to write about something that has been increasingly concerning me. The current method of performance testing – capture network-level traffic, inspect it, correlate it, and then replay it – is becoming more difficult. In the good old days we had simple GET and POST requests. Then Ajax and Web 2.0 came along, making things a little trickier. I’m now seeing a lot of requests being generated dynamically within JavaScript – and it’s near impossible to understand the JS code and replicate its logic in whatever performance tool you happen to use. We also have CDNs and magic boxes (e.g. Strangeloop) increasingly sandwiched between client and server – what was a statically named resource can become dynamic from one moment to the next. Do we want to test the CDNs or not? (No, in the majority of cases.)
I see performance engineers having to compromise on the traffic they replicate with their tools. You can argue this has always been the case, but I would say the approximation is diverging further and further from the actual traffic.
Performance Engineering is at heart a development activity – inspecting the logic of others, decomposing it, simplifying it and then replicating it. Good performance engineers also understand key business flows, architectures, risk and the metrics produced. But as browser logic becomes more complex, the traditional approach is becoming more problematic.
HTML5, and specifically items such as WebSockets (push events*), will introduce a whole new level of complexity. The advanced features of SPDY, if implemented, will also cause a headache… and I think this is just the beginning. HTML clients are going to get more and more sophisticated – they are becoming fully-blown apps. This ultimately means that the simulated performance behavior will become harder to produce and will diverge increasingly from the actual behavior of the system. If performance engineers want to mimic the behavior accurately (and capture issues), they will have to replicate the logic implemented in the client. That isn’t sustainable as a way forward. Performance testing in this form is climbing an ever-steeper mountain, and eventually it’s going to hit a wall.
So what is the answer? I’ll attempt to outline a potential solution, because it’s never great to highlight a problem without suggesting a possible way out. HP’s TruClient and the load testing solution LoadStorm, while blunt, are potentially sustainable scripting solutions for the future. They have issues (memory footprint and filtering of hosts being the two major ones), but I think a combination of the two approaches could overcome them. Small proxy agents filtering requests in front of a group of client browsers would resolve the host-filtering issue (and provide more accurate measurements). Piping all TCP/IP traffic generated in the cloud directly through the firewall could leverage the benefits of the elastic cloud (memory and CPU) for testing required inside the firewall. Complex, but solvable, even with the latency issues.
So the future? Traditional scripting becomes harder and more specialized. Larger entrenched companies will need to keep hold of their performance engineers as more knowledge becomes locked up within them. Some companies with switched-on engineers will use the traditional approach and target risk points whilst accepting its limitations (e.g. Facebook). As the limitations of the traditional approach become more apparent (time to develop, accuracy of simulated behavior), someone will refine and merge the two very different approaches that currently exist into a workable solution for the lion’s share of the market**. The newer methods will become more prevalent, easier to leverage and more widely accepted as companies move from internal environments to PaaS/cloud environments. Performance testing isn’t dead – it’s simply going to get harder, and then change.
*Server-sent events will come in one form or another – and the messages sent to and from the browser using this method of communication will become increasingly complex (encoded, compressed and driven by user actions) as the sophistication of the browser app increases.
**Companies such as SOASTA could leverage their existing offerings into this type of solution.