HP Ajax TruClient – A First Glance Review
I’ve just seen a video of HP TruClient. Wow! I’m not a fan of HP – but this looks like an impressive piece of software engineering. I’m pleasantly surprised that HP has given an internal team the room to attempt something so innovative (for HP*). I’m going to pull a few threads together and attempt an educated guess at what is actually happening:
The Addressable Performance Problem:
Client browsers are getting more sophisticated; they are becoming operating systems in their own right. The traffic between client and server is no longer simple: HTTP requests are evaluated and made dynamically, and I’m increasingly seeing client-side requests determined by complicated JS code within the browser. The result is a headache for the performance engineer – and in turn I’m seeing performance engineers compromising and approximating this traffic rather than truly replicating it. This all means scripting becomes complicated, long-winded, prone to error and not reflective of true behavior… and, from what I can see, it is only going to get more complicated (the SPDY protocol is one example).
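To make the scripting headache concrete, here is a minimal sketch of the kind of work a protocol-level script forces on the engineer. Everything in it – the endpoint, the parameter names, the signing scheme – is invented for illustration; the point is that logic the application runs in client-side JavaScript has to be reverse-engineered and re-implemented by hand in the test script.

```python
import hashlib
import time

def build_search_request(session_token: str, query: str) -> str:
    """Hand-written re-implementation of hypothetical request-signing
    logic that the real application performs in client-side JS."""
    timestamp = str(int(time.time()))
    # The client signs each request with a hash of token + query + timestamp;
    # the scripter has to dig this out of minified JavaScript and keep it
    # in sync every time the application changes.
    signature = hashlib.md5(
        (session_token + query + timestamp).encode()
    ).hexdigest()
    return f"/api/search?q={query}&ts={timestamp}&sig={signature}"

url = build_search_request("abc123", "widgets")
```

An object-level approach sidesteps all of this: the embedded browser executes the real JS, so nobody has to maintain a copy of it.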
HP TruClient effectively embeds an actual browser and uses object recognition to replicate the actions of a user. It looks like the TruClient technology records those actions and then generates traffic from them. I would compare this more to an automated test solution (think QTP, Selenium, Robot) than to a network-level protocol – I would describe it as an object-level protocol rather than a network-level one.
The Benefits:
- Scripting becomes easier, much faster and more faithful to actual user behavior
- The need for specialised Performance Engineers is greatly reduced (not great for PEs!)
Object-level recognition detaches scripting from the network-level protocols – jQuery requests, dynamically generated JS requests and CDN-level traffic no longer have to be thought about.
The Potential Pitfalls:
- Memory Footprint: It’s going to be much larger with this approach. If you are testing inside the firewall, I imagine you will need a lot more server horsepower.
- Scaling: This type of technology tends to have difficulty scaling above 5k virtual users… I would be interested to know how many VUs people have successfully scaled to, and with how many servers.
- Network Filtering: Being able to filter on specific hosts is desirable – if I scale to X thousand users, I want to be able to omit calls to Facebook, Google, tracking services and a plethora of other hosts. I’m not quite clear whether this is possible with this method.
- Transaction Timing of Specific Calls: Performance engineers can wrap timing devices around specific calls (e.g. Ajax) and measure with a fine degree of granularity – I can’t see how this is possible with this approach.
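For readers unfamiliar with the timing point above, here is a rough sketch of the kind of fine-grained, per-call timer a protocol-level script gives you (the transaction name and the simulated call are both placeholders; real tools expose this as start/end transaction markers):

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def transaction(name: str):
    # Record wall-clock time for one named call, the way protocol-level
    # scripts bracket an individual request with start/end markers.
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

# Time a single (simulated) Ajax call in isolation.
with transaction("ajax_autocomplete"):
    time.sleep(0.05)  # stand-in for issuing the actual HTTP request
```

With an object-level recorder, the unit of measurement is the user action, not the individual request – which is exactly why this granularity is hard to recover.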
Footprint and filtering have workable solutions (I can think of a few), so with a little more effort these can be worked around.
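On the filtering side, one workable solution is a host block list applied before any request leaves the load generator. A minimal sketch (the blocked host names are placeholders, and I am assuming the tool exposes a hook where such a predicate can be applied):

```python
from urllib.parse import urlparse

# Hypothetical block list of third-party hosts to omit under load.
BLOCKED_HOSTS = {"facebook.com", "google-analytics.com", "doubleclick.net"}

def should_send(url: str) -> bool:
    # Drop any request whose host is a blocked host or a subdomain of one.
    host = urlparse(url).hostname or ""
    return not any(
        host == blocked or host.endswith("." + blocked)
        for blocked in BLOCKED_HOSTS
    )
```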
To me, HP Ajax TruClient doesn’t look like a natural fit within Performance Center; it feels a little out of place. However, this is an impressive attempt to address two of the biggest issues in the performance testing space: the dynamic nature of requests made by increasingly fat and complex browser clients, and the increasingly complex scripting effort. Given the pitfalls, I suspect this is better suited to a cloud-type environment, where the elastic nature of computing resources can be leveraged to overcome them. I think the overall concept is great for future-proofing as the limitations of the current methods become more apparent (more on this in another article).
I also hope HP continues to recognize and leverage the talent within the organization and continues to create new products and compete – rather than go on unsustainable shopping sprees, buying companies at a much higher premium than necessary.
See Also: Swaraj Gupta’s review of Ajax TruClient. He’s a little more enthusiastic about the technology than I am, and has a more in-depth review of the features.
*Check out LoadStorm.