In my last post I gave a high-level introduction to how HP LoadRunner fits into the performance testing workflow. In this post I will go into more detail on implementing LoadRunner scripts.
The objective of the test implementation phase is to create a program which emulates the planned user activities. For example, in the case of a web application, the user visits pages of a site, interacts with them, waits between actions, and so on. This behaviour makes up the workload model (as described in the previous post).
The recorded script will perform requests on behalf of the user. For a web-based application, this broadly breaks down as follows:
1. Composition of HTTP requests (including the HTTP method, parameter data, cookies, etc.). You may want to execute your tests with varying data, such as selecting a number of different products in a webshop. To facilitate this, LoadRunner can read parameter values from a static data file (parameterization), or extract them from previous HTTP responses (correlation). Correlation is vital for maintaining a consistent user dialog session.
2. Sending the HTTP requests to the system under test and receiving the responses. LoadRunner features a number of commands for this. Some variants emulate protocol-level traffic (such as HTTP GET or POST requests), while others better describe the semantics of the performed user action (such as simulating a click on a web link or on an image). Both produce the same traffic, but they differ in readability.
3. Checking whether the received response is valid (validation). The script should check that the response matches the expected state of the system, e.g. that after adding a product to the shopping cart, the product actually appears in the cart rather than producing an error message. The attention you invest here is rewarded with much more precise test results, and it also helps to avoid errors further downstream.
4. Acting upon the response, e.g. continuing or choosing a different navigation path. This enables context-sensitive test cases.
5. Delays: Scripts should include “think times” or delays to emulate the timing of real users. This step is important but often disregarded; scenarios without adequate think time distort the load model considerably.
6. Timings: Measurements are needed to support analysis and evaluation. LoadRunner facilitates this with user transactions: the requests belonging to a given user activity can be grouped into one or more transactions. For example, if a user visits a page which triggers AJAX calls, these HTTP requests can be grouped and presented as a single operation during result evaluation. The user scenario then consists of a series of user transactions (as opposed to separate HTTP requests), which are easier to evaluate. Transactions can also contain sub-transactions; this aggregates the metrics of the sub-transactions while still allowing them to be analysed separately.
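The six activities above can be sketched in a single LoadRunner Action written in C. This is an illustrative fragment rather than a recorded script: the URLs, parameter names (`productId`, `SessionToken`, `CartFound`) and correlation boundaries are hypothetical, and the `web_*` and `lr_*` calls assume the standard Web (HTTP/HTML) protocol API, so the snippet only compiles inside VuGen.

```c
Action()
{
    /* 6. Timing: group the page visit and its requests into one transaction */
    lr_start_transaction("visit_product_page");

    /* 3. Validation: register a text check before sending the request;
          the check is applied to the next response received */
    web_reg_find("Text=Add to cart", LAST);

    /* 2. Correlation: capture a session token from the response
          (the left/right boundaries here are hypothetical) */
    web_reg_save_param("SessionToken",
                       "LB=name=\"token\" value=\"",
                       "RB=\"",
                       LAST);

    /* 1. Compose and send the request; {productId} is read from a
          parameter file configured in VuGen (parameterization) */
    web_url("product_page",
            "URL=http://shop.example.com/product/{productId}",
            "Resource=0",
            "Mode=HTML",
            LAST);

    /* 5. Think time: emulate the user reading the page */
    lr_think_time(8);

    /* 6. A sub-transaction within the same business operation */
    lr_start_sub_transaction("add_to_cart", "visit_product_page");

    /* 3./4. Save how often the expected text occurs so we can branch on it */
    web_reg_find("Text=Your cart", "SaveCount=CartFound", LAST);

    /* POST the captured token back; correlation keeps the session valid */
    web_submit_data("add_to_cart",
                    "Action=http://shop.example.com/cart/add",
                    "Method=POST",
                    "Mode=HTML",
                    ITEMDATA,
                    "Name=productId", "Value={productId}",    ENDITEM,
                    "Name=token",     "Value={SessionToken}", ENDITEM,
                    LAST);

    /* 4. Act upon the response: fail the transactions if the cart
          text was not found in the response */
    if (atoi(lr_eval_string("{CartFound}")) == 0) {
        lr_end_sub_transaction("add_to_cart", LR_FAIL);
        lr_end_transaction("visit_product_page", LR_FAIL);
        return 0;
    }

    lr_end_sub_transaction("add_to_cart", LR_AUTO);
    lr_end_transaction("visit_product_page", LR_AUTO);

    return 0;
}
```

Note how the `web_reg_*` registration calls always precede the request they apply to, and how the sub-transaction is reported under its parent while still being measurable on its own.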
In this post we covered the basic activities of a LoadRunner script. In the next post I will walk through the steps required to create one.