In this post I will describe the most notable activities needed to create a successful LoadRunner script.
As an example, we will refer to a simple webshop with search functionality: customers can search for products by typing in a product’s name, select the appropriate product from the search results, navigate to the product details page and then check out.
1. Choose the protocol-specific test type in LoadRunner Virtual User Generator (VuGen) that matches the application under test. This selection activates a protocol-specific toolset in VuGen, which enhances productivity.
2. For certain protocols (e.g. Web and Web Services), LoadRunner VuGen eases script development by capturing the network traffic between the client and server parts of the test system and generating script code accordingly.
How does this work?
- When testing a website, the browser is started and the URL of the application under test is called.
- As you perform the test manually via the browser, the network traffic is recorded and analyzed by VuGen. (Note: you do not need to change the configuration of the browser, e.g. proxy settings.)
- The recorded network traffic is then converted into script code in C syntax or in the Java programming language.
- All the actions carried out in the script make up one user iteration. Simply speaking: the expected load is achieved by repeating this user iteration during test execution.
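To give an idea of what the generator produces, here is a minimal sketch of a recorded Web (HTTP/HTML) action. It is not literal VuGen output; the URL, step names and form field are invented for our webshop example:

```c
Action()
{
    // Open the webshop homepage (URL is a placeholder for this example)
    web_url("homepage",
        "URL=http://webshop.example.com/",
        "Resource=0",
        "RecContentType=text/html",
        "Mode=HTML",
        LAST);

    // Submit the search form with a hard-coded term,
    // exactly as it was typed during recording
    web_submit_data("search",
        "Action=http://webshop.example.com/search",
        "Method=POST",
        "RecContentType=text/html",
        "Mode=HTML",
        ITEMDATA,
        "Name=q", "Value=notebook", ENDITEM,
        LAST);

    return 0;
}
```

Each run of Action() is one user iteration; notice that the search term is still a hard-coded literal at this point, which is exactly what the customization step below addresses.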
3. The recorded script in most cases needs to be customized. This usually comprises the following actions:
- Eliminating irrelevant network traffic, e.g. calls to third-party systems like Google Analytics.
- Setting up transaction boundaries to slice the script into logical units (e.g. you may want to see the metrics of the steps “visiting homepage”, “search for product”, “product details” and “checkout” separately).
- Breaking up the script into multiple functions: as the whole recording is generated into one function, breaking it up into separate functions is required, especially if you have recorded a long, multi-step scenario. Following our webshop example, it would be pragmatic to decompose the scenario into functions like homepage(), search(), product_detail() and checkout().
- Parametrizing the script. Test data (such as user accounts, product IDs, etc.) can be added to the script (LoadRunner provides an editor to configure your test data and its behavior). Placeholders are then put into the request commands (e.g. replacing URL parameters). In our webshop example the search terms could be stored in a parameter list, from which one is picked randomly (or in sequential order if needed) during each user iteration and substituted into the search field on submission.
- Correlation of requests to maintain a consistent user session. In our webshop example we might start by querying products by their names, but in subsequent steps (like the product details page) we have to use the product IDs in the URLs. We would not want to maintain a redundant product name and ID coupling in our test data. This means the best approach is to extract the product ID from the search result’s HTML response (practically from the HTML link URLs). This is difficult and prone to error. LoadRunner unfortunately does not support regular expression patterns for searching the response data; it does support the extraction of literals bounded by given strings (left and right boundaries). This is sufficient for most cases, so no additional extraction logic needs to be implemented. If the HTML markup is written in a suboptimal way, you will have to talk to the development team about the issue, e.g. requesting a more easily digestible markup format.
- Validation of responses: the script should check that the response matches the expected state. Errors in the recorded script (e.g. malformed URLs) or test data issues (e.g. type problems, or referencing a non-existing business entity) can easily affect the performance metrics and therefore produce distorted results.
- During development of the script the author can debug and check the responses. However, this is not an option when the script is executed during a load test. Moreover, application errors and potential bugs may only appear under concurrency (thread-safety problems, memory consumption problems, overload, etc.). This makes it critical that the script validates as many responses as possible. The goal is not just to report errors (with a description of the problem) in the error log, but also to represent them in the results as failed transactions. Transactions with failed status are evaluated separately from the passed ones, so the metrics of the passed results remain intact, and the number of failed transactions is important during analysis. The basic error detection for the Web protocol is to check the HTTP response code: LoadRunner automatically detects if the HTTP response is an internal server error (HTTP 500) and fails the transaction. Most web applications, however, display customer-friendly error messages, either for user errors (field validation issues) or technical errors (problems with the infrastructure). The script therefore has to look for these error patterns in the response. You can search for the appropriate patterns with the same toolset you use for correlation (see above), and fail the transaction if something is considered incorrect.
- Add think time: real user requests do not come out of a machine gun (users pause to read the content of a web page, type data into forms, etc.), so LoadRunner Virtual Users need to keep realistic timings. Not considering this in your model will lead to incorrect results. The Web protocol recording by default generates pauses into the code, so all you have to do is review these and adjust them as required. I shall talk about “pacing” of LoadRunner scripts in a later post.
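Several of the customization steps above can be illustrated in one place. The following sketch shows a parametrized, correlated and validated version of the search and product-details steps; the URLs, the parameter list name "searchTerm", the boundary strings and the error text are assumptions for our webshop example, not literal output of any real application:

```c
search()
{
    // Validation: fail the transaction if the application's
    // customer-friendly error page appears in the response
    web_reg_find("Text=Sorry, something went wrong",
                 "Fail=Found",
                 LAST);

    // Correlation: save the first literal found between the left
    // boundary "/product/" and the right boundary "\"" into {prodId}
    web_reg_save_param("prodId",
                       "LB=/product/",
                       "RB=\"",
                       "Ord=1",
                       "NotFound=ERROR",
                       LAST);

    // Transaction boundaries around the logical step
    lr_start_transaction("search");
    web_submit_data("search",
        "Action=http://webshop.example.com/search",
        "Method=POST",
        "Mode=HTML",
        ITEMDATA,
        "Name=q", "Value={searchTerm}", ENDITEM,  // parametrized term
        LAST);
    lr_end_transaction("search", LR_AUTO);

    lr_think_time(8);  // the user reads the result list

    return 0;
}

product_detail()
{
    lr_start_transaction("product_detail");
    // Use the correlated product ID in the details URL
    web_url("product_detail",
        "URL=http://webshop.example.com/product/{prodId}",
        "Mode=HTML",
        LAST);
    lr_end_transaction("product_detail", LR_AUTO);

    return 0;
}
```

Note how web_reg_find and web_reg_save_param are registered before the request they apply to, and how LR_AUTO lets LoadRunner decide the transaction status based on the registered checks.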
4. Set the runtime configuration to adjust how the script is executed:
- Logging: the level of logging detail can be set, or logging can be turned off entirely.
- Pacing: defines how LoadRunner should start the iterations of the script. As this is a topic of its own, you can read about it in more detail in an upcoming post.
- Additional attributes: the ability to configure key-value pairs in the runtime settings. Here you set items such as host names, flags or parameters which govern the execution of your script logic. Unfortunately it is not possible to override these parameters from the Performance Center controller.
- Network bandwidth emulation: models the user’s maximum available network bandwidth towards the system under test.
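Additional attributes can then be read from inside the script at runtime. A brief sketch, where the attribute name "host" is an assumption for this example:

```c
Action()
{
    // Read the "host" additional attribute from the runtime settings;
    // lr_get_attrib_string returns NULL if the attribute is not defined
    char *host = lr_get_attrib_string("host");

    if (host == NULL) {
        lr_error_message("Missing runtime attribute: host");
        return -1;
    }

    lr_output_message("Running against host: %s", host);
    return 0;
}
```

Keeping such environment-specific values out of the script body makes the same script usable against different test environments.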
5. Replay: execute the script to ensure it runs correctly. The LoadRunner VuGen user interface helps you with the following features:
- Step-by-step script debugging / using breakpoints
- Checking parameter values during execution
- Monitoring the execution log. You can also increase the log’s granularity by setting the Extended Log options (to track data returned by the server and to see parameter values during substitution)
- Test result visualization (View/Test Results shows the response data, e.g. the rendered web pages, including passed and failed validations)
- Tree view: this breaks the script down to a functional level and shows the network traffic for each step (and displays the rendered web pages). It also offers a side-by-side view to compare the response data received during the recorded session and during playback.
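While debugging a specific step, the extended log can also be switched on temporarily from the script itself rather than globally in the runtime settings. A sketch, using the same example product-details URL as before:

```c
    // Temporarily raise log granularity around the step under investigation
    lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG
                         | LR_MSG_CLASS_RESULT_DATA
                         | LR_MSG_CLASS_PARAMETERS,
                         LR_SWITCH_ON);

    web_url("product_detail",
        "URL=http://webshop.example.com/product/{prodId}",  // example URL
        "Mode=HTML",
        LAST);

    // Switch the extra logging off again to keep the log readable
    lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG
                         | LR_MSG_CLASS_RESULT_DATA
                         | LR_MSG_CLASS_PARAMETERS,
                         LR_SWITCH_OFF);
```

This keeps the log compact while still capturing server responses and parameter substitutions for the step you are investigating.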
6. Upload: the script can now be uploaded into Performance Center. This step ensures the script is shared across the team and is ready for execution in a load test.